
Using Minio As Your Object Store with Arc

The last thing I want to do is have a bucket in my AWS S3 account solely for the sake of supporting development. And having an actual store to hit in integration tests without incurring fees (you do run load tests often, right?) is nice.

:book: guide :bookmark: elixir, arc, minio, s3, object store, guide, how to :clock3: about 4 minutes

Whilst working on twch.at, I caught myself about to open a new bucket in AWS S3. A lot of subtle warning alarms began going off. “Using production keys to access a bucket solely for development?” echoed at me from tmux. Then I remembered an object store with an API compatible with S3’s floating around: Minio. Object storage, in a nutshell, is a way of representing arbitrary forms of data¹. From their features page,

Minio is a distributed object storage server, written in Go and open sourced under Apache License Version 2.0.

All cool stuff. They also provide a ready-to-use Docker image, so you can do something like:

$ docker pull minio/minio
$ docker run -p 9000:9000 minio/minio server /data

Then visit http://localhost:9000 in your browser to see something like the following:

(Screenshot: the Minio browser UI — posts/minio-browser-twchat.png)

Getting Set Up

Ensure that you have both Arc and ExAws listed in your Mix dependencies.

def deps do
  [
    # Flexible object storage library.
    {:arc, "~> 0.8.0"},

    # Amazon AWS API wrapper.
    {:ex_aws, "~> 1.1"}
  ]
end
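
Depending on your versions, ex_aws also expects you to bring your own HTTP client, JSON codec, and XML parser; hackney, poison, and sweet_xml are the usual picks. If your build complains about missing modules, adding something along these lines alongside the two entries above should cover it (the version requirements here are illustrative, check hex.pm for what matches your ex_aws release):

    # HTTP client, JSON codec, and XML parser that ex_aws relies on.
    # (Version requirements are illustrative.)
    {:hackney, "~> 1.9"},
    {:poison, "~> 3.1"},
    {:sweet_xml, "~> 0.6"},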

The interesting part is getting it configured. Minio exposes an S3-compatible API, so when done correctly (it took me four tries), you can use the following:

config :arc,
  storage: Arc.Storage.S3,
  bucket: {:system, "MINIO_BUCKET"}

config :ex_aws,
  debug_requests: true,
  access_key_id: [{:system, "MINIO_ACCESS_KEY"}],
  secret_access_key: [{:system, "MINIO_ACCESS_SECRET"}],
  region: "local"

config :ex_aws, :s3,
  scheme: {:system, "MINIO_SCHEME"},
  region: "local",
  host: %{
    # Point the made-up "local" region at wherever Minio is listening.
    "local" => {:system, "MINIO_HOST"}
  }
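
With the container running and those variables exported (MINIO_ACCESS_KEY and MINIO_ACCESS_SECRET being whatever access key and secret your Minio instance reports, or was started with), you can sanity-check the wiring from iex before involving Arc at all. The bucket name below is a placeholder for whatever MINIO_BUCKET points at:

# Create the development bucket and confirm ex_aws can actually reach Minio.
# "twchat-dev" is a placeholder; use whatever MINIO_BUCKET points at.
ExAws.S3.put_bucket("twchat-dev", "local") |> ExAws.request()

ExAws.S3.list_buckets() |> ExAws.request()
#=> {:ok, %{body: ..., status_code: 200}}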

I tend to push out any value that can be changed and/or persisted elsewhere into an environment variable, as per the 12-Factor tenets. With all of that configured, if you now have an ‘uploader’, i.e. an Arc.Definition module, for your application like so:

defmodule MyApp.FileStorage.HTMLStuffing do
  use Arc.Definition

  @versions [:original]

  def validate({file, _}) do
    ~w(.htm .html) |> Enum.member?(Path.extname(file.file_name))
  end

  def storage_dir(_version, {_file, email_id}) do
    "static/html/#{email_id}"
  end
end
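
Storing something through that definition is just a call to the store/1 function Arc injects. The path, bucket name, and email id below are made up for illustration; anything with an .htm or .html extension gets past the validate/1 clause above:

# Store a local HTML file under static/html/<email_id>/ in the bucket.
# "/tmp/welcome.html" and "email-123" are placeholder values.
{:ok, file_name} =
  MyApp.FileStorage.HTMLStuffing.store({"/tmp/welcome.html", "email-123"})

# Pull it back out with ex_aws to confirm it actually landed in Minio.
ExAws.S3.get_object("twchat-dev", "static/html/email-123/#{file_name}")
|> ExAws.request()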

Then you can get a valid URI by doing something like:

Erlang/OTP 20 [erts-9.1.3] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:10] [hipe] [kernel-poll:false]

17:37:16.027 pid=<0.308.0> application=maru module=Maru.Supervisor function=endpoint_spec/3 file=/opt/twchat/deps/maru/lib/maru/supervisor.ex line=37 [info]  Starting Elixir.Twchat.Api with Cowboy on http://0.0.0.0:5050
Interactive Elixir (1.5.2) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)>
iex(2)> Twchat.FileStorage.Email.HTMLBody.url({"foo","bar"})
"https://s3.amazonaws.com/twchat/static/emails/html/bar/foo"
iex(3)> Twchat.FileStorage.Email.HTMLBody.url({"foo","bar"}, signed: true)
"http://objstr:9000/twchat/static/emails/html/bar/foo?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=foobar5%2F20171107%2Flocal%2Fs3%2Faws4_request&X-Amz-Date=20171107T173739Z&X-Amz-Expires=300&X-Amz-SignedHeaders=host&X-Amz-Signature=0ac669f0ab8ffb1e25396e42791085cb142dc7b2a469500156b960f2c372c18f"
iex(4)>

You might have noticed that the unsigned URI goes back to Amazon’s host name. I have an idea as to why this happens; I’ll make a follow-up post when I figure it out. But for now, you can use Minio as an S3 clone on your local machine without having to fiddle with a remote service! :sparkles:
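
If you want to poke at it in the meantime, one knob that might be involved (a guess on my part, not something I’ve verified here) is Arc’s asset_host setting, which Arc consults when building unsigned URLs. MINIO_PUBLIC_URL is a hypothetical variable, something like http://objstr:9000/twchat:

# Untested guess: point Arc's unsigned URLs at Minio instead of the
# default S3 host. MINIO_PUBLIC_URL is a hypothetical variable.
config :arc,
  asset_host: {:system, "MINIO_PUBLIC_URL"}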

  1. Things like email bodies, video, or even compressed archives of data you’ve pulled out of PostgreSQL for the monthly backups you invoke by hand. Right? Of course not, because those backups are automatic :grin: