
Using Minio As Your Object Store with Arc

The last thing I want to do is create a bucket in my AWS S3 account solely for the sake of development support. And having an actual store to hit in integration tests without incurring fees (you do run load tests often, right?) is nice.

:pencil: by Jacky Alciné :book: a guide post :bookmark: elixir , arc , minio , s3 , object store , guide , how to :clock7: written :eyeglasses: about 4 minutes, 841 words

Whilst working on twch.at, I caught myself about to open a new bucket in AWS S3. A lot of subtle warning alarms began going off. “Using production keys to access a bucket solely for development?” echoed at me from tmux. I remembered an object store floating around with an API compatible with S3’s: Minio. Object storage, in a nutshell, is a way of representing arbitrary forms of data¹. From their features page,

Minio is a distributed object storage server, written in Go and open sourced under Apache License Version 2.0.

All cool stuff. They also provide a ready-to-use Docker image, so you can do something like:

$ docker pull minio/minio
$ docker run -p 9000:9000 minio/minio server /data
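By default, the container generates random credentials and prints them on start-up. If you want deterministic keys for development, you can pass them in. A sketch with placeholder values; note that MINIO_ACCESS_KEY / MINIO_SECRET_KEY were the variable names at the time of writing, and newer Minio releases renamed them to MINIO_ROOT_USER / MINIO_ROOT_PASSWORD:

```shell
# Run Minio with fixed, development-only credentials (placeholder values).
docker run -p 9000:9000 \
  -e MINIO_ACCESS_KEY=local-dev-key \
  -e MINIO_SECRET_KEY=local-dev-secret \
  minio/minio server /data
```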

Then visit http://localhost:9000 in your browser to see Minio’s browser interface.

Getting Set Up

Ensure that you have both Arc and ExAws up and going in your Mix dependencies.

def deps do
  [
    # Flexible object storage library.
    {:arc, "~> 0.8.0"},

    # Amazon AWS API wrapper.
    {:ex_aws, "~> 1.1"}
  ]
end
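Then pull them down:

```shell
mix deps.get
```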

The interesting part is having it configured. Minio exposes an S3-compatible API, so when done correctly (it took me four tries), you can use the following:

config :arc,
  storage: Arc.Storage.S3,
  bucket: {:system, "MINIO_BUCKET"}

config :ex_aws,
  debug_requests: true,
  access_key_id: [{:system, "MINIO_ACCESS_KEY"}],
  secret_access_key: [{:system, "MINIO_ACCESS_SECRET"}],
  region: "local"

config :ex_aws, :s3,
  scheme: {:system, "MINIO_SCHEME"},
  region: "local",
  host: %{
    "local" => {:system, "MINIO_HOST"}
  }
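The `{:system, "..."}` tuples make these values get read from the environment at runtime, so local development only needs the right exports in place. For example (all values here are placeholders; match them to however you started your Minio container):

```shell
# Placeholder values for local development.
export MINIO_SCHEME="http://"
export MINIO_HOST="localhost"
export MINIO_BUCKET="uploads"
export MINIO_ACCESS_KEY="local-dev-key"
export MINIO_ACCESS_SECRET="local-dev-secret"
```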

I tend to push out any value that can change and/or be persisted elsewhere into an environment variable, as per the Twelve-Factor tenets. With all of that configured, if you now have an ‘uploader’ (an Arc.Definition module) for your application like so:

defmodule MyApp.FileStorage.HTMLStuffing do
  use Arc.Definition

  @versions [:original]

  # Only accept HTML files.
  def validate({file, _}) do
    ~w(.htm .html) |> Enum.member?(Path.extname(file.file_name))
  end

  # Scope uploads by the (example) e-mail they belong to.
  def storage_dir(_version, {_file, email_id}) do
    "uploads/emails/#{email_id}"
  end
end
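With a definition like that in place, `use Arc.Definition` injects `store/1` into the module, so persisting a file is a single call. A sketch (the path and scope id here are made up):

```elixir
# `store/1` accepts a file path (or a %Plug.Upload{}) plus an optional
# scope; the scope ("email-42" here, a made-up id) is what gets handed
# to storage_dir/2 to build the destination path.
{:ok, filename} =
  MyApp.FileStorage.HTMLStuffing.store({"/tmp/body.html", "email-42"})
```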

Then you can get a valid URI by doing something like:

Erlang/OTP 20 [erts-9.1.3] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:10] [hipe] [kernel-poll:false]

17:37:16.027 pid=<0.308.0> application=maru module=Maru.Supervisor function=endpoint_spec/3 file=/opt/twchat/deps/maru/lib/maru/supervisor.ex line=37 [info]  Starting Elixir.Twchat.Api with Cowboy on
Interactive Elixir (1.5.2) - press Ctrl+C to exit (type h() ENTER for help)
iex(2)> Twchat.FileStorage.Email.HTMLBody.url({"foo","bar"})
iex(3)> Twchat.FileStorage.Email.HTMLBody.url({"foo","bar"}, signed: true)

You might have noticed that the unsigned URI goes back to Amazon’s host name. I have an idea as to why this happens; I’ll make a follow-up post when I figure it out. But for now, you can use Minio as an S3 clone on your local machine without having to fiddle with a remote service! :sparkles:

  1. Things like email bodies, video, or even compressed archives of data you’ve pulled out of PostgreSQL for your monthly backups that you invoke by hand. Right? Of course not, because those backups are automatic :grin: