Using Minio As Your Object Store with Arc
The last thing I want to do is have a bucket in my AWS S3 account solely for
the sake of development support. And having an actual store to hit in
integration tests without incurring fees is a nice bonus (you do run load tests often, right?).
Whilst working on twch.at, I caught myself about to open a new bucket in
AWS S3. A lot of subtle warning alarms began going off. “Using production
keys to access a bucket solely for development?” echoed at me from tmux.
I remembered an object store floating around with an API compatible with
S3’s: Minio. Object storage, in a nutshell, is a way of
representing arbitrary forms of data¹. From their features page,
Minio is a distributed object storage server, written in Go
and open sourced under Apache License Version 2.0.
All cool stuff. They provide a ready-to-use Docker image too,
so you can do something like:
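A sketch of that invocation, assuming Minio’s default port and with placeholder credentials you should replace:

```shell
# Run a standalone Minio server on localhost:9000,
# storing objects under /data inside the container.
# The access/secret keys here are placeholders.
docker run -p 9000:9000 \
  -e "MINIO_ACCESS_KEY=minio-dev-key" \
  -e "MINIO_SECRET_KEY=minio-dev-secret" \
  minio/minio server /data
```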
Ensure that you have both Arc and ExAws up and going in your Mix project.
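Something along these lines in your `mix.exs` should do; the version numbers are illustrative, not prescriptive:

```elixir
# mix.exs — Arc plus the ExAws S3 stack and its HTTP/XML dependencies
defp deps do
  [
    {:arc, "~> 0.11"},
    {:ex_aws, "~> 2.1"},
    {:ex_aws_s3, "~> 2.0"},
    {:hackney, "~> 1.15"},
    {:sweet_xml, "~> 0.6"}
  ]
end
```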
The interesting part is the configuration. Minio exposes an S3-compatible API,
so when done correctly (it took me four tries), you can use the following:
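A sketch of the kind of config that points ExAws at a local Minio instead of Amazon, assuming Minio is listening on `localhost:9000` (its default) and that the credentials live in environment variables:

```elixir
# config/config.exs — route ExAws's S3 traffic to local Minio
config :ex_aws,
  access_key_id: System.get_env("MINIO_ACCESS_KEY"),
  secret_access_key: System.get_env("MINIO_SECRET_KEY")

config :ex_aws, :s3,
  scheme: "http://",
  host: "localhost",
  port: 9000
```

The `:s3` override is the important part: without it, ExAws signs and sends requests to Amazon's endpoints.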
I tend to push any value that can be changed and/or persisted elsewhere out into
an environment variable, as per the 12-Factor tenets. With all of that
configured, if you now have an ‘uploader’, i.e. an Arc.Definition module, for your
application like so:
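A minimal uploader might look like this; `MyApp.Avatar`, the bucket name, and the storage path are all hypothetical:

```elixir
# A hypothetical Arc uploader definition
defmodule MyApp.Avatar do
  use Arc.Definition

  # Keep only the original upload; add thumbnail versions as needed
  @versions [:original]

  # Bucket name pulled from the environment, per 12-Factor
  def bucket, do: System.get_env("S3_BUCKET") || "uploads"

  def storage_dir(_version, {_file, scope}) do
    "uploads/avatars/#{scope.id}"
  end
end
```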
Then you can get a valid URI by doing something like:
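Assuming an uploader named `MyApp.Avatar` (a hypothetical name) and a record it was stored against, that looks roughly like:

```elixir
# Build the URL for the stored original version of the file
MyApp.Avatar.url({"photo.png", user})

# Or a presigned URL, signed with the Minio credentials
MyApp.Avatar.url({"photo.png", user}, :original, signed: true)
```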
You might have noticed that the unsigned URI points back at Amazon’s host name. I
have an idea as to why this happens; I’ll write a follow-up post when I figure
it out. For now, you can use Minio as an S3 clone on your local machine without
having to fiddle with a remote service!
Things like email bodies, video, or even compressed archives of data
you’ve pulled out of PostgreSQL for your monthly backups that you invoke.
Right? Of course not, because those backups are automatic. ↩