Mastodon, an ActivityPub implementation picking up most of the slack from Twitter, can use any S3-compatible object store for user-uploaded images/videos and caching media from other servers.
The mastodon.yshi.org server runs on Linode, which offers an S3-compatible object store, so I figured that would be better than hosting the media on the VM. In theory, Linode is backing up the buckets for me, and if somebody has cheaper disk, it's easy enough to move the media to another S3-compatible provider just by syncing the objects and updating the credentials in Mastodon's .env.production file.
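If you ever do move providers, the sync could look something like this. This is a sketch using rclone; the remote names linode and newhost are hypothetical and would be defined in your rclone config with each provider's S3 credentials and endpoint:

```shell
# Sketch only: assumes rclone remotes named "linode" and "newhost"
# have been configured with each provider's S3 credentials.
rclone sync linode:mastomedia.yshi.org newhost:mastomedia.yshi.org --progress

# Then point Mastodon at the new provider by updating S3_ENDPOINT,
# S3_HOSTNAME, and the access keys in .env.production, and restart.
```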
There were a couple of oddities in setting it up, so by request, here is a blog post on how to make it work!
Assumptions & Prerequisites
The key assumption is that you have a brand-new Mastodon server. I set this up from the get-go, so the first time it needed to write media was to the object store.
I don’t know anything about migrating from the filesystem to the object store. It looks like you could just copy everything under public/system, but I did not try this and cannot give you any good advice on it.
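For what it's worth, the untested migration would presumably be a one-time copy of that directory into the bucket, something like the following. This assumes the AWS CLI is installed and configured with your Linode access key and secret:

```shell
# Untested sketch: copy existing local media into the bucket.
# Run from the Mastodon root; --endpoint-url points at Linode, not AWS.
aws s3 sync public/system/ s3://mastomedia.yshi.org/ \
  --endpoint-url https://us-southeast-1.linodeobjects.com
```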
The second assumption is that you've set up the Linode object store by paying Linode a few dollars. For $5/month, you get 250 GB of storage, up to 50 million objects, and a generous amount of bandwidth to serve everything.
250 GB is adequate, at least to begin with. If you have a lot of users posting lots of images, you might need more — but cached media is what uses the most space. Mastodon aggressively caches avatars, link cards, and media from other servers in case they go down.
That cache should be pruned regularly, since nobody is likely to look at month-old posts (and if they do, the media will be re-cached as needed).
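Pruning can also be done from the command line with tootctl, for example (run as the mastodon user from the Mastodon root):

```shell
# Remove cached remote media older than a week; Mastodon will
# re-fetch anything that gets requested again.
RAILS_ENV=production bin/tootctl media remove --days 7

# Preview cards (link thumbnails) are pruned separately.
RAILS_ENV=production bin/tootctl preview_cards remove --days 14
```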
Before the Twitter deal closed, my bucket sat at roughly 10–15 GB. Currently, it hangs out between 50 and 60 GB. We use the server a lot more now, and more people are posting more pictures.
Creating the Bucket with a CNAME
I created my bucket with a CNAME on it: mastomedia.yshi.org. This way, it's easy to move the bucket to another region (or service) in the future without having to worry about any links breaking.
Linode has a guide on setting this up, but let me tell you the biggest caveat: you must name the bucket exactly what you'll be CNAMEing to it. I believe the failure mode for naming it incorrectly was that the TLS certificate wouldn't work, so the mistake wasn't immediately obvious.
You should also create an access key in the Linode console. The key and secret will need to go into your .env.production file along with the rest of the object store settings:

S3_ENABLED=true
S3_PROTOCOL=https
# Both should be your CNAME
S3_ALIAS_HOST=mastomedia.yshi.org
S3_BUCKET=mastomedia.yshi.org
# Set your region & regional API endpoint
S3_HOSTNAME=us-southeast-1.linodeobjects.com
S3_ENDPOINT=https://us-southeast-1.linodeobjects.com/
# The Linode Object Store access key & secret
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
The CNAME can then be put in DNS, with the target being the hostname shown underneath the bucket name in the Linode console:
Caveat: This is what the Linode doc says to do, but my CNAME is actually pointing to us-southeast-1.linodeobjects.com, without my bucket name prepended. No idea why I did that, but it works. Might work for you too. 🤷
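As a concrete sketch, the zone-file record might look like this. The target here is an assumption (the bucket-prefixed regional hostname); the authoritative value is whatever hostname Linode displays under your bucket, and per the caveat above, the bare regional endpoint evidently works too:

```
; TTL is arbitrary; target assumed to be the per-bucket hostname Linode shows.
mastomedia.yshi.org. 300 IN CNAME mastomedia.yshi.org.us-southeast-1.linodeobjects.com.
```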
Add a Certificate
To serve the files over HTTPS, you will need to issue a certificate and keep it renewed. David Coles wrote a tool that does the necessary certbot dance with the bucket to prove ownership and then attaches the certificate. It can be run from cron to keep the certificate up to date.
I cloned this and wrote a quick wrapper script to put in cron, called object_storage_certbot.sh:

#!/usr/bin/env sh
cd ~/mastomedia.yshi.org-certificate
PYTHONPATH=./acme-linode-objectstorage python3 -m acme_linode_objectstorage -k mastomedia.yshi.org.pem -C us-southeast-1 mastomedia.yshi.org
I have a LINODE_TOKEN environment variable set in the crontab with an API key. That is not the bucket's access key; it's an API key for my Linode account with just the Object Store scope.

LINODE_TOKEN=your-api-token-here
@daily /home/mastodon/mastomedia.yshi.org-certificate/object_storage_certbot.sh
It worked fine for a while, and then it abruptly stopped being able to renew because Let's Encrypt needed some CAA records added to DNS: an issue record and an issuewild record, both authorizing letsencrypt.org.
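In zone-file form, the two CAA records look something like this (the TTL is arbitrary; the flags/tag/value layout follows the standard CAA record format):

```
yshi.org. 300 IN CAA 0 issue "letsencrypt.org"
yshi.org. 300 IN CAA 0 issuewild "letsencrypt.org"
```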
At this point, make sure you've made the changes to the .env.production file with the object store settings, and restart the Mastodon processes (web, streaming, sidekiq). You can test it by trying to post an image.
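If you installed from the standard guide, the restart is just the three systemd units. The unit names below are the stock ones and may differ on your install:

```shell
# Assumes the stock mastodon-web/streaming/sidekiq unit names.
sudo systemctl restart mastodon-web mastodon-streaming mastodon-sidekiq
```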
To enable pruning of cached media, go to Administration -> Server Settings -> Content Retention and fill in the Media cache retention period setting.