Using Linode Object Store with Mastodon


Mastodon, an ActivityPub implementation picking up most of the slack from Twitter, can use any S3-compatible object store for user-uploaded images/videos and caching media from other servers.

The server is running on Linode, which has an S3-compatible object store, so I figured that would be better than hosting the media on the VM. In theory, Linode is backing up the buckets for me — and if somebody has cheaper disk, it's easy enough to move to another S3-compatible provider just by syncing the objects and updating the credentials in Mastodon's .env.production file.
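That hypothetical move could be a one-way copy with a tool like rclone. A sketch, assuming both providers are configured as S3-type rclone remotes (the remote and bucket names here are made up):

```shell
# Copy every object from the Linode bucket to the new provider's bucket.
# "linode" and "newhost" are hypothetical rclone remotes, each configured
# with that provider's S3-compatible endpoint and credentials.
rclone sync linode:my-mastodon-media newhost:my-mastodon-media --progress
```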

There were a couple of oddities in setting it up, so by request, I am doing a blog post on how to make it work!

Assumptions & Prerequisites

The key assumption is that you have a brand-new Mastodon server. I set this up from the get-go, so the first time it needed to write media, it wrote to the object store.

I don’t know anything about migrating from the filesystem to the object store. It looks like you could just copy everything under public/system, but I did not try this and cannot give you any good advice on it.

The second assumption is that you've set up the Linode object store by paying Linode a few dollars a month. For $5/m, you get 250 GB of storage across 50 million objects and a ton of bandwidth to serve stuff.

250 GB is adequate, at least to begin with. If you have a lot of users posting lots of images, you might need more — but cached media is what uses the most space. Mastodon aggressively caches avatars, link cards, and media from other servers in case they go down.

This should be pruned regularly since nobody is likely to look at month-old posts (and if they do, the media will be re-cached as needed).
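Mastodon's own CLI can do this pruning; something like the following (run as the mastodon user from the Mastodon directory — the retention windows here are just examples) drops cached remote media and old link previews:

```shell
# Remove locally cached copies of remote media older than 7 days.
RAILS_ENV=production bin/tootctl media remove --days 7

# Link preview cards can be pruned the same way.
RAILS_ENV=production bin/tootctl preview_cards remove --days 14
```

The admin UI setting mentioned at the end of this post automates the same cleanup on a schedule.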

Before the Twitter deal closed, my bucket was ~10–15 GB. Currently, it hangs out between 50 and 60 GB. We use the server a lot more now, and more people are posting more pictures.

Creating the Bucket with a CNAME

I created my bucket with a CNAME on it, so it's easy to move the bucket to another region (or service) in the future without having to worry about any links breaking.

Linode has a guide on setting this up, but let me tell you the biggest caveat: you need to name the bucket exactly what you'll be CNAMEing to it. I believe the failure mode for naming it incorrectly was that the TLS certificate wouldn't work, so the mistake wasn't immediately obvious.
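So if your media hostname will be files.example.com, the bucket itself must be named files.example.com. With s3cmd pointed at Linode's endpoint, creating it looks something like this (hostnames are examples; the region is whichever one you picked):

```shell
# The bucket name matches the eventual CNAME exactly.
s3cmd mb s3://files.example.com \
    --host=us-southeast-1.linodeobjects.com \
    --host-bucket='%(bucket)s.us-southeast-1.linodeobjects.com'
```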

You should also create an access key in the Linode console. These will need to go into your .env.production file — the variable names below are Mastodon's standard S3 settings, and the values are examples to substitute with your own:

S3_ENABLED=true

# Both should be your CNAME
S3_BUCKET=files.example.com
S3_ALIAS_HOST=files.example.com

# Set your region & regional API endpoint
S3_REGION=us-southeast-1
S3_ENDPOINT=https://us-southeast-1.linodeobjects.com

# The Linode Object Store access key & secret
AWS_ACCESS_KEY_ID=<access key>
AWS_SECRET_ACCESS_KEY=<secret key>
The CNAME can then be put in DNS, with the target being the hostname shown underneath the bucket name:

[Screenshot: the Linode Object Storage console, showing the bucket name and the full hostname Linode assigns it]

Caveat: this is what the Linode doc says to do, but my CNAME is actually pointing at the bare regional hostname, without my bucket name prepended. No idea why I did that, but it works. Might work for you too. 🤷
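In zone-file form, the record the Linode doc describes would look something like this (hostnames are examples):

```
; CNAME from your media hostname to the bucket's Linode-assigned hostname
files.example.com.  300  IN  CNAME  files.example.com.us-southeast-1.linodeobjects.com.
```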

Add a Certificate

To serve the files over HTTPS, you will need to issue a certificate and keep it renewed. David Coles wrote a tool to do the necessary certbot dance with the bucket to prove ownership and then attach the certificate. It can be run from cron to keep the certificate up to date.

I cloned this and wrote a quick wrapper script to run from cron:

#!/usr/bin/env sh

cd ~/
PYTHONPATH=./acme-linode-objectstorage python3 -m acme_linode_objectstorage -k -C us-southeast-1

I have a LINODE_TOKEN environment variable set in the crontab with an API key. That is not the bucket’s access key; it’s an API key for my Linode account with just the Object Store scope.

@daily /home/mastodon/
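Put together, the crontab looks something like this (the wrapper script name is hypothetical; cron accepts plain VAR=value lines above the schedule entries):

```
# Linode account API token with the Object Storage scope -- not the bucket keys
LINODE_TOKEN=<your Linode API token>
@daily /home/mastodon/renew-cert.sh
```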

It worked fine for a while, and then it randomly stopped being able to renew, because Let's Encrypt needed some CAA records added to DNS: an issue CAA record and an issuewild CAA record.

[Screenshot: CAA records for Let's Encrypt, for issue & issuewild]
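For reference, CAA records authorizing Let's Encrypt look something like this in zone-file form (the domain is an example):

```
files.example.com.  300  IN  CAA  0 issue "letsencrypt.org"
files.example.com.  300  IN  CAA  0 issuewild "letsencrypt.org"
```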

Finishing Up

At this point, make sure you've made the object store changes to the .env.production file and restart the Mastodon processes (web, streaming, sidekiq). You can test it by trying to post an image.

To enable pruning of cached media, go to Administration -> Server Settings -> Content Retention and fill in the Media cache retention period setting.