Using Jellyfin with S3 as media storage
Following in breadNET’s footsteps, I wanted to set up my Jellyfin instance to use media from an S3 bucket. Chosen and used correctly, an S3 bucket is significantly cheaper than an equivalent consumer cloud storage service.
For reliability purposes, I want my setup to survive reboots and network errors. As such, I won’t settle for rclone mount kept alive in a screen instance. I could write a systemd service instead, but even that isn’t necessary, since rclone can be used as a Unix mount helper. Let’s dive in.
Choosing an S3 provider⌗
Since Jellyfin is a media server, and media files can get fairly large, you’ll want a provider with low egress fees. I’m currently using a Tigris bucket, since they do not charge for egress (they bill transfers per-request, not per-byte).
Setting up an rclone remote⌗
I want my S3 mount to essentially behave as an external drive, so it makes sense to configure it system-wide. Let’s create an rclone.conf file in /etc.
[media]
type = s3
env_auth =
access_key_id = <YOUR_ACCESS_KEY_ID>
secret_access_key = <YOUR_SECRET_ACCESS_KEY>
region = <YOUR_REGION>
endpoint = <YOUR_S3_ENDPOINT>
location_constraint =
acl =
server_side_encryption =
storage_class =
provider = Other
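Since this file holds your access credentials, it’s worth restricting it so only root can read it. The mount helper runs as root, so this won’t break anything:

sudo chown root:root /etc/rclone.conf
sudo chmod 600 /etc/rclone.conf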
You can also create this config file interactively with rclone config, then copy the contents of ~/.config/rclone/rclone.conf over to /etc/rclone.conf. Let’s now test this config. Try listing the contents of your bucket with
rclone ls --config=/etc/rclone.conf media:<YOUR_BUCKET>
This should run without any errors.
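For illustration (the file names here are made up), rclone ls prints each object’s size followed by its path, along these lines:

 734003200 Movies/Big.Buck.Bunny.2008.mkv
  52428800 Music/Some.Album/01.Track.flac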
Setting up the mount in /etc/fstab⌗
First, we need to make rclone available as a Unix mount helper. According to its documentation, this is simply a matter of symlinking the rclone binary.
bash -c 'sudo ln -s $(which rclone) /sbin/mount.rclone'
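You can quickly confirm the helper is in place; the symlink should point at your rclone binary:

ls -l /sbin/mount.rclone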
With the mount helper set up, we can edit our fstab to configure the mount.
media:my-bucket /media rclone config=/etc/rclone.conf,vfs_fast_fingerprint,allow_other,gid=1001,uid=1000,vfs_cache_mode=full,vfs_cache_min_free_space=1G
The bucket name (my-bucket) and mountpoint (/media) should be changed to whatever suits you. The gid and uid options can be omitted, or set to the group ID and user ID that the files in the mountpoint will belong to. Let’s break down what the various options do:
- config=/etc/rclone.conf: Tell rclone to use this config file
- allow_other: Allows users other than root to access the mounted filesystem
- vfs_cache_mode=full: Reads and writes are buffered to disk before being sent to S3. This makes the filesystem much more responsive, especially when dealing with large files, and has the added bonus of possibly reducing the number of requests made.
- vfs_cache_min_free_space=1G: The filesystem cache will always keep 1GB free on disk.
- vfs_fast_fingerprint: Do not include the slow operations (hash, modtime) when fingerprinting files.
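As an aside (not part of the mount line above), rclone can also cap the cache by total size rather than by free space, via its vfs-cache-max-size option; in fstab syntax that would look like the following, where 10G is just an example value:

vfs_cache_mode=full,vfs_cache_max_size=10G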
You can read more about VFS options in rclone’s documentation. Test the mounting of your bucket with sudo mount -a. Should the need arise, you can unmount with sudo fusermount -u /media.
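If everything is in order, the bucket now mounts like any other filesystem. A quick sanity check, assuming the /media mountpoint from above:

sudo mount -a
findmnt /media
df -h /media

Since this is a network mount, you may also want to add nofail (so a failed mount doesn’t block booting) and _netdev (so mounting waits for the network) to the fstab options; both are standard fstab options rather than rclone-specific ones, and they fit the goal of surviving reboots.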
Setting up Jellyfin⌗
Set up Jellyfin as you would with a local library. Jellyfin is finicky about the permissions of the library and its parent directory; this is where the gid and uid parameters from your fstab come in handy. I’ve got mine set up to mount as group media, which the jellyfin Unix user is a part of.
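As a sketch, assuming your Jellyfin runs as the jellyfin user and you want a dedicated media group, you can create the group, add the user to it, and look up the IDs to plug into your fstab:

sudo groupadd media
sudo usermod -aG media jellyfin
getent group media   # the third field is the gid for your fstab
id -u jellyfin       # the uid for your fstab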
Jellyfin’s documentation mentions the following caveat:
For cloud storage, it is recommended to disable image extraction as this requires downloading the entire file to perform this task.
Final thoughts⌗
I’ve been using this setup for a while now and haven’t noticed any kind of performance issue. Transcoding also works fine.
Since I’m also using some servarr services, I’ve set up downloads to happen locally, with automatic deletion once they finish. Note that renaming tasks in those services can take time, since S3 has no native rename or move operation (a move is implemented as a copy followed by a delete).