Linux Media Server


Jellyfin enables you to collect, manage, and stream your media. Run the Jellyfin server on your system and gain access to the leading free-software entertainment system, bells and whistles included.

  1. OpenFLIXR is an automated media server that integrates with Plex to provide all the same features along with the ability to auto-download TV shows and movies from torrents. It even fetches the subtitles automatically, giving you a seamless experience when coupled with the Plex media software.
  2. Kodi is definitely one of the best media server programs available for Linux, and many other popular media server tools are based on it as well.

With the pandemic, there has been a boom in video conferencing. Articles about WebRTC and documentation for projects that let users communicate online have become popular reading on the internet. I was even hired recently to work on a telemedicine application that lets patients be seen online by their doctors.

For serious projects in the WebRTC ecosystem, a server-side solution is crucial for building robust applications, for example ones that record sessions on the server side. When all the video streams pass through a server, they can be recorded and stored for any purpose, something that would be quite difficult to do with a mesh architecture. Features like this put WebRTC on another level, enabling richer, more innovative real-time interactions that can add a lot of value to your communication platform.

In this list, we will share the five most mature open source WebRTC media server implementations that you can deploy on your own servers to create your own video conferencing application.

5. Jitsi

Demo | GitHub | Technologies: Java, JavaScript

Jitsi Meet is an open-source (Apache) WebRTC JavaScript application that uses Jitsi Videobridge to provide high quality, secure and scalable video conferences. Jitsi Meet in action can be seen here at session #482 of the VoIP Users Conference. The Jitsi Meet client runs in your browser, without installing anything else on your computer. You can try it out at https://meet.jit.si.

Jitsi Meet allows very efficient collaboration. Users can stream their desktop or only some windows. It also supports shared document editing with Etherpad.

4. AntMedia

GitHub | Technologies: Java

Ant Media Server is software that can serve both live and VoD streams. It supports scalable, ultra low latency (0.5 seconds) adaptive streaming and records live videos in several formats such as HLS and MP4.

Here are the fundamental features of Ant Media Server:

  • Ultra Low Latency Adaptive One to Many WebRTC Live Streaming in Enterprise Edition.
  • Adaptive Bitrate for Live Streams (WebRTC, MP4, HLS) in Enterprise Edition.
  • SFU in One to Many WebRTC Streams in Enterprise Edition.
  • Live Stream Publishing with RTMP and WebRTC.
  • WebRTC to RTMP Adapter.
  • IP Camera Support.
  • Recording Live Streams (MP4 and HLS).
  • Restream to Social Media Simultaneously (Facebook and YouTube in Enterprise Edition).
  • One-Time Token Control in Enterprise Edition.
  • Object Detection in Enterprise Edition.

Ant Media Server comes in two versions: the free Community Edition and the Enterprise Edition. The Community Edition is available to download on GitHub; the Enterprise Edition can be purchased at antmedia.io.

3. Kurento

GitHub | Technologies: C, C++

Kurento is an open source software project providing a platform suitable for creating modular applications with advanced real-time communication capabilities. Kurento is a WebRTC media server and a set of client APIs that simplify the development of advanced video applications for the web and for smartphone platforms. Kurento Media Server features include group communications, transcoding, recording, mixing, broadcasting and routing of audiovisual flows.

As a distinguishing feature, Kurento Media Server also provides advanced media processing capabilities involving computer vision, video indexing, augmented reality and speech analysis. Kurento's modular architecture makes it simple to integrate third-party media processing algorithms (e.g. speech recognition, sentiment analysis, face recognition), which application developers can use as transparently as the rest of Kurento's built-in features.

2. MediaSoup

Demo | GitHub | Technologies: Node.js, C++, TypeScript

Instead of creating yet another opinionated server, mediasoup is a Node.js module which can be integrated into a larger application. mediasoup and its client-side libraries provide a very low level API. They are intended to enable different use cases and scenarios without imposing any constraints or assumptions. Some of these use cases are:

  • Group video chat applications.
  • One-to-many (or few-to-many) broadcasting applications in real-time.
  • RTP streaming.

mediasoup and its client-side libraries are designed to accomplish the following goals:

  • Be an SFU (Selective Forwarding Unit).
  • Support both WebRTC and plain RTP input and output.
  • Be a Node.js module on the server side.
  • Be tiny JavaScript and C++ libraries on the client side.
  • Be minimalist: just handle the media layer.
  • Be signaling agnostic: do not mandate any signaling protocol.
  • Expose a very low level API.
  • Support all existing WebRTC endpoints.
  • Enable integration with well-known multimedia libraries/tools.

1. Janus WebRTC Server

Demo | GitHub | Technologies: C, C++, JavaScript

Janus is a WebRTC server developed by Meetecho and conceived to be general purpose. As such, it doesn't provide any functionality per se other than implementing the means to set up a WebRTC media communication with a browser, exchanging JSON messages with it, and relaying RTP/RTCP and messages between browsers and the server-side application logic they're attached to.

Any specific feature or application is provided by server-side plugins, which browsers can then contact via Janus to take advantage of the functionality they provide. Examples of such plugins include echo tests, conference bridges, media recorders, SIP gateways and the like. This version of the server is tailored for Linux systems, although it can be compiled for, and installed on, macOS machines as well. Windows is not supported, but if that's a requirement, Janus is known to work in the 'Windows Subsystem for Linux' on Windows 10: do NOT trust repos that provide .exe builds of Janus, they are not official and will not be supported.

If you know about another awesome open source WebRTC media server project, please share it with the community in the comment box.

The Perfect Media Series has a new home at perfectmediaserver.com. This series remains public here for informational purposes only - for questions or support please visit the new site.

It's been almost 18 months since my original article in 2016 on the 'perfect' media server; this article assumes you've read that one for background on the software we're about to install. It's still a very popular piece, so I thought it was about time to update it where appropriate and give some further information on how you can put this setup together yourself. And as if I needed a further excuse, Debian just released version 9, so now is a great time to upgrade or switch.

edit: Sept 2017 - We have just launched a Discord server. Head on over for support and to ask any questions!


Following up on extensive feedback to the original post over the past 18 months, this time I will explain how you can actually set up all the required software yourself, without any prior knowledge of scripting tools like Ansible (which I still highly recommend and personally use, by the way!).

The core tenet of using only Free and Open Source Software wherever possible remains in place, as do the original requirements from the 2016 article. It is not only free as in freedom: none of the software used requires you to pay a single penny to get up and running.

The verdict... (after 18 months)

I touted the solution I last wrote about as 'flawless'. Ok, that might be a bit rich but honestly nothing major has needed to change since the original article. The system is put together on top of Debian 9 (Stretch), docker, snapraid and mergerfs.

This is a system I can leave running for months at a time without needing to look at it. My current uptime is 68 days. Those who know me know this is ridiculous for my home server. Debian is absolutely bulletproof, docker encapsulates the 'risky' applications into safe sandboxes, and mergerfs and snapraid just work.

Something must have changed in 18 months, right? Yes, actually. I've switched to docker-compose for managing my containers. It's just so convenient. One single file defines the 10-15 or so containers on my system at any one time and I have created aliases meaning the snappy command dcrun up -d is all that's required to start all the applications - more on docker-compose later though.

Comprehensive Installation Guide

Last year, I extolled the virtues of Ansible and automation. Whilst I think this is a great solution, many of the readers here might find these topics a little too advanced (you should still totally check out Ansible; it is awesome and will save you time in the long run). Therefore, I'm going to attempt a beginner's guide to building a DIY NAS over the remaining course of this article.

I'll cover installing Debian, docker, mergerfs and snapraid, plus show you the basics of using docker-compose to manage your containers. There are also some new goodies such as the Cockpit project and Portainer.io to provide web UIs for server and container administration - a common (but in my opinion unnecessary) request. I'm not an animal though, so I'll listen to the feedback and include these projects here.

There are several parts to the YouTube playlist; they correspond to the sections below.

Install Debian

** Disclaimer: Disconnect your data drives before installation - just in case! **

This is the first and most essential part to get right. Installation of Debian is quite straightforward, as the installer has a UI which is click, click, click, done. Make sure to unselect the Debian desktop environment; it's unlikely you really want this on a headless file server. Pay particular attention to the partitioning step, as this cannot easily be changed after installation without wiping and starting again.

See the attached YouTube video for more info on this step if you're new to installing Linux.

Debian just released 'Stretch', version 9 (their releases are named after Toy Story characters). Feel free to use 'Jessie' too, as most things should be the same.

There are a bunch of post-installation housekeeping tasks you might wish to perform, such as setting up SSH keys, installing any packages you might like, and configuring users or anything else on the system to your preference.
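For example, a minimal set of housekeeping commands might look something like this (the username, key path and package list are just illustrative):

    # copy your public key to the server for password-less SSH logins
    ssh-copy-id -i ~/.ssh/id_rsa.pub youruser@your-server-ip

    # pull in a few commonly useful packages
    sudo apt update && sudo apt install -y vim htop tmux curl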

Setting up the drives using MergerFS

In this section, we'll cover how to make Debian aware of which hard drives you want to use for what purpose in your system. We'll use MergerFS to provide a single way to pool access across these multiple drives - much like unRAID, Synology, Qnap or others do with their technologies.

To recap: MergerFS allows us to mix and match any number of mismatched, unstriped data drives under a single mountpoint. I do provide a full explanation in the 2016 article, please go there if you want more information. The important difference between MergerFS and traditional RAID is that the data is not striped thus increasing the likelihood of you being able to recover data in the event of a disk failure.

MergerFS installation

As of Debian 9 MergerFS is now in the main repositories (congrats to trapexit, the mergerfs dev on this!). The version in the repos is a little behind the version available on Github. You have two options at this point:

  • Option 1 - apt install mergerfs fuse
    • This option will auto update with the rest of your system when new versions of MergerFS are made available in the repositories 'upstream'
  • Option 2 - wget https://github.com/trapexit/mergerfs/releases/download/*/mergerfs_*.debian-stretch_amd64.deb && dpkg -i mergerfs*.deb
    • This option will require you to manually update Mergerfs with .deb files from Github (you also need to install fuse from apt)

Unless you have good reason, pick option 1.

Mount point creation

Next you must create mount points for each individual data disk you wish to mount, plus a mount point to 'pool' the drives under (in my setup, this is /mnt/storage). We must also create a mount point for the snapraid parity drive - more on this later.
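As a sketch, assuming two data disks, one parity disk and /mnt/storage as the pool (the names are illustrative), the mount points could be created like this:

    mkdir -p /mnt/disk1 /mnt/disk2   # one mount point per data disk
    mkdir -p /mnt/parity1            # snapraid parity drive
    mkdir -p /mnt/storage            # the mergerfs pool everything is accessed through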

Create fstab entries

Next we need to create an entry in /etc/fstab. This file tells your OS how, where and which disks to mount. It looks a bit complex but an fstab entry is actually quite simple and breaks down to <device> <mountpoint> <filesystem> <options> <dump> <fsck> - documentation here. We first need to find the required information (don't forget to look at the accompanying YouTube video if you're confused at this point!) and so we must run some commands...
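The commands in question boil down to listing your disks and their identifiers, for example:

    lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT   # overview of disks and partitions
    blkid                                  # UUIDs, useful if you prefer mounting by UUID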

Depending on how many drives you have, this might end up being quite a lot of text. You must identify the partition you wish to mount (it will probably be on the part1 lines) and put that into your /etc/fstab file. If you haven't created a partition on your drives yet (i.e. they are new), then use gdisk to do that - instructions here. Here is my finished example file.
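The full file is linked above; as a rough sketch of the entries involved, assuming the mount points from earlier, XFS-formatted drives and illustrative device names, it looks something like this (the mergerfs options shown are only an example - check the mergerfs documentation for the ones that suit you):

    /dev/sdb1   /mnt/disk1    xfs            defaults                      0  0
    /dev/sdc1   /mnt/disk2    xfs            defaults                      0  0
    /dev/sdd1   /mnt/parity1  xfs            defaults                      0  0
    /mnt/disk*  /mnt/storage  fuse.mergerfs  defaults,allow_other,use_ino  0  0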

Note: do not modify the existing contents of this file else your system might not boot, just add your stuff at the end.

I'm assuming you can fill in the correct values for each drive. The only caveat is that your parity drive must be at least as large as the largest data disk in your snapraid array.

You should now be able to run mount -a followed by df -h to confirm the drives are mounted where you expect.

Common problems here include typos, the wrong fstype in the 3rd column or the system doesn't have the correct software installed to mount the drives (XFS for example requires apt install xfsprogs).


Drive setup summary

This is a bit of a tricky set of steps when you're new, so here's a quick summary before we move on...

  • Create the mount points for:
    • Data drives
    • Parity drives
    • MergerFS pool
  • Create and/or find the drive partitions to mount
  • Create some entries in /etc/fstab
  • Mount your drives!

Installing Docker

This is the relatively easy bit, as Docker provides a script you can run to install Docker for you. They have documentation you should read if you're curious to know more. Or you can trust me and run:
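The command in question is Docker's convenience install script, piped straight to the shell:

    curl -fsSL https://get.docker.com | sh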

Beware! Piping something with root privileges to sh is a horrible security risk. Double check the contents of the script at get.docker.com before running the above command. It'll probably be fine, but don't just blindly copy and paste from random websites (like this one) as root!!

To make your life easier administering docker add your user to the docker group (you need to be root or have sudo to do this). It is considered a security risk to add users to this group so only do so if absolutely necessary and/or you're a bit lazy like me. This is a home server we're building not a bank.
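Adding your user to the group is a one-liner (substitute your own username, and log out and back in for it to take effect):

    sudo usermod -aG docker youruser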

Docker is now installed. Check with systemctl status docker that the service is started. You can run your first container with docker run --rm hello-world to check everything is working as expected.


I'm about to show you the power of Docker in the next section, read on...

Installing and configuring SnapRAID

Hopefully you're getting the idea by now that the detailed information and rationale for using SnapRAID can be found in the 2016 article. A quick TL;DR is that we use SnapRAID to calculate a snapshot of parity data which we can use to recover data if a drive fails. I don't pretend to understand the mathematics behind parity, but essentially, using the arrangement of 1s and 0s in the parity data, SnapRAID can reconstruct the data from a failed drive entirely. If you exceed the fault tolerance of your SnapRAID array you will only lose data on the failed drives; the rest is fine, as we are not striping data like a normal RAID would.

SnapRAID (and to some extent MergerFS too) is best suited to largely static datasets, like media. Write once read many is the general rule for suitable datasets. Databases (even such as Plex data) are best excluded from calculations or hosted on a separate storage solution altogether.

Compiling and installing SnapRAID

SnapRAID isn't in the Debian repos, so we have to compile it from source. Let's use docker for this, as it means we don't need to install a bunch of stuff on our system that we'll only use once. Luckily for you, I have written some scripts that take care of everything, and they are available on Github.

Some good discussion about SnapRAID can be found on r/homeserver.

The steps required to install SnapRAID are:

  1. clone the git repo
  2. compile snapraid using the build.sh script
  3. install the compiled .deb file.

Run the following commands and you'll end up with SnapRAID installed.
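The repository URL and script names are in the Github link above; the general shape of the commands is something like this (the clone URL below is a placeholder for the repo linked above):

    git clone <repo-url> snapraid-build   # the build scripts mentioned above
    cd snapraid-build
    ./build.sh                            # compiles snapraid inside a throwaway container
    sudo dpkg -i snapraid*.deb            # install the resulting package on the host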

For me, this highlights just how cool docker can be. We've just installed a bunch of build dependencies and compiled some software from source and not left a trace on the host system. Containers rock!

If the version is behind what's available at snapraid.it please notify me @IronicBadger or submit a pull request on Github! At the time of writing v11.1 was the latest. SnapRAID should now be installed.

Configuring SnapRAID

Snapraid has excellent documentation to help you get started...

SnapRAID is configured using /etc/snapraid.conf. You can find a full working example that runs my system here. It's a straightforward file to understand. Fill in the blanks, add and/or delete lines as required. Pay attention to the exclude sections as I have my downloads, appdata and other directories there which might not be what you want.

Once you're happy with the contents of the config file run:
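That command is the initial parity sync:

    snapraid sync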

This may take a while so consider running it in screen or tmux so that if your ssh connection dies, the sync continues.

Automating SnapRAID

To ensure that we have a recent snapshot of the array's data, it's a good idea to automate the running of SnapRAID parity syncs. We'll use cron to automate this task, along with the excellent snapraid-runner script available on Github. It's a little dated and unmaintained these days, but it 'just works'.

Remember: Until you complete a parity sync with snapraid sync your parity is out of date and any data changes since the last sync are unprotected!

You can find an example configuration file for snapraid-runner in my Github. Edit the variables as required and save it to /opt/snapraid-runner/snapraid-runner.conf. Also download the snapraid-runner.py file into /opt/snapraid-runner/.

Finally, let's add an entry to cron with crontab -e and paste in the following (this sets the script to run at 8am daily - change it as you see fit).
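Assuming the paths from the previous step, the entry would look roughly like this (the exact flag for the config file may differ - check the script's help output):

    0 8 * * * python3 /opt/snapraid-runner/snapraid-runner.py -c /opt/snapraid-runner/snapraid-runner.conf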

SnapRAID is now automated. You can modify the threshold at which the sync will automatically exit if it detects that too many files have been deleted since the previous sync - it's set to 250, which I find to be a good value.

Configuring network file sharing

To turn the system into a NAS we need to enable samba and nfs for network file sharing.
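Both are a quick apt install away (assuming Debian's package names):

    apt install samba nfs-kernel-server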

Configure Samba

Now that you have samba installed, back up the default samba config with mv /etc/samba/smb.conf /etc/samba/smb.orig, and modify my smb.conf to suit your needs. Usually there are a ton of comments in this file and it gets a bit messy; I've removed everything I could to make it simpler to understand.

Once the new configuration file is complete, restart the samba service.
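On Debian that means restarting the smbd service (and nmbd, if you use it):

    systemctl restart smbd nmbd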

Configure NFS

Most useful for things like Kodi and Linux hosts, NFS is a lightweight way to share files. I never use NFS in write mode, so all of my shares are read only here, but you can modify ro in the configuration file at /etc/exports to change that if you wish.
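For illustration, a read-only export of the mergerfs pool to a local subnet might look like this in /etc/exports (adjust the path, subnet and options for your network):

    /mnt/storage  192.168.1.0/24(ro,sync,no_subtree_check)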

To finish up the NFS installation, run a couple of commands:
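They boil down to re-exporting the shares and restarting the NFS server (this assumes Debian's nfs-kernel-server service):

    exportfs -ra
    systemctl restart nfs-kernel-server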


Running apps with docker-compose

The largest change since my original article last year is my now heavy reliance on docker-compose. I bemoaned it in my last article but since then it has been heavily developed and I've grown to really like it.

First of all, let's install docker-compose from the Debian repos...
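On Debian 9 it's a single package:

    apt install docker-compose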

Sometimes, when using data volumes (-v flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user PUID and group PGID. Ensure the data volume directory on the host is owned by the same user you specify and it will 'just work'. I usually end up creating a special user just for this purpose with useradd dockeruser.

In this instance PUID=1001 and PGID=1001. To find yours use id user as below:
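For example, for a user called dockeruser (the name is just an example), the output would look something like:

    $ id dockeruser
    uid=1001(dockeruser) gid=1001(dockeruser) groups=1001(dockeruser)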

Create /etc/environment and add the following two lines using the ID of the user from the above command.
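With the IDs above, the two lines are:

    PUID=1001
    PGID=1001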

Next, let's define the docker-compose.yml file which will tell Docker all of the parameters we want to feed to each docker container. You can find my full yml file on Github here. Hopefully it'll give you some inspiration!
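As a minimal sketch of the shape of such a file, here's a single service using a LinuxServer.io image - the image, ports and host paths are illustrative, so swap in the applications and directories you actually use:

    version: "2"
    services:
      plex:
        image: linuxserver/plex
        container_name: plex
        network_mode: host
        environment:
          - PUID=${PUID}
          - PGID=${PGID}
        volumes:
          - /opt/appdata/plex:/config
          - /mnt/storage/tv:/data/tvshows
          - /mnt/storage/movies:/data/movies
        restart: unless-stopped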

Once you're happy that you have defined the applications you want, let's run them!
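Assuming the file is saved as docker-compose.yml in the current directory (or pass -f with the path to it), starting everything in the background is:

    docker-compose up -d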

A top tip here to save passing -f file.yml every time is to use a bash alias. Edit ~/.bash_profile and add the two following lines, then run source ~/.bash_profile.
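The exact alias definitions were in the original post; they amount to something like this (the path is illustrative, and I'm assuming dcrun is simply a second alias for the same command):

    alias dcp='docker-compose -f /opt/docker-compose.yml'
    alias dcrun='docker-compose -f /opt/docker-compose.yml'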

You then type dcp <command> instead of docker-compose -f /path/to/file.yml <command>. Just a nice quality of life tweak.

Updating your containers

Updating to the latest versions of your containers is often a big pain but with docker-compose it's dead easy. Type dcrun pull, wait for the updated versions to pull if available then type dcrun up -d and watch the magic. Most things you could do with docker alone are possible with docker-compose. If you find any neat tricks, let me know!

Cockpit UI

This tool probably deserves its own article, but for now it's stuck at the end of this massive long one. We are always looking for writers, so if you're at all interested get in touch!

Add the following to /etc/apt/sources.list
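The original post gave the exact line; at the time, Cockpit for Debian 9 was typically pulled from the stretch-backports repository, so the line is something like:

    deb http://deb.debian.org/debian stretch-backports main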

Then run:
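Something along these lines, installing Cockpit from backports (the cockpit-docker package, if available in your repos, adds the container management page mentioned below):

    apt update
    apt install -t stretch-backports cockpit cockpit-docker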

Go to https://<server-ip>:9090 and authenticate using your system username and password.

From here you can do most daily tasks including managing your containers too. I've only recently discovered Cockpit so have only scratched the surface. If you find interesting stuff to do with Cockpit, let us know in the comments.

Portainer.io container UI

Portainer is a great way to manage your containers with a web UI - LinuxServer.io is even their only featured template partner! It's very easy to get set up: just add the following to your docker-compose.yml file and you should be good to go.
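As a sketch, the Portainer service stanza could look like this (merge it into your existing services: section; the port mapping and appdata path are illustrative):

    services:
      portainer:
        image: portainer/portainer
        container_name: portainer
        ports:
          - "9000:9000"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
          - /opt/appdata/portainer:/data
        restart: unless-stopped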

Summary

Congratulations! You've now finished setting up your completely free and open source software based home media server. You now know every nut and bolt of it and have hopefully learned some things along the way.

In terms of feature parity between this solution and paid solutions, I'd be keen to know your thoughts. I haven't covered KVM GPU passthrough here, though it is easily possible with a few packages - if this is something you're interested in, again, get in touch and we'll cover it.