Plex Behind An Nginx Reverse Proxy: The Hard Way

Uncategorized
May 08, 2024

Did IT deploy a new web filter at work? Is it preventing you from streaming music to drown out the droning of your co-workers taking meetings in your open plan office? Have you got a case of the Mondays?

That was the situation I found myself in recently. By default, Plex listens on port 32400, though it’ll happily use any port, and it plays nice with gateways that support UPnP/NAT-PMP by picking a random public port to forward. That random port was the source of my problem. The new web filter doesn’t mind the Plex domain, but it doesn’t like connections on ports other than 80 or 443 – not even 22, and certainly not 32400.

Time for a reverse proxy. There’s lots of documentation about putting Plex behind a reverse proxy out there, but as is often the case with me, I had some additional requirements that complicated things a bit.

I already run a reverse proxy on my public IP that terminates TLS for a few services I host internally on my LAN behind an OAuth proxy. And by default, Plex clients want to connect directly to the media server via the plex.direct domain, which I don’t control and for which I can’t easily create TLS certificates (in truth, I probably could using Let’s Encrypt and either the HTTP or ALPN challenge, but where’s the fun in that?).

Here’s the behaviour I need:
1. Stream connections for *.plex.direct to the Plex media server
2. Terminate TLS for primary domain name and proxy those connections internally
3. (Optional) Accept SSH connections on 443 and stream those to OpenSSH

First, create a new HTTPS proxy entry for Plex, and move all of your proxies to listen on an alternate local port (the stream module will own 443). For fun, create a server entry that returns HTTP status code 418 – we’ll use that as a default fallthrough for connections we aren’t expecting.

http {
    server {
        listen 127.0.0.1:8443 ssl http2;
        server_name  wan.example.com;
        # ssl_certificate / ssl_certificate_key directives omitted for brevity
        location / {
          proxy_pass https://home.lan.example.com;
        }
    }
    # New entry for Plex, proxying to the media server's native port
    server {
        listen 127.0.0.1:8443 ssl http2;
        server_name  plex.example.com;
        location / {
          proxy_pass https://plex.lan.example.com:32400;
        }
    }
    # Default fallthrough: anything unexpected gets a 418
    server {
      listen 127.0.0.1:8080 default_server;
      return 418;
    }
}
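That proxy pairs with Plex’s Custom server access URLs setting, which lives under Settings → Network on the media server; with the placeholder hostnames above, the value would look something like:

```
https://plex.example.com:443
```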

Combine that with the Custom server access URLs setting and you’re probably good. But where’s the fun in that? We want maximum flexibility and connectivity from clients, so let’s mix it up with the stream module.

stream {
  log_format stream '$remote_addr - - [$time_local] $protocol '
                    '$status $bytes_sent $bytes_received '
                    '$upstream_addr "$ssl_preread_server_name" '                    
                    '"$ssl_preread_protocol" "$ssl_preread_alpn_protocols"';

  access_log /var/log/nginx/stream.log stream;

  upstream proxy {
    server      127.0.0.1:8443;
  }

  upstream teapot {
    server      127.0.0.1:8080;
  }

  upstream plex {
    server      172.16.10.10:32400;
  }

  upstream ssh {
    server      127.0.0.1:22;
  }

  # No TLS ClientHello (e.g. an SSH banner) leaves $ssl_preread_protocol
  # empty; everything else routes by SNI via $name
  map $ssl_preread_protocol $upstream {
    "" ssh;
    "TLSv1.3" $name;
    "TLSv1.2" $name;
    "TLSv1.1" $name;
    "TLSv1" $name;
    default $name;
  }

  # `hostnames` enables wildcard matching on the SNI name
  map $ssl_preread_server_name $name {
    hostnames;
    *.plex.direct       plex;
    plex.example.com    proxy;
    wan.example.com     proxy;
    default             teapot;
  }

  server {
    listen      443;
    listen      [::]:443;
    proxy_pass  $upstream;
    ssl_preread on;
  }
}

Reading from the bottom, we see that we’re listening on port 443 but not terminating TLS. We enable ssl_preread and proxy_pass via $upstream. That uses the $ssl_preread_protocol map block to identify SSH traffic (no TLS ClientHello) and send it to the local SSH server; all other traffic goes to $name.

$name uses the $ssl_preread_server_name map block, which uses the SNI name to determine which upstream receives the traffic. Because we specify the hostnames parameter, we can use wildcards in our domain matches. Connections for *.plex.direct stream directly to the Plex media server, while those for my domain names are streamed to the HTTPS reverse proxy defined previously, which handles TLS termination. Finally, any connection for a domain I don’t recognize gets a lovely 418 I’m a Teapot response code.
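You can spot-check each branch by hand. This is a sketch rather than a test suite; the hostnames are the placeholders from the config above, and both commands assume the proxy is reachable on port 443:

```shell
# SNI for a recognized domain should reach the HTTPS reverse proxy:
openssl s_client -connect wan.example.com:443 -servername wan.example.com </dev/null

# A connection that never sends a TLS ClientHello should be streamed to sshd:
ssh -p 443 wan.example.com
```

openssl s_client lets you set the SNI independently of the address you dial, which is exactly what the map keys on.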

 Posted at 2:32 AM

Bypassing Bell Home Hub 3000 with a media converter and FreeBSD

FreeBSD
Nov 26, 2020

I recently moved and decided to have Bell install their Fibe FTTH service. Bell provides an integrated Home Hub 3000 (HH3k from now on) unit to terminate the fibre and provide wifi/router functionality. It’s not terrible as ISP-provided units go, and probably perfectly serviceable for regular consumer use, but it’s got some limitations that annoy anal-retentive geeks like me.

I wanted to bypass it. It’ll do PPPoE passthrough, so you can mostly bypass it just by plugging your existing router into the HH3k and configuring your PPPoE settings there. If you want, you can disable the wifi on the HH3k. You can also use the Advanced DMZ setting to assign the public IP via DHCP to a device you designate.

But what if you want to bypass it physically and not deal with this bulky unit at all? Turns out you can get a fibre-to-Ethernet media converter for $40 CAD from Amazon and just use that instead. On your router you’ll need to configure your PPPoE connection to use VLAN 35 on the interface connected to the media converter/fibre connection, but if you’re using pfSense or raw FreeBSD like me, this is simple enough.

Physical Setup:

  1. Buy a media converter. Personally I purchased this product from 10Gtek (I don’t use referral codes or anything).
  2. In the HH3k you’ll find the fibre cable is plugged into a GBIC. Disconnect the fibre cable and you’ll find a little pull-latch on the GBIC you can use to pull it from the HH3k. The GBIC itself is (I believe) authenticated on the Bell network, so don’t break or lose it. Plug the GBIC into the media converter.
  3. Plug the fibre cable into the GBIC.
  4. Plug the Ethernet port of the media converter into the WAN port on your router.

FreeBSD configuration:

  1. Configure your WAN NICs in /etc/rc.conf:
# Tag VLAN 35 on the WAN NIC (creates igb0.35 for the PPPoE session)
vlans_igb0="35"
ifconfig_igb0="inet 192.168.2.254 netmask 255.255.255.0"

Adjust for your NIC type/number. I found I had to assign an IP address to the root NIC before the PPPoE would work over the VLAN interface. I used an IP from the default subnet used by the HH3k. This way if I ever plug the HH3k back in, I’ll be able to connect to it to manage it.

2. Update your mpd5.conf to reference your new VLAN interface:

default:
        load bell
bell:
        create bundle static BellBundle0
        set bundle links BellLink0
        set ipcp ranges 0.0.0.0/0 0.0.0.0/0
        set ipcp disable vjcomp
        set iface route default
        create link static BellLink0 pppoe
        set auth authname blahblah
        set auth password foobar
        set pppoe iface igb0.35
        set pppoe service "bell"
        set link max-redial 0
        set link keep-alive 10 60
        set link enable shortseq
        set link mtu 1492
        set link mru 1492
        set link action bundle BellBundle0
        open

And that’s literally it. Bounce your configuration (or your router) and everything should come up. I found the PPPoE connection was effectively instantaneous in this configuration, where it had taken a bit to light up when the HH3k was in the mix.
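On FreeBSD, “bouncing your configuration” can look like the following sketch (the service and interface names match the examples above; ng0 assumes mpd5’s default netgraph interface numbering):

```shell
service netif restart igb0   # re-creates the igb0.35 VLAN interface
service mpd5 restart         # re-dials the PPPoE session
ifconfig ng0                 # the live session shows up as an ng(4) interface
```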

 Posted at 9:42 PM

My beets Configuration (Nov. 2016 Edition)

Software
Nov 05, 2016

This is mostly for my own convenience. I recently rebuilt a host for managing my beets library, and these are the packages (both deb and pip) that I needed to install for my particular beets config to work. And since that’s not really very helpful to anyone else without my beets config, I’ve included that as well.

This should work for both Ubuntu Trusty and Xenial.

Requirements for beets:

$ sudo apt-get install python-dev python-pip python-gi libchromaprint-tools imagemagick mp3val flac lame gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gir1.2-gstreamer-1.0 libxml2-dev libxslt1-dev zlib1g-dev
$ sudo pip install beets pyacoustid discogs-client pylast requests bs4 isodate lxml

I use two plugins not included by default:
bandcamp
rymgenre
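Both are third-party plugins, so pip won’t install them. The config below points pluginpath at a local directory; the plugin modules just need to be dropped there. A sketch (the module filenames are assumptions):

```shell
# Create the directory named by `pluginpath` in config.yaml
plugindir="$HOME/.config/beets/plugins"
mkdir -p "$plugindir"
# Then copy the plugin modules into it, e.g.:
# cp bandcamp.py rymgenre.py "$plugindir"
```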

Beets config:

#############################################################################
## Beets Configuration file.
## ~/.config/beets/config.yaml
#############################################################################

### Global Options
library: ~/.config/beets/library.blb
directory: /mnt/music/
pluginpath: ~/.config/beets/plugins
ignore: .AppleDouble ._* *~ .DS_Store
per_disc_numbering: true
threaded: yes

# Plugins
plugins: [
  # autotagger extensions
  chroma,
  discogs,
  # metadata plugins
  acousticbrainz,
  embedart,
  fetchart,
  ftintitle,
  lastgenre,
  mbsync,
  replaygain,
  scrub,
  # path format plugins
  bucket,
  inline,
  the,
  # interoperability plugins
  badfiles,
  # misc plugins
  missing,
  info,
  # other plugins
  bandcamp
]

# Import Options
import:
  copy: true
  move: false
  write: true
  delete: false
  resume: ask
  incremental: false
  quiet_fallback: skip
  none_rec_fallback: skip
  timid: false
  languages: en
  log: ~/beets-import.log

# Path options
paths:
  # Albums/A/ASCII Artist Name, The/[YEAR] ASCII Album Name, The [EP]/01 - Track Name.mp3
  default: 'Albums/%bucket{%upper{%left{%the{%asciify{$albumartist}},1}}}/%the{%asciify{$albumartist}}/[%if{$year,$year,0000}] %asciify{$album} %aunique{albumartist album year, albumtype label catalognum albumdisambig}/%if{$multidisc,$disc-}$track - %asciify{$title}'
  # Singles/ASCII Artist Name, The - Track Name.mp3
  singleton: 'Singles/%the{%asciify{$artist}} - %asciify{$title}'
  # Compilations/[YEAR] ASCII Compilation Name, The/01-01 - Track Name.mp3
  comp: 'Compilations/[%if{$year,$year,0000}] %the{%asciify{$album}%if{%aunique, %aunique{albumartist album year, albumtype label catalognum albumdisambig}}}/%if{$multidisc,$disc-}$track - %asciify{$title}'
  # Soundtracks/[YEAR] ASCII Soundtrack Name, The/01 - Track Name.mp3
  albumtype:soundtrack: 'Soundtracks/[%if{$year,$year,0000}] %the{%asciify{$album}%if{%aunique, %aunique{albumartist album year, albumtype label catalognum albumdisambig}}}/%if{$multidisc,$disc-}$track - %asciify{$title}'

### Plugin Options

# Inline plugin multidisc template
item_fields:
  multidisc: 1 if disctotal > 1 else 0

# Collects all special characters into single bucket
bucket:
  bucket_alpha:
    - _
    - A
    - B
    - C
    - D
    - E
    - F
    - G
    - H
    - I
    - J
    - K
    - L
    - M
    - N
    - O
    - P
    - Q
    - R
    - S
    - T
    - U
    - V
    - W
    - X
    - Y
    - Z
  bucket_alpha_regex:
    '_': ^[^A-Z]

# Per album genres selected from a custom list
# My genre-tree.yaml is ever so slightly custom as well
# I found per-album genres in last.fm could be very misleading.
lastgenre:
  canonical: ~/.config/beets/genres-tree.yaml
  whitelist: ~/.config/beets/genres.txt
  min_weight: 20
  source: artist
  fallback: 'Unknown'

# rymgenre doesn't run on import, so I use it as a backup
# for when lastgenre is giving me garbage results.
rymgenre:
  classes: primary
  depth: node

# Fetch fresh album art for new imports
fetchart:
  sources: coverart itunes amazon albumart
  store_source: yes

# I want the option to scrub, but don't feel the need to scrub automatically
scrub:
  auto: no

# Gstreamer is a pain, but still the correct backend
replaygain:
  backend: gstreamer
  overwrite: yes

acoustid:
  apikey: <API_KEY>

echonest:
  apikey: <API_KEY>
  auto: yes
 Posted at 1:01 AM

Fixing Recently Added In XBMC After Re-Importing Music

Software
Sep 19, 2014

I did some rejiggering of how XBMC accesses my NAS which required re-importing all my media. The video library sets the ‘date added’ field to the file modification date of a video, which means the ‘Recently Added’ view of my movies is correct, even though everything only just got re-imported. But the music library takes the literal approach and sorts by the order in which albums are scanned. After re-importing my recently added list was just a reverse alphabetical sort by artist. The video library behaviour seems to me to be the intuitively correct behaviour, and I care because when I’m at home I primarily use the recently added view to listen to music.

After a bit of digging around I determined that the music library sets an idAlbum field for each album which is an integer that increments upwards as albums are scanned, and this is how the recently added view sorts itself. So the solution was to blow the music library away and re-import (again), but this time forcing the order in which XBMC scans the music collection.

Step 1) Get an album list sorted by modification date

My music collection is well organized (I use beets) and all of my albums are in directories that have the release year in square brackets, so the following command gave me a list of all my albums sorted by modification date:

$ find . -type d -name '*[*' -printf '%T@ %p\n' | sort -k 1n
1407992390.0000000000 ./Albums/T/Talking Heads/[1987] More Songs About Buildings and Food
1407992431.0000000000 ./Albums/T/Talking Heads/[1983] Remain in Light
1407992516.0000000000 ./Albums/T/Talking Heads/[1984] Stop Making Sense
1407992594.0000000000 ./Albums/T/Talking Heads/[1986] True Stories

Except that when I reviewed the output I remembered that I’d recently done some updates to the tags on a bunch of albums and so the modification dates were way too recent on a lot of them. I was almost ready to start getting bummed out.

The Real Step 1) Get an album list sorted by creation date

POSIX doesn’t require that a filesystem keep track of file creation date, and the XFS filesystem my music resides on dutifully doesn’t bother to track it. However, my music collection is synced with rsync every night from a server I colo downtown, and that host uses an ext4 filesystem, which does in fact track the creation date. It’s not obvious, because the standard stat command does not return a creation date on Linux, but debugfs can.

Now, here’s where things get complicated, mostly because by this time it was late and while I’m sure there’s a better way to do everything that follows my brain was starting to get sleepy. Please feel free to send me more efficient regexes, or versions of scripts that cut out extra steps, or whatever. I’ll update and give credit.

debugfs is easier to work with if you’re just working with inodes so first generate a list of directories and their inodes:

$ find . -type d -name '*[*' -exec ls -di {} \; > inodelist
$ head -n 5 inodelist
26763273 ./Albums/A/Andy Stott/[2012] Luxury Problems
26763279 ./Albums/A/Arca/[2013] &&&&&
27189257 ./Albums/A/Arcade Fire/[2010] The Suburbs
27189258 ./Albums/A/Arcade Fire/[2004] Funeral
27189256 ./Albums/A/Arcade Fire/[2007] Neon Bible

I found it easiest to generate a separate list of timestamps and then merge the two files (again, I’m sure there’s a better way to do this but I didn’t bother for a one-time use process). First we need a short script that can determine the crtime of the directory:

#!/bin/bash
# get_crtime.sh: print the creation time (crtime) of an inode as epoch seconds
inode=$1
fs="/dev/md0"   # adjust to the filesystem holding your music
crtime=$(sudo debugfs -R 'stat <'"${inode}"'>' "${fs}" 2>/dev/null | grep crtime | cut -d ' ' -f 2 | cut -d ':' -f 1)
printf "%d\n" "${crtime}"
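The extraction pipeline can be sanity-checked offline against a canned line in the shape debugfs prints (the hex timestamp here is made up):

```shell
# A fabricated debugfs `stat` output line:
line='crtime: 0x63f2a1b4:00000000 -- Sun Feb 19 14:24:52 2023'
# Same extraction as get_crtime.sh: field 2 is the raw timestamp,
# and the second cut strips the sub-second part after the colon
crtime=$(echo "$line" | grep crtime | cut -d ' ' -f 2 | cut -d ':' -f 1)
printf '%d\n' "$crtime"   # prints 1676845492, the hex value as epoch seconds
```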

And use that to get yourself a nice sorted list:

$ for i in `cat inodelist | cut -d ' ' -f 1`; do ./get_crtime.sh $i >> crtimelist; done
$ paste -d " " crtimelist inodelist | sort | cut -f2- -d '/' > albumlist
$ head -n 5 albumlist
Albums/P/Prince/[1987] Dirty Mind
Albums/P/Public Enemy/[1988] It Takes a Nation of Millions to Hold Us Back
Albums/M/mum/[2002] Finally We Are No One
Albums/A/Amon Tobin/[1997] Bricolage
Albums/B/Blonde Redhead/[2000] Melody of Certain Damaged Lemons

Congratulations. You now have a list of albums sorted by their creation date. I generated the file with paths relative to the root of my music directory, because it’s easier to work with that way. You’ll clearly need to play with the cut options to get something that works for your particular library of music.
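As a self-contained illustration of that merge-and-sort step, here it is run against a fabricated two-album list (the inode numbers, crtimes, and album names are all made up):

```shell
cd "$(mktemp -d)"
# crtimes and the `ls -di`-style inode list, in matching line order:
printf '%s\n' '1407992431' '1407992390' > crtimelist
printf '%s\n' \
  '26763273 ./Albums/E/Example Artist/[2012] Second Album' \
  '26763279 ./Albums/E/Example Artist/[2010] First Album' > inodelist
# Prepend the crtime, sort on it, then cut everything before the first '/':
paste -d ' ' crtimelist inodelist | sort | cut -f2- -d '/'
```

The older album (smaller crtime) sorts first, and the cut strips the timestamp and inode, leaving paths relative to the music directory.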

Step 2) Scan your directories in the correct order

XBMC has a JSON-RPC API, which includes the helpful AudioLibrary.Scan method, and even a wiki page on how to use it to trigger scans, though we need to pass the optional directory parameter:

$ curl --data-binary '{ "jsonrpc": "2.0", "method": "AudioLibrary.Scan", "params": { "directory": "/mnt/music/Albums/F/FKA twigs/[2013] EP2" }, "id": "1"}' -H 'content-type: application/json;' http://localhost:8080/jsonrpc
{"id":"1","jsonrpc":"2.0","result":"OK"}

In testing I found that a lot of the jsonrpc calls were silently failing – I’d get the OK result for all of them, but watching the logs I could see that not all of the requests were actually being executed. In my first test I used a while loop and fired over 700 requests in a couple seconds and saw that only about 30% of them executed (I didn’t even bother to check if they were in the correct order). I watched the import notification on screen when I imported a single album and saw it took roughly ten seconds to import the album. With that in mind for the second test I waited 20 seconds between each request and I still saw only 80-90% of them executed. I doubt it’s because the previous request was still running because then I’d expect the first test to have only resulted in a single (maybe two) successfully executed requests.

By this time it’s really late and I didn’t care enough to troubleshoot further – I decided to just brute force the matter:

$ while read album; do 
echo $album
curl -s --data-binary '{ "jsonrpc": "2.0", "method": "AudioLibrary.Scan", "params": { "directory": "/mnt/music/'"$album"'" }, "id": "1"}' -H 'content-type: application/json;' http://localhost:8080/jsonrpc > /dev/null
sleep 20
curl -s --data-binary '{ "jsonrpc": "2.0", "method": "AudioLibrary.Scan", "params": { "directory": "/mnt/music/'"$album"'" }, "id": "1"}' -H 'content-type: application/json;' http://localhost:8080/jsonrpc > /dev/null
sleep 20
curl -s --data-binary '{ "jsonrpc": "2.0", "method": "AudioLibrary.Scan", "params": { "directory": "/mnt/music/'"$album"'" }, "id": "1"}' -H 'content-type: application/json;' http://localhost:8080/jsonrpc > /dev/null
sleep 20
done < albumlist

It’s about as elegant as a sledgehammer, but it works. The extra calls to the RPC method are redundant at worst since it does no harm to scan the directory repeatedly but at least you can be reasonably sure the album will get scanned successfully. Run it in screen overnight and when you return in the morning you should have all of your albums imported, in the order in which you acquired them.
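One detail in the loop above worth calling out: the JSON body is single-quoted, and the quotes are briefly closed around a double-quoted album variable, so paths containing spaces and brackets survive word-splitting intact. In isolation (the album path is just an example):

```shell
# Splice a shell variable into a single-quoted JSON body: '...'"$album"'...'
album='Albums/F/FKA twigs/[2013] EP2'
payload='{ "jsonrpc": "2.0", "method": "AudioLibrary.Scan", "params": { "directory": "/mnt/music/'"$album"'" }, "id": "1"}'
echo "$payload"
```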

UPDATE: Even the sledgehammer wasn’t enough for three albums. Evidently it’s easy for the AudioLibrary.Scan method call to be skipped. So I blew away the music library again and used a script to scan each album, this time checking the XBMC logs. You need to enable debug logging for this script to work, but I’ll leave enabling that as an exercise for the user since there are a few ways to do it. Anyway, here’s a better solution for importing:

$ cat import_albums.sh 
#!/bin/bash

while read album; do
  while true; do
    if grep -Fq "$album" ~/.xbmc/temp/xbmc.log; then
      break
    fi
    echo $album
    curl -s --data-binary '{ "jsonrpc": "2.0", "method": "AudioLibrary.Scan", "params": { "directory": "/mnt/music/'"$album"'" }, "id": "1"}' -H 'content-type: application/json;' http://localhost:8080/jsonrpc > /dev/null
    sleep 15
  done
done < $1
$ ./import_albums.sh albumlist

Areas for improvement:
1) That crap with get_crtime.sh and the steps around it, in particular generating a separate crtimelist file and merging it back in. Maybe something that can be called directly from find -exec?
2) The sledgehammer import. Perhaps check the log after making a request and seeing if the DoScan event shows up there before moving on?

 Posted at 12:09 AM

Netatalk 3.1.3 For Debian Wheezy

Software
Jul 16, 2014

I’ve found the netatalk available in Wheezy (version 2.2.2) to be flaky for a while now. Even Jessie only has 2.2.5, while the latest from netatalk.sourceforge.net is 3.1.3. My specific problem was intermittently failing Time Machine backups from the Macs in my house. Netatalk helpfully includes directions on compiling netatalk from source on Wheezy here, but I don’t like having all those dev packages on my fileserver, which means spinning up a build host, creating a deb, yada yada yada. Anyway, no point in going to all that bother and not sharing it.

First make sure you remove the old version: # apt-get remove --purge netatalk

Install the pre-requisites: # apt-get install libdbus-glib-1-2 libmysqlclient18 mysql-common libcrack2 avahi-daemon

Download this: netatalk_3.1.3-1_amd64.deb

And install it: # dpkg -i netatalk_3.1.3-1_amd64.deb

Netatalk v3 uses an entirely new config format, so you’ll have to recreate your config files (hence why we used --purge above). It’s actually way easier to configure now though, so don’t fret too much.
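As an illustration of the new ini-style format, a minimal afp.conf exposing a Time Machine share might look like this (the share name, path, and mimic model value are placeholders, not my actual config):

```
; /etc/netatalk/afp.conf
[Global]
mimic model = TimeCapsule6,106

[Time Machine]
path = /srv/timemachine
time machine = yes
```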

Big important note: I’m providing this purely as a convenience. I accept no liability and provide no guarantees. Use at your own risk.

 Posted at 8:37 PM