Author: Mike

  • Configuring DNS and DHCP For A LAN In 2025

    14 years ago I described how to configure ISC BIND and DHCP(v6) Server on FreeBSD to get DHCP with local domain updates working on a dual stack LAN. However, ISC DHCP Server went End of Life on October 5th, 2022, replaced with their new Kea DHCP server. I also wouldn’t recommend running a non-filtering DNS resolver for your LAN any longer.

    AdGuard Home

    If you’re reading this blog, you’ve almost certainly heard of Pi-hole. However, I’ve found that I prefer AdGuard Home. AdGuard offers paid DNS filtering apps (I happily pay for AdGuard Pro for my iPhone), but their Home product is open source (GPL-3.0) and free. I won’t repeat the official Getting Started docs, except to point out that AdGuard Home is available in FreeBSD ports, so go ahead and install it with pkg install adguardhome.

    There are some configuration changes we’re going to make that cannot be done in the web UI and have to be done directly in the AdGuardHome.yaml config file. I won’t cover everything in the file, just the interesting bits.

    First, we’re going to be specific about which IPs to bind to, so we don’t accidentally create a public resolver, and also because there are LAN IPs on the router we don’t want to bind to (more on this in just a moment).

    http:
      pprof:
        port: 6060
        enabled: false
      address: 172.16.2.1:3000
      session_ttl: 720h
    ...
    dns:
      bind_hosts:
        - 127.0.0.1
        - ::1
        - 172.16.2.1
        - 2001:db8:ffff:aa02::1
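
    Once AdGuard Home is restarted, a quick sanity check with FreeBSD’s sockstat confirms it’s only listening on the addresses from bind_hosts (plus the web UI address):

    # expect only the loopback and LAN addresses listed above
    sockstat -46l | grep -i adguard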

    Your choice of upstream resolver is of course personal preference, but I wanted a non-filtering upstream since I want control of, and visibility into, why requests pass or fail. I’m also Canadian, so I prefer (but don’t require) that my queries stay domestic. I’m also sending requests for my LAN domains to the authoritative DNS server, which you can see is configured on localhost IP 127.0.0.53 and on the similarly numbered alias IPs on my LAN interface (hence the need to be specific about which IPs AdGuard binds to).

      upstream_dns:
        - '# public resolvers'
        - https://private.canadianshield.cira.ca/dns-query
        - https://unfiltered.adguard-dns.com/dns-query
        - '# local network'
        - '[/lan.example.com/]127.0.0.53 172.16.2.53 2001:db8:ffff:aa02::53'
    ...
      trusted_proxies:
        - 127.0.0.0/8
        - ::1/128
        - 172.16.2.1/32
    ...
      local_ptr_upstreams:
        - 172.16.2.53
        - 2001:db8:ffff:aa02::53
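
    Once both AdGuard and the BIND instance described below are running, a few dig queries against AdGuard make a quick smoke test that LAN names go to the authoritative server while everything else heads upstream (names are the examples from this post):

    # LAN name, answered by BIND via the [/lan.example.com/] upstream
    dig @172.16.2.1 router.lan.example.com A +short

    # private reverse lookup, handled by local_ptr_upstreams
    dig @172.16.2.1 -x 172.16.2.1 +short

    # anything else goes out to the public DoH upstreams
    dig @172.16.2.1 www.example.org A +short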

    Lastly, we’re going to configure the webserver for HTTPS and DNS-over-HTTPS (DoH). I use dehydrated to manage my Let’s Encrypt certs, but any tool will do (and is outside the scope of this doc). The important thing to note is that the web UI will now run on port 8453, and will answer DoH queries.

    tls:
      enabled: true
      server_name: router.lan.example.com
      force_https: true
      port_https: 8453
      port_dns_over_tls: 853
      port_dns_over_quic: 853
      port_dnscrypt: 0
      dnscrypt_config_file: ""
      allow_unencrypted_doh: false
      certificate_chain: ""
      private_key: ""
      certificate_path: /usr/local/etc/dehydrated/certs/router.lan.example.com/fullchain.pem
      private_key_path: /usr/local/etc/dehydrated/certs/router.lan.example.com/privkey.pem
      strict_sni_check: false
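
    Assuming your certificate is in place, you can verify DoH end to end with curl’s --doh-url option (curl 7.62 or newer), which resolves the request’s hostname through the resolver you point it at:

    # headers coming back means the DoH endpoint on 8453 is answering queries
    curl -sSI --doh-url https://router.lan.example.com:8453/dns-query https://www.example.org/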

    The rest of the configuration should be done to taste in the web UI. Personally, I find this set of filter lists effective while still having a very low false-positive rate:

    • AdGuard DNS filter
    • AdAway Default Blocklist
    • AdGuard DNS popup Hosts filter
    • HaGeZi’s Threat Intelligence Feeds
    • HaGeZi’s Pro++ Blocklist
    • OISD Blocklist Big

    More than that and it just becomes unwieldy.

    BIND

    Good old BIND. It’ll outlive us all. This part is basically unchanged from when I first described it in 2011, except that I’m going to have BIND listen on 127.0.0.53 and on alias IPs I created on my LAN networks (also using the .53 address) by setting this in /etc/rc.conf:

    ifconfig_igb1="inet 172.16.2.1 netmask 255.255.255.0"
    ifconfig_igb1_ipv6="inet6 2001:db8:ffff:aa02::1 prefixlen 64"
    ifconfig_igb1_aliases="\
      inet 172.16.2.53 netmask 255.255.255.0 \
      inet6 2001:db8:ffff:aa02::53 prefixlen 64"
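
    After applying that (note that restarting the interface will briefly drop the LAN), confirm the .53 aliases exist before pointing anything at them:

    service netif restart igb1
    ifconfig igb1 | grep -E 'inet |inet6 '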

    Next, create an rndc key with rndc-confgen -a -c /usr/local/etc/namedb/rndc.example.com and configure BIND with the following in /usr/local/etc/namedb/named.conf (don’t remove the logging or zones at the bottom of the default named.conf).

    "acl_self" {
      127.0.0.1;
      127.0.0.53;
      172.16.2.1;
      172.16.2.53;
      ::1;
      2001:db8:ffff:aa02::1;
      2001:db8:ffff:aa02::53;
    };
    
    acl "acl_lan" {
      10.42.0.0/16;
      10.43.0.0/16;
      172.16.2.0/24;
      2001:db8:ffff:aa02::/64;
      fe80::/10;
    };
    
    options {
      directory             "/usr/local/etc/namedb/working";
      pid-file              "/var/run/named/pid";
      dump-file             "/var/dump/named_dump.db";
      statistics-file       "/var/stats/named.stats";
      allow-transfer        { acl_lan; };
      allow-notify          { "none"; };
      allow-recursion       { "none"; };
      dnssec-validation     auto;
      auth-nxdomain         no;
      recursion             no;
      listen-on             { 127.0.0.53; 172.16.2.53; };
      listen-on-v6          { 2001:db8:ffff:aa02::53; };
      disable-empty-zone "255.255.255.255.IN-ADDR.ARPA";
      disable-empty-zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
      disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
      version "BIND";
    };
    
    include "/usr/local/etc/namedb/rndc.example.com";
    
    controls {
      inet 127.0.0.53 allow { "acl_self"; "acl_lan"; } keys { "rndc.example.com";};
      inet 172.16.2.53 allow { "acl_self"; "acl_lan"; } keys { "rndc.example.com";};
      inet 2001:db8:ffff:aa02::53 allow { "acl_self"; "acl_lan"; } keys { "rndc.example.com";};
    };
    
    include "/usr/local/etc/namedb/named.zones.local";

    The local zones are configured in /usr/local/etc/namedb/named.zones.local:

    zone "lan.example.com" {
      type master;
      file "../dynamic/lan.example.com";
      update-policy { grant rndc.example.com zonesub ANY; };
    };
    
    zone "2.16.172.in-addr.arpa" {
      type master;
      file "../dynamic/2.16.172.in-addr.arpa";
      update-policy { grant rndc.example.com zonesub ANY; };
    };
    
    zone "2.0.a.a.f.f.f.f.8.B.D.0.1.0.0.2.ip6.arpa" {
      type master;
      file "../dynamic/2001.0db8.ffff.aa02.ip6.arpa";
      update-policy { grant rndc.example.com zonesub ANY; };
    };

    Here’s a starter zone for lan.example.com:

    $ORIGIN .
    $TTL 1200       ; 20 minutes
    lan.example.com      IN SOA  ns0.lan.example.com. admin.example.com. (
                                    2020138511 ; serial
                                    1200       ; refresh (20 minutes)
                                    1200       ; retry (20 minutes)
                                    2419200    ; expire (4 weeks)
                                    3600       ; minimum (1 hour)
                                    )
                            NS      ns0.lan.example.com.
                            A       172.16.2.53
                            AAAA    2001:db8:ffff:aa02::53
    $ORIGIN lan.example.com.
    router                  A       172.16.2.1
                            AAAA    2001:db8:ffff:aa02::1

    An IPv4 reverse zone:

    $ORIGIN .
    $TTL 1200       ; 20 minutes
    2.16.172.in-addr.arpa IN SOA ns0.lan.example.com. admin.example.com. (
                                    2020051192 ; serial
                                    1200       ; refresh (20 minutes)
                                    1200       ; retry (20 minutes)
                                    2419200    ; expire (4 weeks)
                                    3600       ; minimum (1 hour)
                                    )
                            NS      ns0.lan.example.com.
    $ORIGIN 2.16.172.in-addr.arpa.
    1                       PTR     router.lan.example.com.

    And an IPv6 reverse zone:

    $ORIGIN .
    $TTL 1200       ; 20 minutes
    2.0.a.a.f.f.f.f.8.B.D.0.1.0.0.2.ip6.arpa IN SOA ns0.lan.example.com. admin.example.com. (
                                    2020049273 ; serial
                                    1200       ; refresh (20 minutes)
                                    1200       ; retry (20 minutes)
                                    2419200    ; expire (4 weeks)
                                    3600       ; minimum (1 hour)
                                    )
                            NS      ns0.lan.example.com.
    $ORIGIN 0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.a.a.f.f.f.f.8.B.D.0.1.0.0.2.ip6.arpa.
    1.0                     PTR     router.lan.example.com.
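
    Since Kea will be sending RFC2136 updates signed with this key, it’s worth confirming BIND accepts a dynamic update before wiring up DHCP. Here’s a sketch using nsupdate with the key file generated earlier (nsupdate -k reads the same key-clause format that rndc-confgen writes); test.lan.example.com is just a throwaway name:

    nsupdate -k /usr/local/etc/namedb/rndc.example.com <<'EOF'
    server 127.0.0.53
    zone lan.example.com
    update add test.lan.example.com. 1200 A 172.16.2.99
    send
    EOF

    # confirm the record landed
    dig @127.0.0.53 test.lan.example.com A +short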

    Kea DHCP Server

    The final piece to this is the Kea DHCP server. It’s still from ISC, but it’s a from-scratch implementation of DHCP and DHCPv6 that uses modern designs and tools. We won’t be using many of the new bells and whistles, but there are a couple of things we can do now that we couldn’t with ISC DHCP.

    The first thing you’ll notice is that the Kea config files are JSON, and there are four of them. First up is kea-dhcp4.conf, where we configure our IPv4 DHCP options and pool, and also the options necessary to enable dynamic updating of our LAN domain via RFC2136 DDNS updates. Note that because I had an existing zone that had been updated by ISC DHCP and other stuff, I set "ddns-conflict-resolution-mode": "no-check-with-dhcid". You can find more info here.

    {
      "Dhcp4": {
        "ddns-send-updates": true,
        "ddns-conflict-resolution-mode": "no-check-with-dhcid",
        "hostname-char-set": "[^A-Za-z0-9.-]",
        "hostname-char-replacement": "x",
        "interfaces-config": {
          "interfaces": [
            "igb1/172.16.2.1"
          ]
        },
        "dhcp-ddns": {
          "enable-updates": true
        },
        "subnet4": [
          {
            "id": 1,
            "subnet": "172.16.2.0/24",
            "authoritative": true,
            "interface": "igb1",
            "ddns-qualifying-suffix": "lan.example.com",
            "pools": [
              {
                "pool": "172.16.2.129 - 172.16.2.254"
              }
            ],
            "option-data": [
              {
                "name": "routers",
                "data": "172.16.2.1"
              },
              {
                "name": "domain-name-servers",
                "data": "172.16.2.1"
              },
              {
                "name": "domain-name",
                "data": "lan.example.com"
              },
              {
                "name": "ntp-servers",
                "data": "172.16.2.1"
              }
            ],
            "reservations": [
              {
                "hw-address": "aa:bb:cc:dd:ee:ff",
                "ip-address": "172.16.2.2",
                "hostname": "foobar"
              }
            ]
          }
        ],
        "loggers": [
          {
            "name": "kea-dhcp4",
            "output-options": [
              {
                "output": "syslog"
              }
            ],
            "severity": "INFO",
            "debuglevel": 0
          }
        ]
      }
    }
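
    Kea can validate a config file without starting the daemon, which is handy while iterating; the same -t flag works for kea-dhcp6 and kea-dhcp-ddns (the path assumes the FreeBSD package’s default config directory):

    kea-dhcp4 -t /usr/local/etc/kea/kea-dhcp4.conf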

    The kea-dhcp6.conf file is basically identical, except IPv6 flavoured. One nice thing about Kea is you can set a DHCPv6 reservation by MAC address, which is something you could not do with ISC DHCPv6 Server.

    {
      "Dhcp6": {
        "ddns-send-updates": true,
        "ddns-conflict-resolution-mode": "no-check-with-dhcid",
        "hostname-char-set": "[^A-Za-z0-9.-]",
        "hostname-char-replacement": "x",
        "dhcp-ddns": {
          "enable-updates": true
        },
        "interfaces-config": {
          "interfaces": [
            "igb1"
          ]
        },
        "subnet6": [
          {
            "id": 1,
            "subnet": "2001:db8:ffff:aa02::/64",
            "interface": "igb1",
            "rapid-commit": true,
            "ddns-qualifying-suffix": "lan.example.com",
            "pools": [
              {
                "pool": "2001:db8:ffff:aa02:ffff::/80"
              }
            ],
            "option-data": [
              {
                "name": "dns-servers",
                "data": "2001:db8:ffff:aa02::1"
              }
            ],
            "reservations": [
              {
                "hw-address": "aa:bb:cc:dd:ee:ff",
                "ip-addresses": [
                  "2001:db8:ffff:aa02::2"
                ],
                "hostname": "foobar"
              }
            ]
          }
        ],
        "loggers": [
          {
            "name": "kea-dhcp6",
            "output-options": [
              {
                "output": "syslog"
              }
            ],
            "severity": "INFO",
            "debuglevel": 0
          }
        ]
      }
    }

    Lastly, we have kea-dhcp-ddns.conf, which configures how the zones will actually be updated. Note that I’m connecting to BIND on 127.0.0.53.

    {
      "DhcpDdns": {
        "tsig-keys": [
          {
            "name": "rndc.example.com",
            "algorithm": "hmac-sha256",
            "secret": "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz"
          }
        ],
        "forward-ddns": {
          "ddns-domains": [
            {
              "name": "lan.example.com.",
              "key-name": "rndc.example.com",
              "dns-servers": [
                {
                  "ip-address": "127.0.0.53",
                  "port": 53
                }
              ]
            }
          ]
        },
        "reverse-ddns": {
          "ddns-domains": [
            {
              "name": "2.16.172.in-addr.arpa.",
              "key-name": "rndc.example.com",
              "dns-servers": [
                {
                  "ip-address": "127.0.0.53",
                  "port": 53
                }
              ]
            },
            {
              "name": "2.0.a.a.f.f.f.f.8.B.D.0.1.0.0.2.ip6.arpa.",
              "key-name": "rndc.example.com",
              "dns-servers": [
                {
                  "ip-address": "127.0.0.53",
                  "port": 53
                }
              ]
            }
          ]
        },
        "loggers": [
          {
            "name": "kea-dhcp-ddns",
            "output-options": [
              {
                "output": "syslog"
              }
            ],
            "severity": "INFO",
            "debuglevel": 0
          }
        ]
      }
    }
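
    The TSIG secret above has to match a key BIND trusts. I reused the rndc key here, but if you’d rather mint a dedicated DDNS key, tsig-keygen (shipped with BIND) emits a ready-to-include key clause. Once all three files are in place, enable and start the services; the rc script and rcvar names below assume the FreeBSD kea package, so adjust if yours differ:

    # hypothetical dedicated key name; paste the resulting secret into both
    # named.conf and kea-dhcp-ddns.conf if you go this route
    tsig-keygen -a hmac-sha256 ddns.example.com

    sysrc kea_enable=YES
    service kea start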

    Extra Credit: Mobile DNS over HTTPS (DoH)

    I mentioned earlier that I pay for AdGuard Pro on my phone. Part of why I do that is it uses the MDM API in iOS to let you force your DNS to a DoH provider, including a custom one. Perhaps one you’re hosting yourself.

    I’m already running an nginx reverse proxy on my router, so let’s get mobile DoH setup. This is a simplified configuration and you’ll need to ensure you’ve got HTTPS properly configured, which is (again) outside the scope of this post.

    Note that I proxy the request to router.lan.example.com, which resolves to the LAN IP 172.16.2.1 rather than localhost, because we configured AdGuard Home to run its HTTP server on 172.16.2.1.

        server {
            listen 443 ssl;
            server_name  dns.example.com;
            location / {
                return 418;
            }
            location /dns-query {
                proxy_pass https://router.lan.example.com:8453/dns-query;
                proxy_set_header X-Real-IP  $proxy_protocol_addr;
                proxy_set_header X-Forwarded-For $proxy_protocol_addr;
            }
        }
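
    From outside the LAN you can check both behaviours, the catch-all 418 and the proxied DoH endpoint (again leaning on curl’s --doh-url, with the example hostnames from above):

    # expect HTTP 418 from the default location
    curl -sI https://dns.example.com/

    # expect the request to succeed with names resolved through the proxied /dns-query
    curl -sSI --doh-url https://dns.example.com/dns-query https://www.example.org/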

    Conclusion

    That should do it. You’ve now got filtered DNS resolution for the LAN. You’ve got an authoritative LAN domain. You’ve got a modern DHCP service. And you’ve even got filtered DNS resolution when you’re out of the house.

  • Zero Trust K3s Network With Cilium

    I wanted to implement full zero-trust networking within my k3s cluster, which uses the Cilium CNI. Cilium provides custom CiliumClusterwideNetworkPolicy and CiliumNetworkPolicy resources, which extend what is possible with standard Kubernetes NetworkPolicy resources.

    Cilium defaults to allowing traffic, but once a policy is applied to an endpoint, it switches to denying any connection not explicitly allowed. Note that this is direction dependent, so ingress and egress are treated separately.

    Zero trust policies require you to control traffic in both directions. Not only does your database need to accept traffic from your app, but your app has to allow the connection to the database.

    This is tedious, and if you don’t get it right it will break your cluster and your ability to tell what you’re missing. So I figured I’d document the policies required to keep your cluster functional.

    Note that my k3s cluster has been deployed with --disable-network-policy, --disable-kube-proxy, --disable-servicelb, and --disable-traefik, because these services are provided by Cilium (or ingress-nginx, in the case of traefik).

    Lastly, while the policies below apply to k3s, they’re probably a good starting point for other clusters – the specifics will be different, but you’re always going to want to allow traffic to your DNS service, etc.

    Hubble UI

    Before attempting any network policies, ensure you’ve got hubble ui and hubble observe working. You should verify that the endpoints and ports used in the policies below match your cluster.
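
    When something breaks after applying a policy, the fastest way to find the missing allow is to watch dropped flows while you reproduce the problem:

    hubble observe --verdict DROPPED --follow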

    Cluster Wide Policies

    These policies are applied cluster wide, without regard for namespace boundaries.

    Default Deny

    Does what it says on the tin.

    apiVersion: "cilium.io/v2"
    kind: CiliumClusterwideNetworkPolicy
    metadata:
      name: "default-deny"
    spec:
      description: "Empty ingress and egress policy to enforce default-deny on all endpoints"
      endpointSelector:
        {}
      ingress:
      - {}
      egress:
      - {}
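
    Once this is applied, every endpoint should report policy enforcement in both directions. One way to confirm (assuming, like me, Cilium lives in the cilium namespace and the daemonset keeps its default name) is to list endpoints from one of the agent pods:

    # on newer Cilium images the in-pod binary is cilium-dbg rather than cilium
    kubectl -n cilium exec ds/cilium -- cilium endpoint list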

    Allow Health Checks

    Required to allow cluster health checks to pass.

    apiVersion: "cilium.io/v2"
    kind: CiliumClusterwideNetworkPolicy
    metadata:
      name: "health-checks"
    spec:
      endpointSelector:
        matchLabels:
          'reserved:health': ''
      ingress:
        - fromEntities:
          - remote-node
      egress:
        - toEntities:
          - remote-node

    Allow ICMP

    ICMP is useful with IPv4, and absolutely necessary for IPv6. This policy allows select ICMP and ICMPv6 request types globally, both within and outside the cluster.

    apiVersion: "cilium.io/v2"
    kind: CiliumClusterwideNetworkPolicy
    metadata:
      name: "allow-icmp"
    spec:
      description: "Policy to allow select ICMP traffic globally"
      endpointSelector:
        {}
      ingress:
      - fromEntities:
        - all
        icmps:
        - fields:
          - type: EchoRequest
            family: IPv4
          - type: EchoReply
            family: IPv4
          - type: DestinationUnreachable
            family: IPv4
          - type: TimeExceeded
            family: IPv4
          - type: ParameterProblem
            family: IPv4
          - type: Redirect 
            family: IPv4
          - type: EchoRequest
            family: IPv6
          - type: DestinationUnreachable
            family: IPv6
          - type: TimeExceeded
            family: IPv6
          - type: ParameterProblem
            family: IPv6
          - type: RedirectMessage
            family: IPv6
          - type: PacketTooBig
            family: IPv6
          - type: MulticastListenerQuery
            family: IPv6
          - type: MulticastListenerReport
            family: IPv6
      egress:
      - toEntities:
        - all
        icmps:
        - fields:
          - type: EchoRequest
            family: IPv4
          - type: EchoReply
            family: IPv4
          - type: DestinationUnreachable
            family: IPv4
          - type: TimeExceeded
            family: IPv4
          - type: ParameterProblem
            family: IPv4
          - type: Redirect 
            family: IPv4
          - type: EchoRequest
            family: IPv6
          - type: EchoReply
            family: IPv6
          - type: DestinationUnreachable
            family: IPv6
          - type: TimeExceeded
            family: IPv6
          - type: ParameterProblem
            family: IPv6
          - type: RedirectMessage
            family: IPv6
          - type: PacketTooBig
            family: IPv6
          - type: MulticastListenerQuery
            family: IPv6
          - type: MulticastListenerReport
            family: IPv6

    Allow Kube DNS

    This pair of policies allows the cluster to query DNS.

    apiVersion: "cilium.io/v2"
    kind: CiliumClusterwideNetworkPolicy
    metadata:
      name: "allow-to-kubedns-ingress"
    spec:
      description: "Policy for ingress allow to kube-dns from all Cilium managed endpoints in the cluster"
      endpointSelector:
        matchLabels:
          k8s:io.kubernetes.pod.namespace: kube-system
          k8s-app: kube-dns
      ingress:
      - fromEndpoints:
        - {}
        toPorts:
        - ports:
          - port: "53"
            protocol: UDP
    ---
    apiVersion: "cilium.io/v2"
    kind: CiliumClusterwideNetworkPolicy
    metadata:
      name: "allow-to-kubedns-egress"
    spec:
      description: "Policy for egress allow to kube-dns from all Cilium managed endpoints in the cluster"
      endpointSelector:
        {}
      egress:
      - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
        toPorts:
        - ports:
          - port: "53"
            protocol: UDP

    Kubernetes Services

    These policies are applied to the standard kubernetes services running in the kube-system namespace.

    Kube DNS

    Kube DNS (or CoreDNS in some k8s distros) needs to talk to the k8s API server and also to DNS resolvers outside the cluster.

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: kube-dns
      namespace: kube-system
    spec:
      endpointSelector:
        matchLabels:
          k8s:io.kubernetes.pod.namespace: kube-system
          k8s-app: kube-dns
      ingress:
      - fromEntities:
        - host
        toPorts:
        - ports:
          - port: "8080"
            protocol: TCP
          - port: "8181"
            protocol: TCP
      egress:
      - toEntities:
        - world
        toPorts:
        - ports:
          - port: "53"
            protocol: UDP
      - toEntities:
        - host
        toPorts:
        - ports:
          - port: "6443"
            protocol: TCP
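
    With the cluster-wide allows plus this policy in place, pod DNS should work end to end again, which is easy to confirm with a throwaway pod:

    kubectl run dnstest --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local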

    Metrics Server

    The metrics service needs to talk to most of the k8s services.

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: metrics-server
      namespace: kube-system
    spec:
      endpointSelector:
        matchLabels:
          k8s:io.kubernetes.pod.namespace: kube-system
          k8s-app: metrics-server
      ingress:
      - fromEntities:
        - host
        - remote-node
        - kube-apiserver
        toPorts:
        - ports:
          - port: "10250"
            protocol: TCP
      egress:
      - toEntities:
        - host
        - kube-apiserver
        - remote-node
        toPorts:
        - ports:
          - port: "10250"
            protocol: TCP
      - toEntities:
        - kube-apiserver
        toPorts:
        - ports:
          - port: "6443"
            protocol: TCP

    Local Path Provisioner

    The local path provisioner only seems to talk to the k8s API server.

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: local-path-provisioner
      namespace: kube-system
    spec:
      endpointSelector:
        matchLabels:
          k8s:io.kubernetes.pod.namespace: kube-system
          app: local-path-provisioner
      egress:
      - toEntities:
        - host
        - kube-apiserver
        toPorts:
        - ports:
          - port: "6443"
            protocol: TCP

    Cilium Services

    These policies apply to the Cilium services themselves. I deployed mine to the cilium namespace, so adjust as necessary if you deployed Cilium to the kube-system namespace.

    Hubble Relay

    The hubble-relay service needs to talk to all cilium and hubble components in order to consolidate a cluster-wide view.

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      namespace: cilium
      name: hubble-relay
    spec:
      endpointSelector:
        matchLabels:
          app.kubernetes.io/name: hubble-relay
      ingress:
      - fromEntities:
          - host
        toPorts:
        - ports:
          - port: "4222"
            protocol: TCP
          - port: "4245"
            protocol: TCP
      - fromEndpoints:
        - matchLabels:
            app.kubernetes.io/name: hubble-ui
        toPorts:
        - ports:
          - port: "4245"
            protocol: TCP
      egress:
      - toEntities:
        - host
        - remote-node
        - kube-apiserver
        toPorts:
          - ports:
            - port: "4244"
              protocol: TCP

    Hubble UI

    The hubble-ui provides the tools necessary to actually observe traffic in the cluster.

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      namespace: cilium
      name: hubble-ui
    spec:
      endpointSelector:
        matchLabels:
          app.kubernetes.io/name: hubble-ui
      ingress:
      - fromEntities:
          - host
        toPorts:
        - ports:
          - port: "8081"
            protocol: TCP
      egress:
      - toEndpoints:
        - matchLabels:
            app.kubernetes.io/name: hubble-relay
        toPorts:
          - ports:
            - port: "4245"
              protocol: TCP
      - toEntities:
        - kube-apiserver
        toPorts:
          - ports:
            - port: "6443"
              protocol: TCP

    Cert Manager

    These policies will help if you’re using cert-manager.

    Cert Manager

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      namespace: cert-manager
      name: cert-manager
    spec:
      endpointSelector:
        matchLabels:
          app.kubernetes.io/name: cert-manager
      ingress:
      - fromEntities:
          - host
        toPorts:
        - ports:
          - port: "9403"
            protocol: TCP
      egress:
      - toEntities:
        - kube-apiserver
        toPorts:
          - ports:
            - port: "6443"
              protocol: TCP
      - toEntities:
        - world
        toPorts:
          - ports:
            - port: "443"
              protocol: TCP
            - port: "53"
              protocol: UDP

    Webhook

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      namespace: cert-manager
      name: webhook
    spec:
      endpointSelector:
        matchLabels:
          app.kubernetes.io/name: webhook
      ingress:
      - fromEntities:
          - host
        toPorts:
        - ports:
          - port: "6080"
            protocol: TCP
      - fromEntities:
          - kube-apiserver
        toPorts:
        - ports:
          - port: "10250"
            protocol: TCP
      egress:
      - toEntities:
        - kube-apiserver
        toPorts:
          - ports:
            - port: "6443"
              protocol: TCP

    CA Injector

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      namespace: cert-manager
      name: cainjector
    spec:
      endpointSelector:
        matchLabels:
          app.kubernetes.io/name: cainjector
      egress:
      - toEntities:
        - kube-apiserver
        toPorts:
          - ports:
            - port: "6443"
              protocol: TCP

    External DNS

    This policy will allow external-dns to communicate with API-driven DNS services. To update local DNS services via RFC2136 updates instead, change the world egress port from 443 TCP to 53 UDP.

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      namespace: external-dns
      name: external-dns
    spec:
      endpointSelector:
        matchLabels:
          app.kubernetes.io/name: external-dns
      ingress:
      - fromEntities:
          - host
        toPorts:
        - ports:
          - port: "7979"
            protocol: TCP
      egress:
      - toEntities:
        - kube-apiserver
        toPorts:
          - ports:
            - port: "6443"
              protocol: TCP
      - toEntities:
        - world
        toPorts:
          - ports:
            - port: "443"
              protocol: TCP

    Ingress-Nginx & OAuth2 Proxy

    These policies will be helpful if you use ingress-nginx and oauth2-proxy. Note that I deployed them to their own namespaces, so you may need to adjust if you deployed them to the same namespace.

    Ingress-Nginx

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      namespace: ingress-nginx
      name: ingress-nginx
    spec:
      endpointSelector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
      ingress:
      - fromEntities:
          - kube-apiserver
        toPorts:
        - ports:
          - port: "8443"
            protocol: TCP
      - fromEntities:
          - host
        toPorts:
        - ports:
          - port: "10254"
            protocol: TCP
      - fromEntities:
          - world
        toPorts:
        - ports:
          - port: "80"
            protocol: TCP
          - port: "443"
            protocol: TCP
      egress:
      - toEntities:
        - kube-apiserver
        toPorts:
          - ports:
            - port: "6443"
              protocol: TCP
      - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: oauth2-proxy
            app.kubernetes.io/name: oauth2-proxy
        toPorts:
        - ports:
          - port: "4180"
            protocol: TCP

    OAuth2 Proxy

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      namespace: oauth2-proxy
      name: oauth2-proxy
    spec:
      endpointSelector:
        matchLabels:
          app.kubernetes.io/name: oauth2-proxy
      ingress:
      - fromEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: ingress-nginx
            app.kubernetes.io/name: ingress-nginx
        toPorts:
        - ports:
          - port: "4180"
            protocol: TCP
      - fromEntities:
        - host
        toPorts:
        - ports:
          - port: "4180"
            protocol: TCP
      egress:
      - toEntities:
        - world
        toPorts:
          - ports:
            - port: "443"
              protocol: TCP

    Conclusion

    These policies should get your cluster off the ground (or close to it). You’ll still need to add additional policies for your actual workloads (and probably extend the ingress-nginx one).

  • Plex Behind An Nginx Reverse Proxy: The Hard Way

    Did IT deploy a new web filter at work? Is it preventing you from streaming music to drown out the droning of your co-workers taking meetings in your open plan office? Have you got a case of the Mondays?

    That was the situation I found myself in recently. By default, Plex listens on port 32400, though it’ll happily use any port, and it plays nice with gateways that support UPnP/NAT-PMP by picking a random public port to forward. That random port was the source of my problem. The new web filter doesn’t mind the Plex domain, but it doesn’t like connections that aren’t on ports 80 or 443 – not even 22, and certainly not 32400.

    Time for a reverse proxy. There’s lots of documentation about putting Plex behind a reverse proxy out there, but as is often the case with me, I had some additional requirements that complicated things a bit.

    I already run a reverse proxy on my public IP that terminates TLS for a few services I host internally on my LAN behind an OAuth proxy. And by default, Plex clients want to connect directly to the media server via the plex.direct domain, which I don’t control and for which I can’t easily create TLS certificates (in truth, I probably could using Let’s Encrypt and either the HTTP or ALPN challenge, but where’s the fun in that?).

    Here’s the behaviour I need:
    1. Stream connections for *.plex.direct to the Plex media server
    2. Terminate TLS for my primary domain name and proxy those connections internally
    3. (Optional) Accept SSH connections on 443 and stream those to OpenSSH

    First, create a new HTTPS proxy entry for plex, and update all of your proxies to use an alternate port. For fun, create a server entry that returns HTTP status code 418 – we’ll use that as a default fallthrough for connections we aren’t expecting.

    http {
        server {
            listen 127.0.0.1:8443 ssl http2;
            server_name  wan.example.com;
            location / {
              proxy_pass https://home.lan.example.com;
            }
        }
        server {
            listen 127.0.0.1:8443 ssl http2;
            server_name  plex.example.com;
            location / {
              proxy_pass https://plex.lan.example.com:32400;
            }
        }
        server {
          listen 127.0.0.1:8080 default_server;
          return 418;
        }
    }

    Combine that with the Custom server access URLs setting and you’re probably good. But where’s the fun in that? We want maximum flexibility and connectivity from clients, so let’s mix it up with the stream module.

    stream {
      log_format stream '$remote_addr - - [$time_local] $protocol '
                        '$status $bytes_sent $bytes_received '
                        '$upstream_addr "$ssl_preread_server_name" '                    
                        '"$ssl_preread_protocol" "$ssl_preread_alpn_protocols"';
    
      access_log /var/log/nginx/stream.log stream;
    
      upstream proxy {
        server      127.0.0.1:8443;
      }
    
      upstream teapot {
        server      127.0.0.1:8080;
      }
    
      upstream plex {
        server      172.16.10.10:32400;
      }
    
      upstream ssh {
        server      127.0.0.1:22;
      }
    
      map $ssl_preread_protocol $upstream {
        "" ssh;
        "TLSv1.3" $name;
        "TLSv1.2" $name;
        "TLSv1.1" $name;
        "TLSv1" $name;
        default $name;
      }
    
      map $ssl_preread_server_name $name {
        hostnames;
        *.plex.direct       plex;
        plex.example.com    proxy;
        wan.example.com     proxy;
        default             teapot;
      }
    
      server {
        listen      443;
        listen      [::]:443;
        proxy_pass  $upstream;
        ssl_preread on;
      }
    }

    Reading from the bottom we see that we’re listening on port 443, but not terminating TLS. We enable ssl_preread, and proxy_pass via $upstream. That uses the $ssl_preread_protocol map block to identify SSH traffic and send that to the local SSH server, otherwise traffic goes to $name.

    $name uses the $ssl_preread_server_name map block, which uses the SNI name to determine which proxy to send the traffic to. Because we specify the hostnames variable, we can use wildcards in our domain matches. Connections for *.plex.direct stream directly to the Plex media server, while those for my domain name are streamed to the HTTPS reverse proxy I defined previously, which handles the TLS termination. Finally, any connection for a domain I don’t recognize gets a lovely 418 I’m a Teapot response code.
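
    You can exercise each branch of the map from outside without a browser: openssl’s -servername flag sets the SNI so you can see which backend answers, and plain SSH on 443 should land on the local OpenSSH (hostnames are the examples from the config above):

    # terminated by the internal HTTPS proxy for plex.example.com
    openssl s_client -connect wan.example.com:443 -servername plex.example.com </dev/null 2>/dev/null | grep subject

    # no TLS ClientHello, so the "" branch streams this to OpenSSH
    ssh -p 443 wan.example.com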

  • Bypassing Bell Home Hub 3000 with a media converter and FreeBSD

    I recently moved and decided to have Bell install their Fibe FTTH service. Bell provides an integrated Home Hub 3000 (HH3k from now on) unit to terminate the fibre and provide wifi/router functionality. It’s not terrible as these ISP-provided units go, and it’s probably serviceable enough for regular consumer use, but it’s got some limitations that annoy anal-retentive geeks like me.

    I wanted to bypass it. It’ll do PPPoE passthrough, so you can mostly bypass it just by plugging your existing router into the HH3k and configuring your PPPoE settings. If you want to, you can disable the wifi on the HH3k. You can also use the Advanced DMZ setting to assign a public IP via DHCP to a device you designate.

    But what if you want to bypass it physically and not deal with this bulky unit at all? Turns out you can get a fibre-to-Ethernet media converter for about $40 CAD from Amazon and just use that instead. On your router you’ll need to configure your PPPoE connection to use VLAN 35 on the interface connected to the media converter/fibre connection, but if you’re using pfSense or raw FreeBSD like me, this is simple enough.

    Physical Setup:

    1. Buy a media converter. Personally I purchased this product from 10Gtek (I don’t use referral codes or anything).
    2. In the HH3k you’ll find the fibre cable is plugged into a GBIC. Disconnect the fibre cable and you’ll find a little pull-latch on the GBIC you can use to pull it out of the HH3k. The GBIC itself is (I believe) authenticated on the Bell network, so don’t break or lose it. Plug the GBIC into the media converter.
    3. Plug the fibre cable into the GBIC.
    4. Plug the Ethernet port of the media converter into the WAN port on your router.

    FreeBSD configuration:

    1. Configure your WAN NICs in /etc/rc.conf:
    vlans_igb0=35
    ifconfig_igb0="inet 192.168.2.254 netmask 255.255.255.0"

    Adjust for your NIC type/number. I found I had to assign an IP address to the root NIC before the PPPoE would work over the VLAN interface. I used an IP from the default subnet used by the HH3k. This way if I ever plug the HH3k back in, I’ll be able to connect to it to manage it.

    2. Update your mpd5.conf to reference your new VLAN interface:

    default:
            load bell
    bell:
            create bundle static BellBundle0
            set bundle links BellLink0
            set ipcp ranges 0.0.0.0/0 0.0.0.0/0
            set ipcp disable vjcomp
            set iface route default
            create link static BellLink0 pppoe
            set auth authname blahblah
            set auth password foobar
            set pppoe iface igb0.35
            set pppoe service "bell"
            set link max-redial 0
            set link keep-alive 10 60
            set link enable shortseq
            set link mtu 1492
            set link mru 1492
            set link action bundle BellBundle0
            open

    And that’s literally it. Bounce your configuration (or your router) and everything should come up. I found the PPPoE connection was effectively instantaneous in this configuration, whereas it had taken a bit to light up when the HH3k was in the mix.
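
    A couple of quick checks if it doesn’t come up immediately: confirm the VLAN interface exists and that mpd5 established its bundle interface (mpd5 creates an ngN netgraph interface, ng0 by default, once the PPPoE session is up):

    ifconfig igb0.35
    ifconfig ng0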

  • My beets Configuration (Nov. 2016 Edition)

    This is mostly for my own convenience. I recently rebuilt a host for managing my beets library, and these are the packages (both deb and pip) that I needed to install for my particular beets config to work. And since that’s not really very helpful to anyone else without my beets config, I’ve included that as well.

    This should work for both Ubuntu trusty and xenial.

    Requirements for beets:

    $ sudo apt-get install python-dev python-pip python-gi libchromaprint-tools imagemagick mp3val flac lame gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gir1.2-gstreamer-1.0 libxml2-dev libxslt1-dev zlib1g-dev
    $ sudo pip install beets pyacoustid discogs-client pylast requests bs4 isodate lxml
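
    With the packages installed and the config below dropped into ~/.config/beets/config.yaml, day-to-day use is the usual beets workflow, e.g.:

    $ beet import /path/to/new/music
    $ beet stats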

    I use two plugins not included by default:
    bandcamp
    rymgenre

    Beets config:

    ############################################################################
    ## Beets Configuration file.
    ## ~/.config/beets/config.yaml
    #############################################################################
    
    ### Global Options
    library: ~/.config/beets/library.blb
    directory: /mnt/music/
    pluginpath: ~/.config/beets/plugins
    ignore: .AppleDouble ._* *~ .DS_Store
    per_disc_numbering: true
    threaded: yes
    
    # Plugins
    plugins: [
    # autotagger extensions
    chroma,
    discogs,
    # metadata plugins
    acousticbrainz,
    embedart,
    fetchart,
    ftintitle,
    lastgenre,
    mbsync,
    replaygain,
    scrub,
    # path format plugins
    bucket,
    inline,
    the,
    # interoperability plugins
    badfiles,
    # misc plugins
    missing,
    info,
    # other plugins
    bandcamp
    ]
    
    # Import Options
    import:
      copy: true
      move: false
      write: true
      delete: false
      resume: ask
      incremental: false
      quiet_fallback: skip
      none_rec_fallback: skip
      timid: false
      languages: en
      log: ~/beets-import.log
    
    # Path options
    paths:
      # Albums/A/ASCII Artist Name, The/[YEAR] ASCII Album Name, The [EP]/01 - Track Name.mp3
      default: 'Albums/%bucket{%upper{%left{%the{%asciify{$albumartist}},1}}}/%the{%asciify{$albumartist}}/[%if{$year,$year,0000}] %asciify{$album} %aunique{albumartist album year, albumtype label catalognum albumdisambig}/%if{$multidisc,$disc-}$track - %asciify{$title}'
      # Singles/ASCII Artist Name, The - Track Name.mp3
      singleton: 'Singles/%the{%asciify{$artist}} - %asciify{$title}'
      # Compilations/[YEAR] ASCII Compilation Name, The/01-01 - Track Name.mp3
      comp: 'Compilations/[%if{$year,$year,0000}] %the{%asciify{$album}%if{%aunique, %aunique{albumartist album year, albumtype label catalognum albumdisambig}}}/%if{$multidisc,$disc-}$track - %asciify{$title}'
      # Soundtracks/[YEAR] ASCII Soundtrack Name, The/01 - Track Name.mp3
      albumtype:soundtrack: 'Soundtracks/[%if{$year,$year,0000}] %the{%asciify{$album}%if{%aunique, %aunique{albumartist album year, albumtype label catalognum albumdisambig}}}/%if{$multidisc,$disc-}$track - %asciify{$title}'
    
    ### Plugin Options
    
    # Inline plugin multidisc template
    item_fields:
      multidisc: 1 if disctotal > 1 else 0
    
    # Collects all special characters into single bucket
    bucket:
      bucket_alpha:
        - _
        - A
        - B
        - C
        - D
        - E
        - F
        - G
        - H
        - I
        - J
        - K
        - L
        - M
        - N
        - O
        - P
        - Q
        - R
        - S
        - T
        - U
        - V
        - W
        - X
        - Y
        - Z
      bucket_alpha_regex:
        '_': ^[^A-Z]
    
    # Per album genres selected from a custom list
    # My genre-tree.yaml is ever so slightly custom as well
    # I found per-album genres in last.fm could be very misleading.
    lastgenre:
      canonical: ~/.config/beets/genres-tree.yaml
      whitelist: ~/.config/beets/genres.txt
      min_weight: 20
      source: artist
      fallback: 'Unknown'
    
    # rymgenre doesn't run on import, so I use it as a backup
    # for when lastgenre is giving me garbage results.
    rymgenre:
      classes: primary
      depth: node
    
    # Fetch fresh album art for new imports
    fetchart:
      sources: coverart itunes amazon albumart
      store_source: yes
    
    # I want the option to scrub, but don't feel the need to scrub automatically
    scrub:
      auto: no
    
    # Gstreamer is a pain, but still the correct backend
    replaygain:
      backend: gstreamer
      overwrite: yes
    
    acoustid:
      apikey: <API_KEY>

    echonest:
      apikey: <API_KEY>
      auto: yes