Category: Networking

Posts related to networking.

  • Configuring DNS and DHCP For A LAN In 2025

    Configuring DNS and DHCP For A LAN In 2025

    14 years ago I described how to configure ISC BIND and DHCP(v6) Server on FreeBSD to get DHCP with local domain updates working on a dual stack LAN. However, ISC DHCP Server went End of Life on October 5th, 2022, replaced with their new Kea DHCP server. I also wouldn’t recommend running a non-filtering DNS resolver for your LAN any longer.

    AdGuard Home

    If you’re reading this blog, you’ve almost certainly heard of Pi-hole. However, I’ve found that I prefer AdGuard Home. AdGuard offers paid DNS filtering apps (I happily pay for AdGuard Pro for my iPhone), but their Home product is open source (GPL3) and free. I won’t repeat the official Getting started docs, except to point out that AdGuard Home is available in FreeBSD ports, so go ahead and install it with pkg install adguardhome.
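
    If you want AdGuard Home to start at boot, the usual FreeBSD dance applies. A minimal sketch, assuming the port installs an rc script named adguardhome (double-check /usr/local/etc/rc.d/ for the actual name):

    # assumes the rc script installed by the port is called "adguardhome"
    pkg install adguardhome
    sysrc adguardhome_enable=YES
    service adguardhome start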

    There are some configuration changes we’re going to make that cannot be done in the web UI and have to be made directly in the AdGuardHome.yaml config file. I won’t cover everything in the file, just the interesting bits.

    First, we’re going to be specific about which IPs to bind to, so we don’t accidentally create a public resolver, and also because there are LAN IPs on the router we don’t want to bind to (more on this in just a moment).

    http:
      pprof:
        port: 6060
        enabled: false
      address: 172.16.2.1:3000
      session_ttl: 720h
    ...
    dns:
      bind_hosts:
        - 127.0.0.1
        - ::1
        - 172.16.2.1
        - 2001:db8:ffff:aa02::1

    Your choice of upstream resolver is of course personal preference, but I wanted a non-filtering upstream since I want the control and visibility into why requests are passing or failing. I’m also Canadian, so I prefer (but don’t require) that my queries stay domestic. I’m also sending requests for my LAN domains to the authoritative DNS server, which you can see is configured on the localhost IP 127.0.0.53 and on the similarly numbered alias IPs on my LAN interface (which is why I had to be specific about which IPs I wanted AdGuard to bind to).

      upstream_dns:
        - '# public resolvers'
        - https://private.canadianshield.cira.ca/dns-query
        - https://unfiltered.adguard-dns.com/dns-query
        - '# local network'
        - '[/lan.example.com/]127.0.0.53 172.16.2.53 2001:db8:ffff:aa02::53'
    ...
      trusted_proxies:
        - 127.0.0.0/8
        - ::1/128
        - 172.16.2.1/32
    ...
      local_ptr_upstreams:
        - 172.16.2.53
        - 2001:db8:ffff:aa02::53

    Lastly, we’re going to configure the webserver for HTTPS and DNS-over-HTTPS (DoH). I use dehydrated to manage my Let’s Encrypt certs, but any tool will do (and is outside the scope of this doc). The important thing to note is that the web UI will now run on port 8453, and will answer DoH queries.

    tls:
      enabled: true
      server_name: router.lan.example.com
      force_https: true
      port_https: 8453
      port_dns_over_tls: 853
      port_dns_over_quic: 853
      port_dnscrypt: 0
      dnscrypt_config_file: ""
      allow_unencrypted_doh: false
      certificate_chain: ""
      private_key: ""
      certificate_path: /usr/local/etc/dehydrated/certs/router.lan.example.com/fullchain.pem
      private_key_path: /usr/local/etc/dehydrated/certs/router.lan.example.com/privkey.pem
      strict_sni_check: false
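
    Once the cert is in place, you can sanity-check the DoH endpoint by using it as curl’s resolver (curl 7.62 or newer supports --doh-url; the page fetched here is just an arbitrary test target):

    # resolve through AdGuard Home's DoH endpoint and print the HTTP status of a test fetch
    curl --doh-url https://router.lan.example.com:8453/dns-query \
      -o /dev/null -w '%{http_code}\n' https://www.freebsd.org/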

    The rest of the configuration should be done to taste in the web UI. Personally, I find this set of filter lists effective while still having a very low false positive rate:

    • AdGuard DNS filter
    • AdAway Default Blocklist
    • AdGuard DNS popup Hosts filter
    • HaGeZi’s Threat Intelligence Feeds
    • HaGeZi’s Pro++ Blocklist
    • OISD Blocklist Big

    More than that and it just becomes unwieldy.

    BIND

    Good old BIND. It’ll outlive us all. This part is basically unchanged since I first described it in 2011, except that I’m going to have BIND listen on 127.0.0.53 and on alias IPs I created on my LAN networks (also using the .53 address) by setting this in my /etc/rc.conf:

    ifconfig_igb1="inet 172.16.2.1 netmask 255.255.255.0"
    ifconfig_igb1_ipv6="inet6 2001:db8:ffff:aa02::1 prefixlen 64"
    ifconfig_igb1_aliases="\
      inet 172.16.2.53 netmask 255.255.255.0 \
      inet6 2001:db8:ffff:aa02::53 prefixlen 64"
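
    If you’d rather not bounce the interface, the ifconfig equivalents of those alias lines can be added by hand (these mirror the rc.conf entries above):

    # add the .53 alias addresses to igb1 without restarting networking
    ifconfig igb1 inet 172.16.2.53 netmask 255.255.255.0 alias
    ifconfig igb1 inet6 2001:db8:ffff:aa02::53 prefixlen 64 alias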

    Next, create an rndc key with rndc-confgen -a -c /usr/local/etc/namedb/rndc.example.com and configure BIND with the following in /usr/local/etc/namedb/named.conf (don’t remove the logging or zones at the bottom of the default named.conf).

    "acl_self" {
      127.0.0.1;
      127.0.0.53;
      172.16.2.1;
      172.16.2.53;
      ::1;
      2001:db8:ffff:aa02::1;
      2001:db8:ffff:aa02::53;
    };
    
    acl "acl_lan" {
      10.42.0.0/16;
      10.43.0.0/16;
      172.16.2.0/24;
      2001:db8:ffff:aa02::/64;
      fe80::/10;
    };
    
    options {
      directory             "/usr/local/etc/namedb/working";
      pid-file              "/var/run/named/pid";
      dump-file             "/var/dump/named_dump.db";
      statistics-file       "/var/stats/named.stats";
      allow-transfer        { acl_lan; };
      allow-notify          { "none"; };
      allow-recursion       { "none"; };
      dnssec-validation     auto;
      auth-nxdomain         no;
      recursion             no;
      listen-on             { 127.0.0.53; 172.16.2.53; };
      listen-on-v6          { 2001:db8:ffff:aa02::53; };
      disable-empty-zone "255.255.255.255.IN-ADDR.ARPA";
      disable-empty-zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
      disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
      version "BIND";
    };
    
    include "/usr/local/etc/namedb/rndc.example.com";
    
    controls {
      inet 127.0.0.53 allow { "acl_self"; "acl_lan"; } keys { "rndc.example.com";};
      inet 172.16.2.53 allow { "acl_self"; "acl_lan"; } keys { "rndc.example.com";};
      inet 2001:db8:ffff:aa02::53 allow { "acl_self"; "acl_lan"; } keys { "rndc.example.com";};
    };
    
    include "/usr/local/etc/namedb/named.zones.local";

    The local zones are configured in /usr/local/etc/namedb/named.zones.local:

    zone "lan.example.com" {
      type master;
      file "../dynamic/lan.example.com";
      update-policy { grant rndc.example.com zonesub ANY; };
    };
    
    zone "2.16.172.in-addr.arpa" {
      type master;
      file "../dynamic/2.16.172.in-addr.arpa";
      update-policy { grant rndc.example.com zonesub ANY; };
    };
    
    zone "2.0.a.a.f.f.f.f.8.B.D.0.1.0.0.2.ip6.arpa" {
      type master;
      file "../dynamic/2001.0db8.ffff.aa02.ip6.arpa";
      update-policy { grant rndc.example.com zonesub ANY; };
    };

    Here’s a starter zone for lan.example.com:

    $ORIGIN .
    $TTL 1200       ; 20 minutes
    lan.example.com      IN SOA  ns0.lan.example.com. admin.example.com. (
                                    2020138511 ; serial
                                    1200       ; refresh (20 minutes)
                                    1200       ; retry (20 minutes)
                                    2419200    ; expire (4 weeks)
                                    3600       ; minimum (1 hour)
                                    )
                            NS      ns0.lan.example.com.
                            A       172.16.2.53
                            AAAA    2001:db8:ffff:aa02::53
    $ORIGIN lan.example.com.
    router                  A       172.16.2.1
                            AAAA    2001:db8:ffff:aa02::1

    An IPv4 reverse zone:

    $ORIGIN .
    $TTL 1200       ; 20 minutes
    2.16.172.in-addr.arpa IN SOA ns0.lan.example.com. admin.example.com. (
                                    2020051192 ; serial
                                    1200       ; refresh (20 minutes)
                                    1200       ; retry (20 minutes)
                                    2419200    ; expire (4 weeks)
                                    3600       ; minimum (1 hour)
                                    )
                            NS      ns0.lan.example.com.
    $ORIGIN 2.16.172.in-addr.arpa.
    1                       PTR     router.lan.example.com.

    And an IPv6 reverse zone:

    $ORIGIN .
    $TTL 1200       ; 20 minutes
    2.0.a.a.f.f.f.f.8.B.D.0.1.0.0.2.ip6.arpa IN SOA ns0.lan.example.com. admin.example.com. (
                                    2020049273 ; serial
                                    1200       ; refresh (20 minutes)
                                    1200       ; retry (20 minutes)
                                    2419200    ; expire (4 weeks)
                                    3600       ; minimum (1 hour)
                                    )
                            NS      ns0.lan.example.com.
    $ORIGIN 0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.a.a.f.f.f.f.8.B.D.0.1.0.0.2.ip6.arpa.
    1.0                     PTR     router.lan.example.com.
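
    Before moving on, it’s worth confirming that the config and zones parse cleanly and that named answers on the .53 addresses. A quick check, assuming the paths above (drill ships in the FreeBSD base system):

    # validate the configuration and the starter zone
    named-checkconf /usr/local/etc/namedb/named.conf
    named-checkzone lan.example.com /usr/local/etc/namedb/dynamic/lan.example.com
    # enable and start BIND, then query it directly
    sysrc named_enable=YES
    service named start
    drill @172.16.2.53 router.lan.example.com AAAA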

    Kea DHCP Server

    The final piece of this is the Kea DHCP server. It’s still from ISC, but it’s a from-scratch implementation of DHCP and DHCPv6 that uses modern designs and tools. We won’t be using many of the new bells and whistles, but there are a couple of things we can do now that we couldn’t with ISC DHCP.

    The first thing you’ll notice is that the Kea config files are JSON, and there are four of them. First up is kea-dhcp4.conf, where we configure our IPv4 DHCP options and pool, and also the options necessary to enable dynamic updating of our LAN domain via RFC2136 DDNS updates. Note that because I had an existing zone that had been updated by ISC DHCP and other stuff, I set "ddns-conflict-resolution-mode": "no-check-with-dhcid". You can find more info here.

    {
      "Dhcp4": {
        "ddns-send-updates": true,
        "ddns-conflict-resolution-mode": "no-check-with-dhcid",
        "hostname-char-set": "[^A-Za-z0-9.-]",
        "hostname-char-replacement": "x",
        "interfaces-config": {
          "interfaces": [
            "igb1/172.16.2.1"
          ]
        },
        "dhcp-ddns": {
          "enable-updates": true
        },
        "subnet4": [
          {
            "id": 1,
            "subnet": "172.16.2.0/24",
            "authoritative": true,
            "interface": "igb1",
            "ddns-qualifying-suffix": "lan.example.com",
            "pools": [
              {
                "pool": "172.16.2.129 - 172.16.2.254"
              }
            ],
            "option-data": [
              {
                "name": "routers",
                "data": "172.16.2.1"
              },
              {
                "name": "domain-name-servers",
                "data": "172.16.2.1"
              },
              {
                "name": "domain-name",
                "data": "lan.example.com"
              },
              {
                "name": "ntp-servers",
                "data": "172.16.2.1"
              }
            ],
            "reservations": [
              {
                "hw-address": "aa:bb:cc:dd:ee:ff",
                "ip-address": "172.16.2.2",
                "hostname": "foobar"
              }
            ]
          }
        ],
        "loggers": [
          {
            "name": "kea-dhcp4",
            "output-options": [
              {
                "output": "syslog"
              }
            ],
            "severity": "INFO",
            "debuglevel": 0
          }
        ]
      }
    }

    The kea-dhcp6.conf file is basically identical, except IPv6 flavoured. One nice thing about Kea is you can set a DHCPv6 reservation by MAC address, which is something you could not do with ISC DHCPv6 Server.

    {
      "Dhcp6": {
        "ddns-send-updates": true,
        "ddns-conflict-resolution-mode": "no-check-with-dhcid",
        "hostname-char-set": "[^A-Za-z0-9.-]",
        "hostname-char-replacement": "x",
        "dhcp-ddns": {
          "enable-updates": true
        },
        "interfaces-config": {
          "interfaces": [
            "igb1"
          ]
        },
        "subnet6": [
          {
            "id": 1,
            "subnet": "2001:db8:ffff:aa02::/64",
            "interface": "igb1",
            "rapid-commit": true,
            "ddns-qualifying-suffix": "lan.example.com",
            "pools": [
              {
                "pool": "2001:db8:ffff:aa02:ffff::/80"
              }
            ],
            "option-data": [
              {
                "name": "dns-servers",
                "data": "2001:db8:ffff:aa02::1"
              }
            ],
            "reservations": [
              {
                "hw-address": "aa:bb:cc:dd:ee:ff",
                "ip-addresses": [
                  "2001:db8:ffff:aa02::2"
                ],
                "hostname": "foobar"
              }
            ]
          }
        ],
        "loggers": [
          {
            "name": "kea-dhcp6",
            "output-options": [
              {
                "output": "syslog"
              }
            ],
            "severity": "INFO",
            "debuglevel": 0
          }
        ]
      }
    }

    Lastly, we have kea-dhcp-ddns.conf, which configures how the zones will actually be updated. Note that I’m connecting to BIND on 127.0.0.53.

    {
      "DhcpDdns": {
        "tsig-keys": [
          {
            "name": "rndc.example.com",
            "algorithm": "hmac-sha256",
            "secret": "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz"
          }
        ],
        "forward-ddns": {
          "ddns-domains": [
            {
              "name": "lan.example.com.",
              "key-name": "rndc.example.com",
              "dns-servers": [
                {
                  "ip-address": "127.0.0.53",
                  "port": 53
                }
              ]
            }
          ]
        },
        "reverse-ddns": {
          "ddns-domains": [
            {
              "name": "2.16.172.in-addr.arpa.",
              "key-name": "rndc.example.com",
              "dns-servers": [
                {
                  "ip-address": "127.0.0.53",
                  "port": 53
                }
              ]
            },
            {
              "name": "2.0.a.a.f.f.f.f.8.B.D.0.1.0.0.2.ip6.arpa.",
              "key-name": "rndc.example.com",
              "dns-servers": [
                {
                  "ip-address": "127.0.0.53",
                  "port": 53
                }
              ]
            }
          ]
        },
        "loggers": [
          {
            "name": "kea-dhcp-ddns",
            "output-options": [
              {
                "output": "syslog"
              }
            ],
            "severity": "INFO",
            "debuglevel": 0
          }
        ]
      }
    }
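
    Each Kea daemon can syntax-check its own config file before you try to start it, which is worth doing since a stray comma in the JSON will keep the daemon from coming up. The paths and rc script name below are assumptions about the FreeBSD package layout (the port may manage the daemons through keactrl), so check what was actually installed:

    # syntax-check each config file (paths assume the FreeBSD kea package layout)
    kea-dhcp4 -t /usr/local/etc/kea/kea-dhcp4.conf
    kea-dhcp6 -t /usr/local/etc/kea/kea-dhcp6.conf
    kea-dhcp-ddns -t /usr/local/etc/kea/kea-dhcp-ddns.conf
    # enable and start the daemons; the rc script name is a guess, check /usr/local/etc/rc.d/
    sysrc kea_enable=YES
    service kea start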

    Extra Credit: Mobile DNS over HTTPS (DoH)

    I mentioned earlier that I pay for AdGuard Pro on my phone. Part of why I do that is that it uses the MDM API in iOS to let you force your DNS to a DoH provider, including a custom one. Perhaps one you’re hosting yourself.

    I’m already running an nginx reverse proxy on my router, so let’s get mobile DoH set up. This is a simplified configuration and you’ll need to ensure you’ve got HTTPS properly configured, which is (again) outside the scope of this post.

    Note that I proxy the request to router.lan.example.com, which will resolve to the LAN IP 172.16.2.1 rather than localhost, because we configured AdGuard Home to run its HTTP server on 172.16.2.1.

        server {
            listen 443 ssl;
            server_name  dns.example.com;
            location / {
                return 418;
            }
            location /dns-query {
                proxy_pass https://router.lan.example.com:8453/dns-query;
                proxy_set_header X-Real-IP  $remote_addr;
                proxy_set_header X-Forwarded-For $remote_addr;
            }
        }

    Conclusion

    That should do it. You’ve now got filtered DNS resolution for the LAN. You’ve got an authoritative LAN domain. You’ve got a modern DHCP service. And you’ve even got filtered DNS resolution when you’re out of the house.

  • Zero Trust K3s Network With Cilium

    Zero Trust K3s Network With Cilium

    I wanted to implement full zero-trust networking within my k3s cluster, which uses the Cilium CNI. Cilium provides custom CiliumClusterwideNetworkPolicy and CiliumNetworkPolicy resources, which extend what is possible with standard Kubernetes NetworkPolicy resources.

    Cilium defaults to allowing traffic, but once a policy is applied to an endpoint, it switches to denying any connection not explicitly allowed. Note that this is direction dependent, so ingress and egress are treated separately.

    Zero trust policies require you to control traffic in both directions. Not only does your database need to accept traffic from your app, but your app has to allow the connection to the database.

    This is tedious, and if you don’t get it right it will break your cluster and your ability to tell what you’re missing. So I figured I’d document the policies required to keep your cluster functional.

    Note that my k3s cluster has been deployed with --disable-network-policy, --disable-kube-proxy, --disable-servicelb, and --disable-traefik, because these services are provided by Cilium (or ingress-nginx, in the case of traefik).

    Lastly, while the policies below apply to k3s, they’re probably a good starting point for other clusters – the specifics will be different, but you’re always going to want to allow traffic to your DNS service, etc.

    Hubble UI

    Before attempting any network policies, ensure you’ve got hubble ui and hubble observe working. You should verify that the endpoints and ports used in the policies below match your cluster.
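
    For reference, this is roughly what I mean, assuming the cilium and hubble CLIs are installed and Hubble was enabled in your Cilium deployment. Watching for dropped flows is the fastest way to see what a new policy just broke:

    # port-forward the Hubble UI to localhost
    cilium hubble ui
    # or watch dropped flows from the CLI while you iterate on policies
    hubble observe --verdict DROPPED --follow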

    Cluster Wide Policies

    These policies are applied cluster wide, without regard for namespace boundaries.

    Default Deny

    Does what it says on the tin.

    apiVersion: "cilium.io/v2"
    kind: CiliumClusterwideNetworkPolicy
    metadata:
      name: "default-deny"
    spec:
      description: "Empty ingress and egress policy to enforce default-deny on all endpoints"
      endpointSelector:
        {}
      ingress:
      - {}
      egress:
      - {}
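
    These get applied with kubectl like any other resource; the ccnp and cnp short names should work for listing them (the file name here is just an example):

    # apply the cluster-wide default deny and confirm it is in place
    kubectl apply -f default-deny.yaml
    kubectl get ccnp
    # the namespaced policies later in this post show up under cnp
    kubectl get cnp -A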

    Allow Health Checks

    Required to allow cluster health checks to pass.

    apiVersion: "cilium.io/v2"
    kind: CiliumClusterwideNetworkPolicy
    metadata:
      name: "health-checks"
    spec:
      endpointSelector:
        matchLabels:
          'reserved:health': ''
      ingress:
        - fromEntities:
          - remote-node
      egress:
        - toEntities:
          - remote-node

    Allow ICMP

    ICMP is useful with IPv4, and absolutely necessary for IPv6. This policy allows select ICMP and ICMPv6 message types globally, both within and outside the cluster.

    apiVersion: "cilium.io/v2"
    kind: CiliumClusterwideNetworkPolicy
    metadata:
      name: "allow-icmp"
    spec:
      description: "Policy to allow select ICMP traffic globally"
      endpointSelector:
        {}
      ingress:
      - fromEntities:
        - all
        icmps:
        - fields:
          - type: EchoRequest
            family: IPv4
          - type: EchoReply
            family: IPv4
          - type: DestinationUnreachable
            family: IPv4
          - type: TimeExceeded
            family: IPv4
          - type: ParameterProblem
            family: IPv4
          - type: Redirect 
            family: IPv4
          - type: EchoRequest
            family: IPv6
          - type: DestinationUnreachable
            family: IPv6
          - type: TimeExceeded
            family: IPv6
          - type: ParameterProblem
            family: IPv6
          - type: RedirectMessage
            family: IPv6
          - type: PacketTooBig
            family: IPv6
          - type: MulticastListenerQuery
            family: IPv6
          - type: MulticastListenerReport
            family: IPv6
      egress:
      - toEntities:
        - all
        icmps:
        - fields:
          - type: EchoRequest
            family: IPv4
          - type: EchoReply
            family: IPv4
          - type: DestinationUnreachable
            family: IPv4
          - type: TimeExceeded
            family: IPv4
          - type: ParameterProblem
            family: IPv4
          - type: Redirect 
            family: IPv4
          - type: EchoRequest
            family: IPv6
          - type: EchoReply
            family: IPv6
          - type: DestinationUnreachable
            family: IPv6
          - type: TimeExceeded
            family: IPv6
          - type: ParameterProblem
            family: IPv6
          - type: RedirectMessage
            family: IPv6
          - type: PacketTooBig
            family: IPv6
          - type: MulticastListenerQuery
            family: IPv6
          - type: MulticastListenerReport
            family: IPv6

    Allow Kube DNS

    This pair of policies allows the cluster to query DNS.

    apiVersion: "cilium.io/v2"
    kind: CiliumClusterwideNetworkPolicy
    metadata:
      name: "allow-to-kubedns-ingress"
    spec:
      description: "Policy for ingress allow to kube-dns from all Cilium managed endpoints in the cluster"
      endpointSelector:
        matchLabels:
          k8s:io.kubernetes.pod.namespace: kube-system
          k8s-app: kube-dns
      ingress:
      - fromEndpoints:
        - {}
        toPorts:
        - ports:
          - port: "53"
            protocol: UDP
    ---
    apiVersion: "cilium.io/v2"
    kind: CiliumClusterwideNetworkPolicy
    metadata:
      name: "allow-to-kubedns-egress"
    spec:
      description: "Policy for egress allow to kube-dns from all Cilium managed endpoints in the cluster"
      endpointSelector:
        {}
      egress:
      - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
        toPorts:
        - ports:
          - port: "53"
            protocol: UDP

    Kubernetes Services

    These policies are applied to the standard kubernetes services running in the kube-system namespace.

    Kube DNS

    Kube DNS (or Core DNS in some k8s distros) needs to talk to the k8s API server and also to DNS resolvers outside the cluster.

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: kube-dns
      namespace: kube-system
    spec:
      endpointSelector:
        matchLabels:
          k8s:io.kubernetes.pod.namespace: kube-system
          k8s-app: kube-dns
      ingress:
      - fromEntities:
        - host
        toPorts:
        - ports:
          - port: "8080"
            protocol: TCP
          - port: "8181"
            protocol: TCP
      egress:
      - toEntities:
        - world
        toPorts:
        - ports:
          - port: "53"
            protocol: UDP
      - toEntities:
        - host
        toPorts:
        - ports:
          - port: "6443"
            protocol: TCP

    Metrics Server

    The metrics service needs to talk to most of the k8s services.

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: metrics-server
      namespace: kube-system
    spec:
      endpointSelector:
        matchLabels:
          k8s:io.kubernetes.pod.namespace: kube-system
          k8s-app: metrics-server
      ingress:
      - fromEntities:
        - host
        - remote-node
        - kube-apiserver
        toPorts:
        - ports:
          - port: "10250"
            protocol: TCP
      egress:
      - toEntities:
        - host
        - kube-apiserver
        - remote-node
        toPorts:
        - ports:
          - port: "10250"
            protocol: TCP
      - toEntities:
        - kube-apiserver
        toPorts:
        - ports:
          - port: "6443"
            protocol: TCP

    Local Path Provisioner

    The local path provisioner only seems to talk to the k8s API server.

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: local-path-provisioner
      namespace: kube-system
    spec:
      endpointSelector:
        matchLabels:
          k8s:io.kubernetes.pod.namespace: kube-system
          app: local-path-provisioner
      egress:
      - toEntities:
        - host
        - kube-apiserver
        toPorts:
        - ports:
          - port: "6443"
            protocol: TCP

    Cilium Services

    These policies apply to the Cilium services themselves. I deployed mine to the cilium namespace, so adjust as necessary if you deployed Cilium to the kube-system namespace.

    Hubble Relay

    The hubble-relay service needs to talk to all cilium and hubble components in order to consolidate a cluster-wide view.

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      namespace: cilium
      name: hubble-relay
    spec:
      endpointSelector:
        matchLabels:
          app.kubernetes.io/name: hubble-relay
      ingress:
      - fromEntities:
          - host
        toPorts:
        - ports:
          - port: "4222"
            protocol: TCP
          - port: "4245"
            protocol: TCP
      - fromEndpoints:
        - matchLabels:
            app.kubernetes.io/name: hubble-ui
        toPorts:
        - ports:
          - port: "4245"
            protocol: TCP
      egress:
      - toEntities:
        - host
        - remote-node
        - kube-apiserver
        toPorts:
          - ports:
            - port: "4244"
              protocol: TCP

    Hubble UI

    The hubble-ui provides the tools necessary to actually observe traffic in the cluster.

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      namespace: cilium
      name: hubble-ui
    spec:
      endpointSelector:
        matchLabels:
          app.kubernetes.io/name: hubble-ui
      ingress:
      - fromEntities:
          - host
        toPorts:
        - ports:
          - port: "8081"
            protocol: TCP
      egress:
      - toEndpoints:
        - matchLabels:
            app.kubernetes.io/name: hubble-relay
        toPorts:
          - ports:
            - port: "4245"
              protocol: TCP
      - toEntities:
        - kube-apiserver
        toPorts:
          - ports:
            - port: "6443"
              protocol: TCP

    Cert Manager

    These policies will help if you’re using cert-manager.

    Cert Manager

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      namespace: cert-manager
      name: cert-manager
    spec:
      endpointSelector:
        matchLabels:
          app.kubernetes.io/name: cert-manager
      ingress:
      - fromEntities:
          - host
        toPorts:
        - ports:
          - port: "9403"
            protocol: TCP
      egress:
      - toEntities:
        - kube-apiserver
        toPorts:
          - ports:
            - port: "6443"
              protocol: TCP
      - toEntities:
        - world
        toPorts:
          - ports:
            - port: "443"
              protocol: TCP
            - port: "53"
              protocol: UDP

    Webhook

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      namespace: cert-manager
      name: webhook
    spec:
      endpointSelector:
        matchLabels:
          app.kubernetes.io/name: webhook
      ingress:
      - fromEntities:
          - host
        toPorts:
        - ports:
          - port: "6080"
            protocol: TCP
      - fromEntities:
          - kube-apiserver
        toPorts:
        - ports:
          - port: "10250"
            protocol: TCP
      egress:
      - toEntities:
        - kube-apiserver
        toPorts:
          - ports:
            - port: "6443"
              protocol: TCP

    CA Injector

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      namespace: cert-manager
      name: cainjector
    spec:
      endpointSelector:
        matchLabels:
          app.kubernetes.io/name: cainjector
      egress:
      - toEntities:
        - kube-apiserver
        toPorts:
          - ports:
            - port: "6443"
              protocol: TCP

    External DNS

    This policy will allow external-dns to communicate with API driven DNS services. To update local DNS services via RFC2136 updates, change the world egress port from 443 TCP to 53 UDP.

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      namespace: external-dns
      name: external-dns
    spec:
      endpointSelector:
        matchLabels:
          app.kubernetes.io/name: external-dns
      ingress:
      - fromEntities:
          - host
        toPorts:
        - ports:
          - port: "7979"
            protocol: TCP
      egress:
      - toEntities:
        - kube-apiserver
        toPorts:
          - ports:
            - port: "6443"
              protocol: TCP
      - toEntities:
        - world
        toPorts:
          - ports:
            - port: "443"
              protocol: TCP

    Ingress-Nginx & OAuth2 Proxy

    These policies will be helpful if you use ingress-nginx and oauth2-proxy. Note that I deployed them to their own namespaces, so you may need to adjust if you deployed them to the same namespace.

    Ingress-Nginx

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      namespace: ingress-nginx
      name: ingress-nginx
    spec:
      endpointSelector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
      ingress:
      - fromEntities:
          - kube-apiserver
        toPorts:
        - ports:
          - port: "8443"
            protocol: TCP
      - fromEntities:
          - host
        toPorts:
        - ports:
          - port: "10254"
            protocol: TCP
      - fromEntities:
          - world
        toPorts:
        - ports:
          - port: "80"
            protocol: TCP
          - port: "443"
            protocol: TCP
      egress:
      - toEntities:
        - kube-apiserver
        toPorts:
          - ports:
            - port: "6443"
              protocol: TCP
      - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: oauth2-proxy
            app.kubernetes.io/name: oauth2-proxy
        toPorts:
        - ports:
          - port: "4180"
            protocol: TCP

    OAuth2 Proxy

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      namespace: oauth2-proxy
      name: oauth2-proxy
    spec:
      endpointSelector:
        matchLabels:
          app.kubernetes.io/name: oauth2-proxy
      ingress:
      - fromEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: ingress-nginx
            app.kubernetes.io/name: ingress-nginx
        toPorts:
        - ports:
          - port: "4180"
            protocol: TCP
      - fromEntities:
        - host
        toPorts:
        - ports:
          - port: "4180"
            protocol: TCP
      egress:
      - toEntities:
        - world
        toPorts:
          - ports:
            - port: "443"
              protocol: TCP

    Conclusion

    These policies should get your cluster off the ground (or close to it). You’ll still need to add additional policies for your actual workloads (and probably extend the ingress-nginx one).

  • VDSL + MLPPP + FreeBSD + Xen = Awesome

    I signed up for the new 25Mbps VDSL services that are becoming available through TekSavvy, now that Bell has to provide speed matching profiles to other providers instead of just the staid old 5Mbps profiles they used to offer.

    The techs were done by the time I got home, but one of them was nice enough to install a proper POTS splitter for me. According to the person who was home at the time, he said something to the effect of “I don’t know if I’m supposed to do this, but I think Mike will appreciate it”.

    My router is a Xen virtual machine, running a hardware virtualized FreeBSD instance, with three NICs passed to it using PCI passthrough. I use packet filter for the firewall and traffic shaping, and MPD5 to handle the actual MLPPP tunnel.

    Once I got home I connected the router to the cellpipe modem, and right away the PPPoE came up. Subsequent testing showed that I actually got slightly better performance by configuring mpd to bring up two full tunnels on the single line and then bond them together than I did by having mpd bring up just a single MLPPP enabled tunnel.

    Speedtest Result

    I had been concerned that the virtual machine wouldn’t be up to the task, but it appears that isn’t much of a concern. I haven’t done any testing with new MTU/MRU values yet, so there’s still a possibility of improving performance slightly from here, but I’m already getting pretty much what was promised, so I don’t know how much further it could go.

  • IPv6 Part 9: Configuring A Domain For IPv6 With BIND

    Welcome to part nine of my multipart series on IPv6. In this post I’ll cover how to configure the ISC BIND daemon to serve an authoritative DNS domain over IPv6. The host is running FreeBSD 8.2, but this should apply equally well to any system running the ISC named daemon.

    Just having connectivity over IPv6 isn’t enough; you also have to tell the rest of the world that it can reach you over IPv6. In this post I’ll cover the basics of configuring your domain for IPv6, on the assumption that your name servers and your web servers have IPv6 connectivity. If your name servers do not, Hurricane Electric will host your DNS on IPv6 enabled systems for free.

    The first step is actually administrative in nature. You need to figure out if you can get your IPv6 addresses into the whois record for your domain at your registrar. In my case my registrar did not have support for adding IPv6 glue records in their admin interface, but they were happy to do it manually through a support ticket. They also said they were in the process of adding IPv6 support to their admin interface, so hopefully I won’t have to bother the support department next time I need to update a record with them. Here’s an example of my whois record:

    $ whois mmacleod.ca
    Domain name: mmacleod.ca
    Domain status: registered
    Creation date: 2009/04/06
    Expiry date: 2012/04/06
    Updated date: 2011/06/17
    
    Registrar:
    Name: DomainsAtCost Corp.
    Number: 45
    
    Name servers:
    ns1.nullpointer.ca 199.48.133.238 2607:fc50:1000:8b00::2
    ns2.nullpointer.ca 208.86.255.157 2001:0470:001d:0619::2

    As you can see, I have glue records for both IPv4 and IPv6 for my name servers. With that out of the way, it’s time to make sure that those nameservers actually serve the zones properly.
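
    A quick sanity check is to confirm that the name servers themselves resolve over IPv6 and answer for the domain (dig is shown here; drill in the FreeBSD base system works just as well):

    $ dig +short AAAA ns1.nullpointer.ca
    $ dig +short AAAA ns2.nullpointer.ca
    $ dig @ns1.nullpointer.ca mmacleod.ca NS +short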

    I’m using FreeBSD, but this should apply equally well to any system running BIND, just change the paths to the various configuration files to suit your environment. First, we need to edit /etc/namedb/named.conf:

    $ cat named.conf
    options {
    directory "/etc/namedb/working";
    pid-file "/var/run/named/pid";
    dump-file "/var/dump/named_dump.db";
    statistics-file "/var/stats/named.stats";
    
    recursion no;
    allow-query { any; };
    version "0";
    
    listen-on { 203.0.113.238; };
    listen-on-v6 { 2001:0DB8:1000:8b00::2; };
    };
    
    include "/etc/namedb/dnsadmin.key";
    
    controls {
    inet 127.0.0.1 allow { 127.0.0.1; } keys { "dnsadmin";};
    inet ::1 allow { ::1; } keys { "dnsadmin"; };
    
    };
    
    zone "example.com" {
    type master;
    file "../master/example.com";
    };

    This is a very basic named.conf to highlight the few options necessary to get BIND to listen to requests over IPv6 (which is really just the listen-on-v6 option). You are encouraged to read up on BIND administration, as BIND has been associated with a number of attacks over the years, and proper administration of it is very important.

    Next is the configuration of the zone itself. Our example domain will use Google for Domains for email, and host a few services. Open up /etc/namedb/master/example.com:

    $ cat example.com
    $TTL 1200
    example.com. IN SOA ns1.example.com. admin.example.com. (
    2011062702 ; Serial
    1200 ; Refresh
    1200 ; Retry
    2419200 ; Expire
    3600 ) ; Negative Cache TTL
    ;
    
    IN AAAA 2001:0DB8:1000:8b00:0000:0000:0000:0002
    IN A 203.0.113.238
    IN NS ns1.example.com.
    IN NS ns2.example.com.
    IN MX 10 ASPMX.L.GOOGLE.COM.
    IN MX 20 ALT1.ASPMX.L.GOOGLE.COM.
    IN MX 20 ALT2.ASPMX.L.GOOGLE.COM.
    IN MX 30 ASPMX2.GOOGLEMAIL.COM.
    IN MX 30 ASPMX3.GOOGLEMAIL.COM.
    IN MX 30 ASPMX4.GOOGLEMAIL.COM.
    IN MX 30 ASPMX5.GOOGLEMAIL.COM.
    
    $ORIGIN example.com.
    ; A Records
    ns1 IN A 203.0.113.238
    ns2 IN A 203.0.113.157
    www IN A 203.0.113.238
    appsrv-02 IN A 203.0.113.157
    appsrv-03 IN A 203.0.113.158
    
    ; AAAA Records
    ns1 IN AAAA 2001:0DB8:1000:8b00:0000:0000:0000:0002
    ns2 IN AAAA 2001:0DB8:001d:0619:0000:0000:0000:0002
    www IN AAAA 2001:0DB8:1000:8b00:0000:0000:0000:0002
    appsrv-02 IN AAAA 2001:0DB8:001d:0619:0000:0000:0000:0002
    appsrv-03 IN AAAA 2001:0DB8:001d:0619:0000:0000:0000:0003
    
    ; SRV Records
    _sip._tcp IN SRV 1 0 5060 appsrv-03
    _sip._udp IN SRV 1 0 5060 appsrv-03
    
    ; CNAME Records
    sip IN CNAME appsrv-03.example.com.
    mail IN CNAME ghs.google.com.
    calendar IN CNAME ghs.google.com.
    docs IN CNAME ghs.google.com.
    sites IN CNAME ghs.google.com.

    As you can see, the only real difference between this and an IPv4-only domain is the addition of some AAAA records. It’s worth noting that the SRV and CNAME records are IPv4/IPv6 agnostic, since they just point to another hostname. It’s then up to your operating system whether it wants to find an A or AAAA record for that hostname.
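
    To confirm the zone is actually being served over IPv6 and not just falling back to IPv4, you can force dig to use IPv6 transport when querying the name server (the names match the example zone above):

    $ dig -6 @ns1.example.com www.example.com AAAA +short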

    That’s all there really is to configuring an authoritative domain for IPv6.

  • IPv6 Part 8: Configuring DNS And DHCPv6 On An IPv6 Network

    Welcome to part eight of my multipart series on IPv6. In this post I’ll cover how to configure the ISC BIND and DHCP daemons to support dynamic DNS updates from DHCP in DNS on your LAN. The router is running FreeBSD 8.2, but this should apply equally well to any system running the ISC named and dhcpd daemons.

    For all the talk about DNS being the foundation of the Internet, most geeks and system administrators probably know the IP addresses of all the important hosts and services on their network. If you have set up your network for IPv6 and you don’t happen to be an android, then you’ve likely come to the conclusion that navigating to your hosts by IPv6 address is a sucker’s game. It’s time to look at configuring your DHCPv6 and DNS, and making sure they’re talking to each other. For this post I’m going to use the ISC BIND and DHCP daemons; I set them up on FreeBSD, but the choice of OS really shouldn’t matter.

    This post will assume we’re using a single domain for both our IPv4 and our IPv6 addresses. This can be a little tricky, as we will have two DHCP daemons trying to update the DNS configuration, and they don’t really like to do that by default. It’s also quite reasonable to set up your main domain as a regular domain and use two subdomains, one for IPv4 and one for IPv6. If you decide to go that route, you’ll need to configure your two subdomains in BIND, and configure the two instances of dhcpd to use the different domains. Everything else is largely the same though.

    First, configure the BIND daemon. On FreeBSD edit /etc/namedb/named.conf:

    $ cat /etc/namedb/named.conf
    options {
    directory "/etc/namedb/working";
    pid-file "/var/run/named/pid";
    dump-file "/var/dump/named_dump.db";
    statistics-file "/var/stats/named.stats";
    
    listen-on { 127.0.0.1; 192.168.1.1; };
    listen-on-v6 { ::1; 2001:0DB8:f00e:eb00::1; };
    
    forwarders {
    2001:470:20::2;
    };
    };
    
    include "/etc/namedb/rndc.key";
    
    controls {
    inet 127.0.0.1 allow { 127.0.0.1; } keys { "rndc-key";};
    inet 192.168.1.1 allow { 192.168.1.1; } keys { "rndc-key";};
    inet ::1 allow { ::1; } keys { "rndc-key"; };
    inet 2001:0DB8:f00e:eb00::1 allow { 2001:0DB8:f00e:eb00::1; } keys { "rndc-key"; };
    };
    
    zone "." { type hint; file "/etc/namedb/named.root"; };
    
    zone "example.com" {
    type master;
    file "../dynamic/example.com";
    allow-update { key rndc-key; };
    };
    
    zone "1.168.192.in-addr.arpa" {
    type master;
    file "../dynamic/1.168.192.in-addr.arpa";
    allow-update { key rndc-key; };
    };
    
    zone "0.0.b.e.e.0.0.f.8.B.D.0.1.0.0.2.ip6.arpa" {
    type master;
    file "../dynamic/2001:0DB8:f00e:eb00::64.ip6.arpa";
    allow-update { key rndc-key; };
    };

    The above is an edited example. By default the FreeBSD named.conf file also includes a number of intentionally empty and zeroed out zones, such as for reserved address space, example.com, and so forth. I would recommend keeping those, and in general reading up on BIND configuration. BIND has been associated with a number of vulnerabilities and attacks in the past, and proper configuration of BIND can be important for mitigating these risks.

    The important entries in the above configuration include listen-on-v6, which defines which IPv6 addresses your BIND configuration should listen on. You may wish to include the link-local addresses associated with your LAN interface. In the controls section there are four entries, covering both the loopback and LAN interface addresses in both IPv4 and IPv6. It’s possible to use only the loopback interfaces, though this requires a change to your DHCP configuration.

    I use a forwarder in my configuration. Typically the use of forwarders should be avoided, but in this case the forwarder in question is the Hurricane Electric public DNS server. Hurricane Electric is on Google’s IPv6 whitelist, which means that all of their services will resolve to IPv6 addresses (AAAA records, specifically). Basically it just means you’ll be able to use Google and YouTube over IPv6.

    Also, there is the rndc key file, which is generated thusly:

    $ rndc-confgen -a -c /etc/namedb/rndc.key
    $ cat /etc/namedb/rndc.key
    key "rndc-key" {
    algorithm hmac-md5;
    secret "";
    };

    That will generate a basic key file, which is included in the named.conf as well as the rndc.conf files. We’ll also use a slightly edited version of it with our DHCP configuration (for some bizarre reason despite both coming from the ISC, the syntax for the two files is just slightly different for each daemon).

    Next up is the example.com forward zone configuration. Because it’s a dynamic zone BIND will handle updating the zone, but it’s up to you to supply the initial zone information. If you need to add entries manually, you can use the nsupdate command. Here’s an example:

    $ cat /etc/namedb/master/example.com
    $TTL 1200 ; 20 minutes
    example.com. IN SOA ns1.example.com. admin.example.com. (
    2011071301 ; serial
    1200 ; refresh (20 minutes)
    1200 ; retry (20 minutes)
    2419200 ; expire (4 weeks)
    3600 ; minimum (1 hour)
    )
    NS ns1.example.com.
    A 192.168.1.1
    AAAA 2001:0DB8:f00e:eb00::1
    
    $ORIGIN example.com.
    ;;;;;;;;;;;;;;;;;;
    ;; IPv4 Records ;;
    ;;;;;;;;;;;;;;;;;;
    ; A Records
    ns1 IN A 192.168.1.1
    gateway IN A 192.168.1.1
    
    ;;;;;;;;;;;;;;;;;;
    ;; IPv6 Records ;;
    ;;;;;;;;;;;;;;;;;;
    ; AAAA Records
    ns1 IN AAAA 2001:0DB8:f00e:eb00::1
    gateway IN AAAA 2001:0DB8:f00e:eb00::1

    This example should get you started. There are almost no hosts defined, because we want them all to be entered via updates from DHCP. Here are the reverse zones as well:

    $ cat /etc/namedb/dynamic/1.168.192.in-addr.arpa
    $ORIGIN .
    $TTL 1200 ; 20 minutes
    1.168.192.in-addr.arpa IN SOA ns1.example.com. admin.example.com. (
    2011062928 ; serial
    1200 ; refresh (20 minutes)
    1200 ; retry (20 minutes)
    2419200 ; expire (4 weeks)
    3600 ; minimum (1 hour)
    )
    NS ns1.example.com.
    1 PTR gateway.example.com.
    
    $ cat /etc/namedb/dynamic/2001\:0DB8\:f00e\:eb00\:\:64.ip6.arpa
    $ORIGIN .
    $TTL 1200 ; 20 minutes
    0.0.b.e.e.0.0.f.8.B.D.0.1.0.0.2.ip6.arpa IN SOA ns1.example.com. admin.example.com. (
    2011062902 ; serial
    1200 ; refresh (20 minutes)
    1200 ; retry (20 minutes)
    2419200 ; expire (4 weeks)
    3600 ; minimum (1 hour)
    )
    NS ns1.example.com.
    1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 PTR gateway.example.com.

    Just like with the forward zone, they’re pretty basic since we expect BIND and DHCP to handle adding all the hosts. That should cover everything necessary to get BIND off the ground. I haven’t covered configuring rndc, but that’s simple and left as an exercise for the reader.
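
    As mentioned above, the occasional manual record can be added to these dynamic zones with nsupdate instead of editing the files directly. A minimal session using the rndc key might look like this (the host and address are made up for illustration):

    $ nsupdate -k /etc/namedb/rndc.key
    > server 127.0.0.1
    > zone example.com.
    > update add host1.example.com. 1200 AAAA 2001:0DB8:f00e:eb00::50
    > send
    > quit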

    Next up is configuring the DHCP daemon. At present, the ISC DHCP daemon can’t run in both IPv4 and IPv6 modes simultaneously. On recent versions of FreeBSD, the isc-dhcpd-server package installs two rc scripts, which are used to run the two instances of the daemon we’ll have to run.

    First up, configure the IPv4 instance of the DHCP server. On FreeBSD, this means editing /usr/local/etc/dhcpd.conf:

    $ cat /usr/local/etc/dhcpd.conf
    #
    # dhcpd.conf
    #
    
    include "/usr/local/etc/dhcpd/common.conf";
    include "/usr/local/etc/dhcpd/secret.key";
    
    option domain-name-servers 192.168.1.1;
    option ntp-servers 192.168.1.10;
    
    zone example.com. {
    key rndc-key;
    }
    
    zone 1.168.192.in-addr.arpa. {
    key rndc-key;
    }
    
    subnet 192.168.1.0 netmask 255.255.255.0 {
    authoritative;
    range 192.168.1.100 192.168.1.200;
    option routers 192.168.1.1;
    }

    The IPv4 and IPv6 instances will share a number of configuration options, which are kept in the /usr/local/etc/dhcpd/common.conf file. This isn’t necessary; I just find it easier. Also, if you’ve elected to only allow control connections over the loopback interface, you’ll need to add the option primary localhost; to any zone entry. By default dhcpd uses the SOA record from the zone file, which is the LAN interface address in the above zone files.

    To configure the IPv6 instance, edit /usr/local/etc/dhcpd6.conf:

    $ cat /usr/local/etc/dhcpd6.conf
    #
    # dhcpd.conf
    #
    
    include "/usr/local/etc/dhcpd/common.conf";
    include "/usr/local/etc/dhcpd/secret.key";
    
    option dhcp6.name-servers 2001:0DB8:f00e:eb00::1;
    
    zone example.com. {
    key rndc-key;
    }
    
    zone 0.0.b.e.e.0.0.f.8.B.D.0.1.0.0.2.ip6.arpa. {
    key rndc-key;
    }
    
    subnet6 2001:0DB8:f00e:eb00::/64 {
    range6 2001:0DB8:f00e:eb00::1000 2001:0DB8:f00e:eb00::2000;
    authoritative;
    }

    This file also references both common.conf and secret.key. The secret.key file is almost identical to the rndc.key file we used for BIND, but with some slight syntactical differences. And by slight, I mean it doesn’t use quotes:

    $ cat /usr/local/etc/dhcpd/secret.key
    key rndc-key {
    algorithm hmac-md5;
    secret ;
    };
    
    $ cat /usr/local/etc/dhcpd/common.conf
    #
    # common.conf
    #
    
    default-lease-time 86400;
    max-lease-time 604800;
    ddns-update-style interim;
    ddns-ttl 1200;
    update-conflict-detection false;
    get-lease-hostnames true;
    use-host-decl-names on;
    option domain-name "example.com";
    option domain-search "example.com";
    update-static-leases on;

    The secret.key is used by dhcpd to authenticate with named to update the DNS zones as required. The common.conf contains the entire shared configuration between the IPv4 and IPv6 instances of the ISC DHCP daemon. Here’s a rundown on the two important entries:

    • ddns-update-style interim: This is what tells dhcpd to actually update the DNS.
    • update-conflict-detection false: This tells the two instances of dhcpd to be less strict about updating the zones, so that they don’t end up fighting with each other.

    The rest are all standard fare for a dhcpd configuration. That should be everything that is required to get your configuration off the ground. If you watch your syslog (wherever you have it sending dhcpd logging) you should see entries about adding forward and reverse mappings.
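
    To have both instances start at boot, enable their rc scripts in /etc/rc.conf. The variable names below are a guess based on what the port installs, so confirm them against the scripts in /usr/local/etc/rc.d/:

    # /etc/rc.conf -- variable names assumed, check the rc scripts from the port
    dhcpd_enable="YES"
    dhcpd6_enable="YES"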

    I would recommend testing with Windows Vista or Windows 7 workstations, as they honestly have the best IPv6 network stack out of the box at this time. Mac OS X Lion might have decent DHCPv6 support – I haven’t had a chance to test it yet – but anything earlier won’t really work. Linux is a bit of a mixed bag, depending on which distro and configuration you’re using. Hopefully this will be worked on in the near future, and we can come to expect all the main operating systems to be well behaved DHCPv6 clients.