Home

Hi! I'm an undergraduate computer engineering student at Georgia Tech!


Building a remote controlled LED strip using ESPHome and Home Assistant

April 21, 2021

RGB LED strips are extremely cool and I wanted one. However, I wanted to be able to control the color from my phone instead of needing a dedicated remote control. Since I was already familiar with the Home Assistant platform, I decided the best course of action would be to integrate my LED strip with it.

Running Home Assistant

I run a Kubernetes cluster at home, along with a reliable CI/CD system using Weave Flux. This allows me to define services as YAML files in a git repo and have everything automatically applied and synced. The basic deployment definition for Home Assistant is as follows. In addition, the Hass Configurator Docker project is a useful tool, as is the ESPHome Docker web UI.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: home-assistant
  namespace: home-assistant
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: home-assistant-home-assistant
      app.kubernetes.io/name: home-assistant
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: home-assistant-home-assistant
        app.kubernetes.io/name: home-assistant
    spec:
      hostNetwork: true
      containers:
        - image: homeassistant/home-assistant:latest
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /
              port: api
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          name: home-assistant
          ports:
            - containerPort: 8123
              name: api
              protocol: TCP
          readinessProbe:
            failureThreshold: 5
            httpGet:
              path: /
              port: api
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          resources: {}
          securityContext:
            runAsUser: 0
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /config
              name: config
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1111
      terminationGracePeriodSeconds: 30
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: home-assistant

Soldering the pieces

I am using the NodeMCU v3, along with a WS2812B LED strip that I had on hand. Also required is a high-amperage 5 V power supply: these LEDs run on 5 V and can draw a large amount of current when they are all at peak brightness, so it is not sufficient to power the strip from the USB port on the NodeMCU. Connect the positive terminal of the power supply to the positive pin of the LED strip and the VIN pin on the NodeMCU. Connect the negative terminal to the ground pin of the LED strip and any ground pin on the NodeMCU. Finally, connect the data pin of the LED strip to any valid GPIO pin on the NodeMCU. I chose to use pin D1 (GPIO5).
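
As a rough sizing example (assuming the commonly quoted figure of about 60 mA per WS2812B LED at full white), the 75-LED strip used later in this post could draw up to roughly 75 × 60 mA ≈ 4.5 A, which is why a dedicated high-amperage supply is needed.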

This should be all the hardware setup that is required.

Soldered NodeMCU and LED strip

Configuring the NodeMCU

The NodeMCU is just a standard ESP8266 microcontroller and is compatible with the Arduino platform. One possible approach is to code everything from scratch; however, I chose to leverage the ESPHome platform to make things much easier. This is the entire config.

esphome:
  name: ledstrip
  platform: ESP8266
  board: nodemcuv2

wifi:
  ssid: "ssid"
  password: "password"

  # Enable fallback hotspot (captive portal) in case wifi connection fails
  ap:
    ssid: "ESPHOME Fallback Hotspot"
    password: "fallback password"

captive_portal:

# Enable logging
logger:

# Enable ESPHome API
api:

ota:

light:
  - platform: fastled_clockless
    chipset: WS2812B
    pin: GPIO5
    num_leds: 75
    rgb_order: GRB
    name: "LED Strip"

There are a few major pieces.

  • esphome: This stanza provides the global config, including the name, platform, and specific board.
  • wifi: Here is the WiFi config. ESPHome can also present a fallback network in case it cannot connect to the defined network.
  • captive_portal: This enables the fallback portal.
  • logger: This enables the logger. It can be extremely helpful for debugging.
  • api: This enables the API and is later used by Home Assistant.
  • ota: This enables over-the-air updates. A password can be set here if desired to prevent others from changing the firmware.
  • light: This stanza configures the light and includes things that must be set based on your LED setup: the chipset, the connected pin, and the number of LEDs.

ESPHome also provides a number of extra configuration options and can do a lot more than just drive a single LED strip. Visit its website to learn more.
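
For example, effects can be added to the light so they show up as selectable options in Home Assistant. A sketch of what this might look like (untested, extending the same light stanza as above):

light:
  - platform: fastled_clockless
    chipset: WS2812B
    pin: GPIO5
    num_leds: 75
    rgb_order: GRB
    name: "LED Strip"
    effects:
      - random:
      - addressable_rainbow: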

Flashing the device is extremely easy too, requiring only three commands. If this is the first time flashing the specific device with ESPHome, connect it with USB. Otherwise, the device can be updated over the air without any cables at all!

pip3 install esphome
esphome config.yaml compile
esphome config.yaml upload

Connecting it to Home Assistant

ESPHome Integration

Home Assistant has a built-in ESPHome integration that can automatically discover ESPHome devices once they connect to the network. In the integrations pane, search for ESPHome and enter the IP address of your device. Home Assistant may also detect the device automatically and display it in the notifications pane.

Connecting to the device

While you're here, you can also add any other integrations you want. Since I will be using HomeKit, I went ahead and enabled the HomeKit integration with the default settings. It will show a notification with a QR code that you can scan using the Home app on an Apple device.

Final integrations in Home Assistant

Controlling with Home Assistant

Using the corresponding pane on the dashboard, all functions of the light strip can now be controlled, including brightness and color.

Home Assistant control interface
LED matching the selected color
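
Beyond the UI, the strip is a regular Home Assistant light entity, so it can also be driven from automations and scripts. A minimal sketch of an automation action, assuming the entity ended up named light.led_strip:

- service: light.turn_on
  data:
    entity_id: light.led_strip
    brightness_pct: 80
    rgb_color: [255, 0, 128]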

Controlling with HomeKit

If you set up the HomeKit bridge in the previous section, the LED strip will automatically show up as an RGB light bulb in the Home app. From there, you can control it, and it will update in real time.

Kubernetes on NixOS using k3s (Part 2)

June 5th, 2020

This is outdated! k3s is now packaged in nixpkgs!

In part 1, you should have gotten k3s installed on your NixOS system. This part talks about running it as a service.

First, remove your swap filesystem; Kubernetes is not intended to run with swap enabled.
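
On NixOS, a minimal sketch of this is clearing the swap configuration in configuration.nix (and deleting any existing swap partition):

  swapDevices = [ ];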

Next, disable the firewall. I had quite a bit of trouble running Kubernetes alongside the NixOS firewall, and it was easier for me to disable it completely since my VMs run on an internal network (it probably could work with a bit of fiddling, though). Disabling the NixOS firewall also removes iptables, which Kubernetes requires, so it has to be added back to the system packages.

  networking.firewall.enable = false;

  environment.systemPackages = with pkgs; [
     k3s iptables
  ];

Finally, the systemd service can be started. The following definition is for the server node. It is crucial that the ExecStartPre stanza loads all the listed kernel modules: kube-proxy depends on many of them and tries to load them itself, but since NixOS stores kernel modules in a nonstandard location, it fails to do so, which results in many hard-to-debug errors.

The k3s command can be edited to include the desired CLI flags. I disabled traefik and servicelb, as I prefer to use MetalLB and ingress-nginx.

  systemd.services.k3s = {
     # Unit
     description = "Lightweight Kubernetes";
     documentation = [ "https://k3s.io" ];
     wants = [ "network-online.target" ];
     # Install
     wantedBy = [ "multi-user.target" ];
     # Service
     serviceConfig = {
       Type = "notify";
       KillMode = "process";
       Delegate = "yes";
       LimitNOFILE = "infinity";
       LimitNPROC = "infinity";
       LimitCORE = "infinity";
       TasksMax = "infinity";
       TimeoutStartSec = "0";
       Restart = "always";
       RestartSec = "5s";
       ExecStartPre = "${pkgs.kmod}/bin/modprobe -a br_netfilter overlay ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack";
       ExecStart = "${pkgs.k3s}/bin/k3s server --tls-san 10.1.2.20 --no-deploy traefik --no-deploy servicelb --token-file /var/keys/k3s_token";         
     };
  };

The systemd config for the agent node is quite similar, the only difference being the k3s command. Again, this can be edited to whatever is required, configuring it the same way you would normally configure k3s.

  systemd.services.k3s-agent = {
     # Unit
     description = "Lightweight Kubernetes";
     documentation = [ "https://k3s.io" ];
     wants = [ "network-online.target" ];
     # Install
     wantedBy = [ "multi-user.target" ];
     # Service
     serviceConfig = {
       Type = "exec";
       KillMode = "process";
       Delegate = "yes";
       LimitNOFILE = "infinity";
       LimitNPROC = "infinity";
       LimitCORE = "infinity";
       TasksMax = "infinity";
       TimeoutStartSec = "0";
       Restart = "always";
       RestartSec = "5s";
       ExecStartPre = "${pkgs.kmod}/bin/modprobe -a br_netfilter overlay";
       ExecStart = "${pkgs.k3s}/bin/k3s agent --server https://10.1.2.20:6443 --token-file /var/keys/k3s_token";         
     };
  };

The kubeconfig file can be found on the master node at /etc/rancher/k3s/k3s.yaml.
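
To use the cluster from another machine, something along these lines should work (the server IP matches the --tls-san flag above; adjust the user and paths to your setup):

scp root@10.1.2.20:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s.yaml
# the file points at 127.0.0.1 by default, so swap in the server's IP
sed -i 's/127.0.0.1/10.1.2.20/' ~/.kube/k3s.yaml
export KUBECONFIG=~/.kube/k3s.yaml
kubectl get nodes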

Kubernetes on NixOS using k3s (Part 1)

June 4th, 2020

This is outdated! k3s is now packaged in nixpkgs!

k3s is a lightweight Kubernetes distribution and works incredibly well. It packages all of its dependencies, runs using SQLite in place of etcd (although it can use a number of backends!), and has a much lower memory footprint.

Building the k3s binary from scratch is quite confusing, but since it's a static binary, we don't have to worry about that. We can simply fetch the prebuilt binary from GitHub and run it. This is not the most Nix-like way of doing things, but it is the easiest. It allows me to get all the benefits of NixOS without the complexity of building everything myself or using a heavier Kubernetes distribution.

Create a folder named k3s, and inside it, create a file named default.nix with the following contents.

{ pkgs ? import <nixpkgs> {} }:
pkgs.stdenv.mkDerivation {
  name = "k3s";
  src = pkgs.fetchurl {
    url = "https://github.com/rancher/k3s/releases/download/v1.18.2-rc3%2Bk3s1/k3s";
    sha256 = "812205e670eaf20cc81b0b6123370edcf84914e16a2452580285367d23525d0f";
  };
  phases = [ "installPhase" ];
  installPhase = ''
    mkdir -p $out/bin
    cp $src $out/bin/k3s
    chmod +x $out/bin/k3s
  '';
}

This file declares a custom Nix package. The package can then be added as an overlay to nixpkgs so that you can use it the way you would use any other Nix package. Add the following stanza to the top of your configuration.nix, replacing the path with the path to your k3s folder.

  nixpkgs.overlays = [ 
      (self: super: {
          k3s = super.callPackage ../pkgs/k3s {};
        })
  ];

Overlays and packages can be managed in a multitude of ways and it doesn't really matter what method you use, as long as the package is the same.

Finally, you can add k3s to your system packages in your configuration.nix and you'll have access to the k3s CLI tools.

  environment.systemPackages = with pkgs; [
     k3s
  ];

Part 2 talks about running k3s as a service.

Fixing time sync when dual-booting macOS and Windows

May 19th, 2020

Windows stores local time on the hardware clock while macOS stores UTC, so the time will be incorrect when dual-booting between the two OSes (unless you live in the UTC timezone!). Fixing it is just a matter of syncing the clock when each OS starts.

Windows

Right-click the Start button and click Run. Enter services.msc and press Enter to open the Services manager. Find the Windows Time service (w32time), double-click it, and change its startup type to Automatic.
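
The same change can also be made from an elevated command prompt; a sketch of the equivalent commands (run as administrator):

sc config w32time start= auto
net start w32time
w32tm /resync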

macOS

Put the following plist in the file /Library/LaunchDaemons/com.zerowidth.launched.timesync.plist.

You will likely have to use sudo nano /Library/LaunchDaemons/com.zerowidth.launched.timesync.plist to edit the file as root.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Label</key>
	<string>com.zerowidth.launched.timesync</string>
	<key>ProgramArguments</key>
	<array>
		<string>sh</string>
		<string>-c</string>
		<string>sntp -sS time.apple.com</string>
	</array>
	<key>RunAtLoad</key>
	<true/>
</dict>
</plist>

This plist was created using the launchd plist generator over at zerowidth. It runs the command sntp -sS time.apple.com on boot, which syncs the local time with the Apple time servers.
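
The daemon will be picked up on the next boot; to load it immediately without rebooting, this should do it:

sudo launchctl load -w /Library/LaunchDaemons/com.zerowidth.launched.timesync.plist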

Using a YubiKey for SSH on macOS

May 18th, 2020

OpenSSH 8.2 introduced support for using any U2F key in place of a private key file. Using it on macOS with full support for ssh-agent is a bit more complex.

Generating the keys

  1. You must choose between ed25519-sk and ecdsa-sk. Try ed25519-sk (options 1 or 3) first. If it does not work due to device incompatibilities, fall back on ecdsa-sk (options 2 or 4).

  2. You must choose if you want to store the key handle as a resident key on the device. If you want to, use options 1 or 2. If not, use options 3 or 4.

    A U2F attestation requires a key handle to be sent to the device. When generating the key, ssh-keygen will create private and public key files that look similar to a normal ssh key pair. The private key file is actually a key handle that cannot be used without the hardware token; however, the hardware token also cannot be used without the key handle.

    A resident key solves this problem by storing the key handle on the device. However, your key may or may not support it, and only a limited number of resident keys can be stored on a device. Additionally, it may reduce the security of your ssh key, since anyone who steals the hardware token could use it. For this reason, a good PIN is important.

    It is your choice whether to use a resident key. If you do, you can load it directly into the ssh-agent using ssh-add -K, or write the key handle and public key to disk using ssh-keygen -K.

ssh-keygen -t ed25519-sk -O resident # 1
ssh-keygen -t ecdsa-sk -O resident   # 2
ssh-keygen -t ed25519-sk             # 3
ssh-keygen -t ecdsa-sk               # 4
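
If you generated a resident key (options 1 or 2), the handle can later be pulled back off the token as mentioned above (using the OpenSSH 8.2 tools installed below):

ssh-add -K     # load resident keys from the token into the running ssh-agent
ssh-keygen -K  # write the key handle and public key files into the current directory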

Updating SSH

OpenSSH 8.2 or newer is required to use a security key. Install it with Homebrew.

brew install openssh

You can specify the path to the private key handle in your ssh config. Otherwise, you can configure the ssh-agent.
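
For example, a per-host entry could look like this (the hostname is a placeholder, and the filename assumes the default name from option 3 above):

Host myserver.example.com
  IdentityFile ~/.ssh/id_ed25519_sk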

ssh-agent on macOS

To be used with a security key, the ssh-agent must also be version 8.2 or newer, which the system default is not.

First, disable the macOS default ssh-agent for your user.

launchctl disable user/$UID/com.openssh.ssh-agent

Next, add a new launchd service for your ssh-agent. Add the following file at ~/Library/LaunchAgents/com.zerowidth.launched.ssh_agent.plist.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Label</key>
	<string>com.zerowidth.launched.ssh_agent</string>
	<key>ProgramArguments</key>
	<array>
		<string>sh</string>
		<string>-c</string>
		<string>/usr/local/bin/ssh-agent -D -a ~/.ssh/agent</string>
	</array>
	<key>RunAtLoad</key>
	<true/>
</dict>
</plist>

And load it with launchctl load -w ~/Library/LaunchAgents/com.zerowidth.launched.ssh_agent.plist.

In your .bashrc or .zshrc, set export SSH_AUTH_SOCK="$HOME/.ssh/agent" (use $HOME rather than ~ here, since a quoted tilde is not expanded).

This plist was created using the launchd plist generator over at zerowidth. It runs the command /usr/local/bin/ssh-agent -D -a ~/.ssh/agent: -D prevents ssh-agent from forking, and -a ~/.ssh/agent directs the agent to create its socket file at the location referenced by $SSH_AUTH_SOCK.

Storing keys in the keychain

The following stanza can be adapted and placed in ~/.ssh/config. It removes the need to manually ssh-add keys with nonstandard names, and stores key passphrases, if set, in the macOS keychain.

Host *
  IgnoreUnknown UseKeychain
  UseKeychain yes
  AddKeysToAgent yes
  IdentityFile ~/.ssh/id_ecdsa_sk
  IdentityFile ~/.ssh/id_ed25519_sk

The first two lines direct ssh to use the macOS keychain to store passphrases. The third automatically adds keys that are used to the agent, and the last two specify additional keys to use. All of this can also be configured on a per-host basis.

Using NixOS with CI/CD

May 17th, 2020

Nix, NixOS, and the surrounding projects are a great set of tools that can be used to manage environments and machines declaratively.

I wanted to use NixOS to manage all my VMs declaratively. There are two parts: deploying the NixOS virtual machine and keeping it up to date.

Deploying a NixOS virtual machine

The files for this section are in this repository. I use Packer to create a virtual machine and then convert it into a template. The general steps are as follows.

  1. Create a NixOS virtual machine

    This is done however you want to do it. I use Packer and the vSphere provisioner to create a new virtual machine that boots the NixOS minimal install.

  2. Create the partitions

    The general partitioning instructions from the manual can be followed, with some tweaks. Instead of creating the swap at the end of the disk with the root partition sandwiched in between, I place the swap near the beginning of the disk so that the root partition comes last. The commands that Packer runs for this step are listed here.

    NixOS has a configuration option called boot.growPartition which will grow the root partition on boot. By placing the root partition last, this option allows it to grow automatically. Combined with the autoResize key set to true in the filesystem configuration (see the sketch after this list), I can increase the size of the root partition in vSphere, and NixOS will take care of everything else.

    This is very important for a template, as it allows the resulting VM to have an arbitrary amount of storage.

  3. Configure the machine

    I use this config file, which I created by hand starting from the output of nixos-generate-config and adding other customizations. It mounts filesystems by label so that it applies to all the virtual machines generically (which also helps clarity).

    Additionally, it creates a deploy user, gives it passwordless sudo access, and adds it to the list of trusted Nix users. This user essentially has the permissions of the root user, so it is important to keep the corresponding ssh key safe. This user is important when using CI/CD.

    Lastly, it adds the virtual machine to a ZeroTier network, a private mesh network that spans all my virtual machines and my desktop. Once the machine is built with nixos-rebuild switch, sudo rm -r /var/lib/zerotier-one/ must be run to clear the identity of the ZeroTier client so that it is not carried over to the VMs created from the template.

  4. Convert it to a template

    Packer does this for me.
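
As a concrete sketch of the partition-growing options mentioned in step 2 (the filesystem label and type are assumptions; adjust them to match your partitioning):

  boot.growPartition = true;

  fileSystems."/" = {
    device = "/dev/disk/by-label/nixos";
    fsType = "ext4";
    autoResize = true;
  };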

Creating a virtual machine simply involves deploying the template and authorizing it to connect in the ZeroTier web UI. To make my life easier, I wrote a program that gives each ZeroTier node a URL. The code is in a git repository, but there is not much documentation at the moment.

Configuring NixOS with CI/CD

I use GitHub Actions due to its integration with GitHub and support for self-hosted runners, although any other platform can be used with some tweaking.

I created a self-hosted runner on an Ubuntu virtual machine using the instructions provided by GitHub. I installed nix, jq, docker, and node, although the last two can be skipped if you are not using Docker- or JavaScript-based actions (this action is simply a shell script, but I also have other actions running).
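
Getting Nix and jq onto the runner is roughly the following (a sketch; the Docker and Node installs follow their own documentation):

sh <(curl -L https://nixos.org/nix/install) --daemon
sudo apt-get install -y jq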

I wrote the following script to deploy the configurations to my NixOS virtual machines; however, NixOps can also be used. NixOps is a better solution if this is going to be used in production, but I wanted to learn more about Nix, so I wrote my own naive implementation. It is mostly based on the steps provided in this post.

#!/bin/bash
set -e

if [ $# != 1 ]; then
    echo "Incorrect number of arguments."
    echo "Use as ./nix-deploy.sh path-to-hosts"
    exit 1
fi

for host in $(cat $1 | jq -r '.[].name'); do

    mkdir -p build/${host}
    cd build/${host}

    /nix/var/nix/profiles/default/bin/nix-build --attr system -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/885a6658073f6c5e9a4b37f131ce870a41a4ce7d.tar.gz ../../${host}.nix;

    /nix/var/nix/profiles/default/bin/nix-copy-closure --to --gzip "deploy@${host}.zt.example.com" ./result;

    ssh "deploy@${host}.zt.example.com" sudo $(realpath result/bin/switch-to-configuration) switch;
    ssh "deploy@${host}.zt.example.com" sudo nix-env --profile /nix/var/nix/profiles/system --set $(realpath result);

    cd ../../

done;

The hosts are defined in a JSON file that is currently defined as follows, but extra keys and configuration can easily be added.

[
    {
        "name": "host1"
    },
    {
        "name": "host2"
    }
]

The files for each virtual machine are similar to the following and are standard nix files.

import <nixpkgs/nixos> {
  system = "x86_64-linux";

  configuration = {
    imports = [
      ./configuration.nix
    ];

    networking.hostName = "host1";

  };
}

How the script works

For each item in the JSON array, the name key is read and assigned to the variable host, and the following steps are run.

  1. Switch to a unique temporary directory.

  2. Build the NixOS config. It searches for a file called ${host}.nix, where ${host} is defined as above.

    The -I flag is used to pin nixpkgs, mostly because I don't understand Nix well enough to have a better way to pin it; however, it works well enough.

  3. The built config is copied to the NixOS VM with nix-copy-closure over ssh. I use an automatically generated ZeroTier domain, but the IP address could also be defined in the JSON file. The ssh key that was added to the template lives on the runner.

  4. The NixOS VM is switched to the new configuration. Inside the result symlink created by the build step, there is a switch-to-configuration script that can be called. The real path of the symlink is the same on the CI/CD machine and the NixOS VM due to the magic of Nix, which makes life quite easy.

Using everything with GitHub Actions

I use the following workflow definition. The workflow is run on a self-hosted runner and only run when anything in the nixos folder is changed. All my nix configuration files and the NixOS deploy script live in this folder.

name: Nixos Deploy

# Controls when the action will run. Triggers the workflow on push events,
# but only for the prod branch
on:
  push:
    branches: [ prod ]
    paths:
      - 'nixos/**'

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "deploy"
  deploy:
    # The type of runner that the job will run on
    runs-on: [self-hosted, nix] 

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
    # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
    - uses: actions/checkout@v2

    # Runs a set of commands using the runners shell
    - name: Build Nix Machine
      run: |
        cd $GITHUB_WORKSPACE/nixos
        ./nix-deploy.sh hosts.json