
+++
title = "ZosimOS Devlog: Workflow Setup"
date = 2025-03-31T12:00:00-04:00
tags = ["osdev", "zig", "raspberry pi"]
+++

Programming my own OS has been a dream of mine for years, but my attempts always end up going the same way: writing some code that can run in a virtual machine, learning some interesting stuff about x86_64, and then giving up before I even reach userspace. Now I find myself called again by the siren song, but I'm determined not to go down the same road again ... so I'm doing it on an ARM machine this time. I ended up settling on the Raspberry Pi 4B, since the price was reasonable and the large community makes it easy to find answers to any question I might have. But before I can set out to build the greatest OS ever, I need to set up a good workflow for building and testing.

## Building the Kernel

For this go around, I decided to write my kernel in Zig. I've gotten too used to the conveniences of a modern language to go back to C, and while I love Rust dearly, I've found myself frustrated by having to use shell scripts and Make to take care of the parts of getting to a bootable image that Cargo doesn't concern itself with. Zig's build system is flexible enough to handle the whole process itself, and while nothing in this post is particularly difficult, I have reason to believe it'll keep up even as the complexity increases.

I won't go into detail about the actual code since 1. it's extremely trivial and 2. it's not the main focus of this post. All that matters is that it consists of some Zig code, an assembly stub, and a linker script to put it all together. In our `build.zig` file we therefore write:

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    const kernel = b.addExecutable(.{
        .name = "kernel",
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });
    kernel.setLinkerScript(b.path("src/Link.ld"));
    kernel.addAssemblyFile(b.path("src/startup.s"));

    b.installArtifact(kernel);
}
```
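For completeness, the Zig side of that trivial kernel might look something like this. This is just a sketch: the `kmain` symbol name, and the assumption that `startup.s` sets up a stack and branches to it, are illustrative rather than the exact contents of my files.

```zig
// src/main.zig (sketch): the whole "kernel" for now. Assumes the
// assembly stub in startup.s parks the secondary cores, sets up a
// stack, and branches to `kmain`.
export fn kmain() noreturn {
    while (true) {
        // Idle instead of busy-spinning while we wait for something to do.
        asm volatile ("wfe");
    }
}
```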

As it stands, this build script will try to build our kernel for the host machine and OS. To get a freestanding binary we need to change the target:

```zig
const target = b.resolveTargetQuery(.{
    .cpu_arch = .aarch64,
    .os_tag = .freestanding,
});
```
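Optionally, since we know exactly which chip we're targeting, we can also pin the CPU model so the compiler can use (and not exceed) the features of the Pi 4's Cortex-A72. This is my own addition rather than something the rest of the setup needs:

```zig
const target = b.resolveTargetQuery(.{
    .cpu_arch = .aarch64,
    // Pin codegen to the Pi 4B's Cortex-A72.
    .cpu_model = .{ .explicit = &std.Target.aarch64.cpu.cortex_a72 },
    .os_tag = .freestanding,
});
```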

Now if we run `zig build` we'll see our compiled kernel at `zig-out/bin/kernel`. But we're not done yet: this is an ELF file, and the Pi only knows how to boot flat binaries. We'll still keep the ELF around for debugging, but we add the following to create a flat binary:[^1]

```zig
const kernel_bin = b.addObjCopy(kernel.getEmittedBin(), .{
    .format = .bin,
    .basename = "kernel.img",
});
const install_kernel = b.addInstallBinFile(
    kernel_bin.getOutput(),
    "kernel8.img",
);
b.getInstallStep().dependOn(&install_kernel.step);
```
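If you find yourself rebuilding the image a lot, one optional nicety (my own addition, nothing below depends on it) is to hang the install off a named step, so that `zig build img` produces just the flat image:

```zig
// Hypothetical convenience: `zig build img` builds and installs
// only the flat kernel image.
const img_step = b.step("img", "Build the flat kernel8.img");
img_step.dependOn(&install_kernel.step);
```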

Before we can boot this kernel image though, we need a few more files. The Pi is a little strange in that it reads most of its firmware from the boot drive instead of from flash on the board. Normally we'd just download these firmware files ourselves, but we can do better!

## Fetching Firmware

A great thing about the Zig package manager is that it can be used to fetch any dependency, not just Zig packages. In this case we're going to use it to grab the firmware files for the Pi so that we can reference them in our build script. If we run:

```sh
$ zig fetch --save=rpi_firmware 'https://github.com/raspberrypi/firmware/archive/refs/tags/1.20250305.tar.gz'
```

then the latest (at time of writing) release of the firmware will be added to our project as a dependency. This also gives us a way to ensure the integrity of the firmware (or at least that it hasn't changed since we first fetched it), since it's saved alongside a hash. Let's adjust the build script to place all the files we need[^2] in a boot directory:

```zig
const kernel_bin = b.addObjCopy(kernel.getEmittedBin(), .{
    .format = .bin,
    .basename = "kernel.img",
});
const install_kernel = b.addInstallFile(
    kernel_bin.getOutput(),
    "boot/kernel8.img",
);
b.getInstallStep().dependOn(&install_kernel.step);

const firmware = b.dependency("rpi_firmware", .{});
b.installDirectory(.{
    .source_dir = firmware.path("boot"),
    .install_dir = .prefix,
    .install_subdir = "boot",
    .include_extensions = &.{
        "start4.elf",
        "start4db.elf",
        "fixup4.dat",
        "bcm2711-rpi-4-b.dtb",
    },
});
```
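For reference, the entry that `zig fetch --save=rpi_firmware` writes into `build.zig.zon` looks roughly like this; the hash is recorded by `zig fetch` itself, so I've elided it here:

```zig
// build.zig.zon (sketch of the generated dependency entry)
.dependencies = .{
    .rpi_firmware = .{
        .url = "https://github.com/raspberrypi/firmware/archive/refs/tags/1.20250305.tar.gz",
        .hash = "...",
    },
},
```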

## Network Booting

Having to repeatedly remove the memory card, transfer a new kernel to it, and put it back in sounds like a massive pain. Luckily, the Pi 4B supports network booting from a tftp server. The simplest way to set this up would be to install the tftp-hpa package and run the server normally, having it serve the `zig-out/boot` folder. But I don't like having daemons installed on my system that I'm only going to use for a single project, so for no real reason other than aesthetics I'm going to run the tftp server inside a container. The Dockerfile is reproduced below in its entirety:

```dockerfile
# docker/tftp/Dockerfile

FROM alpine:3

RUN apk add --no-cache tftp-hpa

ENTRYPOINT ["in.tftpd"]
```

This is combined with the following `compose.yaml`:

```yaml
services:
  tftp:
    build: ./docker/tftp
    ports:
      - 69:69/udp
    volumes:
      - ./zig-out/boot:/data:ro
    command: --verbose --foreground --secure /data
```

Running `docker compose up` should now start the tftp server properly.[^3]
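Since the Zig build system can run arbitrary commands, you could even wire this into `build.zig` as a convenience step. This is a hypothetical sketch I haven't actually adopted, assuming `compose.yaml` sits next to `build.zig`:

```zig
// Hypothetical `zig build serve` step: install the boot files,
// then run the tftp container in the foreground.
const serve_cmd = b.addSystemCommand(&.{ "docker", "compose", "up", "tftp" });
serve_cmd.step.dependOn(b.getInstallStep());
const serve_step = b.step("serve", "Install boot files and start the tftp server");
serve_step.dependOn(&serve_cmd.step);
```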

Lastly, we have to configure the Pi itself. Network boot configuration is stored in the board's EEPROM, so boot into Linux on the Pi and run

```sh
# rpi-eeprom-config --edit
```

This will bring up a text editor where you can edit the EEPROM variables in an INI-like format. We'll write the following config:

```ini
[all]
BOOT_UART=1
BOOT_ORDER=0xf12

TFTP_PREFIX=1
TFTP_IP=x.y.z.w
```

where `TFTP_IP` should be set to the IP of your tftp server. All of these options are documented here if you want to know specifically what they do.

Now, assuming it has a connection, our Pi should successfully boot over the network! But since all our so-called kernel does at this point is spin forever, how can we tell that anything's even happening?

## Remote Debugging

The easiest choice would probably be to make the program actually do something, like blink the activity LED, but I've decided to set up remote debugging instead. I'm using the FT2232H Mini Module for this, since it lets me do both UART and JTAG over a single USB connection to my dev machine. For an explanation of how to hook it up to the Pi, see this blog post. Like the author of that post, I'll be using OpenOCD in a Docker container to connect to the board, but I went about it a little differently: since 2021, OpenOCD has included a config for the Pi 4B by default, so I only had to write the following short config for the FT2232H:

```tcl
# docker/openocd/interface/ft2232h.cfg

adapter driver ftdi

ftdi device_desc "FT2232H MiniModule"
ftdi vid_pid 0x0403 0x6010

ftdi layout_init 0x0000 0x000b

ftdi channel 0
```

I'm extremely new to OpenOCD and JTAG in general, so for all I know this might be terribly broken in some way I just haven't noticed yet, but it's been working so far. The Dockerfile is then as follows:

```dockerfile
# docker/openocd/Dockerfile

FROM alpine:3

RUN apk add --no-cache openocd

ADD interface/ft2232h.cfg /usr/share/openocd/scripts/interface/

ENTRYPOINT ["openocd"]
```

And we add the following entry under `services` in `compose.yaml`:

```yaml
openocd:
  build: ./docker/openocd
  ports:
    - 6666:6666
    - 4444:4444
    - 3333:3333
  devices:
    - "/dev/bus/usb:/dev/bus/usb"
  command: -f interface/ft2232h.cfg -f board/rpi4b.cfg -c "bindto 0.0.0.0"
```

Lastly, we need to add a file called `config.txt` with the following contents:

```ini
[all]
gpio=22-27=np
enable_jtag_gpio=1
enable_uart=1
uart_2ndstage=1
```

and make sure to install it in our `build.zig`:

```zig
b.installFile("config.txt", "boot/config.txt");
```

Now we should be able to attach to `/dev/ttyUSB0` to see the UART messages when we reboot, and connect to OpenOCD's GDB server on port 3333 to debug the running kernel.

## What's Next?

For the project in general, my next goal is to set up UART communication, and then hopefully some physical memory management with ideas from this paper. As far as the specific subject of this post goes, I want to make the startup experience nicer. As it stands, OpenOCD starts at the same time as the tftp server, before the board can possibly be ready, so its container always has to be restarted. It would be good to have it wait to start connecting until after the board has booted, but I don't have any great ideas on how to pull that off right now.


[^1]: The filename `kernel8.img` is important since it tells the board to boot up in 64-bit mode. This can be overridden in `config.txt` if you really want a different filename.

[^2]: I determined this list by trial and error, but it's probably documented somewhere I missed.

[^3]: Since tftpd only logs to syslog and the container won't normally have a syslogd running in it, you'll have to modify the setup if you want to see logs. I initially did this during testing by using busybox's syslogd in the container, but found that it made shutdown times very long, which wasn't ideal when frequently restarting the container.