+++
title = "ZosimOS Devlog: Workflow Setup"
date = 2025-03-31T12:00:00-04:00
tags = ["osdev", "zig", "raspberry pi"]
+++

Programming my own OS has been a dream of mine for years, but my attempts always
end up going the same way: writing some code that can run in a virtual machine,
learning some interesting stuff about x86_64, and then giving up before I even
reach userspace. Now I find myself called again by the siren song, but I'm
determined not to go down the same road again... so I'm doing it on an ARM
machine this time. I ended up settling on the Raspberry Pi 4B, since the price
was reasonable and the large community makes it easy to find answers to any
question I might have. But before I can set out to build the greatest OS ever,
I need to set up a good workflow for building and testing.

## Building the Kernel

For this go around, I decided to write my kernel in Zig. I've gotten too used to
the conveniences of a modern language to go back to C, and while I love Rust
dearly I've found myself frustrated by having to use shell scripts and make to
take care of the parts of getting to a bootable image that Cargo doesn't concern
itself with. Zig's build system is flexible enough to handle the whole process
itself, and while nothing in this post is particularly difficult, I have reason
to believe it'll keep up even as the complexity increases.

I won't go into detail about the actual code since 1\. it's extremely trivial
and 2\. it's not the main focus of this post, but you can find it [here][code].
All that matters is that it consists of some Zig code, an assembly stub, and
a linker script to put it all together. In our `build.zig` file we therefore
write:

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    const kernel = b.addExecutable(.{
        .name = "kernel",
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });
    kernel.setLinkerScript(b.path("src/Link.ld"));
    kernel.addAssemblyFile(b.path("src/startup.s"));

    b.installArtifact(kernel);
}
```

As it is, this will try to build our kernel for the host machine and OS.
To get a freestanding binary we need to change `target`:

```zig
const target = b.resolveTargetQuery(.{
    .cpu_arch = .aarch64,
    .os_tag = .freestanding,
});
```
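
Since we know exactly which board we're targeting, we could go a step further and pin the CPU model too (the Pi 4's BCM2711 has Cortex-A72 cores). This is optional, and the exact field names below assume a recent `std.Build` API:

```zig
// Optional: pin the CPU model so the compiler schedules for the
// Cortex-A72 cores in the Pi 4's BCM2711.
const target = b.resolveTargetQuery(.{
    .cpu_arch = .aarch64,
    .os_tag = .freestanding,
    .cpu_model = .{ .explicit = &std.Target.aarch64.cpu.cortex_a72 },
});
```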

Now if we run `zig build` we'll see our compiled kernel in `zig-out/bin/kernel`.
But we're not done yet. This is an ELF file, and the Pi only knows how to boot
flat binaries. We'll still keep the ELF around for debugging, but we add the
following to create a flat binary:[^2]

```zig
const kernel_bin = b.addObjCopy(kernel.getEmittedBin(), .{
    .format = .bin,
    .basename = "kernel.img",
});
const install_kernel = b.addInstallBinFile(
    kernel_bin.getOutput(),
    "kernel8.img",
);
b.getInstallStep().dependOn(&install_kernel.step);
```

Before we can boot this kernel image though, we need some other stuff. The Pi is
a little strange in that it reads most of its firmware from the boot drive
instead of from flash on the board. Normally we'd just download these firmware
files ourselves, but we can do better!

[^2]: The filename `kernel8.img` is important since it tells the board to boot
    up in 64 bit mode. This can be overridden in [`config.txt`][7] if you really
    want a different filename.

[code]: https://git.wires.systems/wires/zosimos/src/commit/b6b96f651f060ae6cff9e4e184799bb354ce6d07
[7]: https://www.raspberrypi.com/documentation/computers/config_txt.html

## Fetching Firmware

A great thing about the Zig package manager is that it can be used to fetch any
dependencies, not just Zig packages. In this case we're going to use it to grab
the firmware files for the Pi so that we can reference them in our build script.
If we run:

```console
$ zig fetch --save=rpi_firmware 'https://github.com/raspberrypi/firmware/archive/refs/tags/1.20250305.tar.gz'
```

then the latest (at time of writing) release of the firmware will be added to
our project as a dependency. This also gives us a way to verify the integrity
of the firmware (at least that it hasn't changed since we first fetched it),
since its hash is saved alongside the URL. Let's adjust the build script to
place all the files we need[^3] in a `boot` directory:

[^3]: I determined this by trial and error, but it's probably documented
    somewhere I missed.

```zig
const kernel_bin = b.addObjCopy(kernel.getEmittedBin(), .{
    .format = .bin,
    .basename = "kernel.img",
});
const install_kernel = b.addInstallFile(
    kernel_bin.getOutput(),
    "boot/kernel8.img",
);
b.getInstallStep().dependOn(&install_kernel.step);

const firmware = b.dependency("rpi_firmware", .{});
b.installDirectory(.{
    .source_dir = firmware.path("boot"),
    .install_dir = .prefix,
    .install_subdir = "boot",
    .include_extensions = &.{
        "start4.elf",
        "start4db.elf",
        "fixup4.dat",
        "bcm2711-rpi-4-b.dtb",
    },
});
```
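
For reference, the `--save` flag from earlier records the dependency in `build.zig.zon` with an entry along these lines (the hash is computed for you at fetch time; it's shown as a placeholder here):

```zig
// build.zig.zon (excerpt) -- the hash below is a placeholder,
// zig fetch fills in the real value.
.dependencies = .{
    .rpi_firmware = .{
        .url = "https://github.com/raspberrypi/firmware/archive/refs/tags/1.20250305.tar.gz",
        .hash = "...",
    },
},
```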

## Network Booting

Having to repeatedly remove the memory card, transfer a new kernel to it, and
put it back in sounds like a massive pain. Luckily, the Pi 4B supports network
booting from a tftp server. The simplest way to set this up would be to install
the `tftp-hpa` package and then just run the server normally, having it serve
the `zig-out/boot` folder. But I don't like installing daemons on my system
that I'll only use for a single project, so for no real reason other than
aesthetics I'm going to run the tftp server inside a container. The
Dockerfile is reproduced below in its entirety:

```dockerfile
# docker/tftp/Dockerfile

FROM alpine:3

RUN apk add --no-cache tftp-hpa

ENTRYPOINT ["in.tftpd"]
```

This is combined with the following `compose.yaml`:

```yaml
services:
  tftp:
    build: ./docker/tftp
    ports:
      - 69:69/udp
    volumes:
      - ./zig-out/boot:/data:ro
    command: --verbose --foreground --secure /data
```

Running `docker compose up` should now start the tftp server properly.[^1]
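
To check that the server is actually serving files before involving the Pi at all, you can fetch the kernel image back with any tftp client. curl works if your build includes tftp support (most distro packages do):

```console
$ curl -s -o /tmp/kernel8.img tftp://127.0.0.1/kernel8.img
$ cmp /tmp/kernel8.img zig-out/boot/kernel8.img && echo "tftp OK"
```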

[^1]: Since tftpd only logs to syslog and the container won't normally have
    a syslogd running in it, you'll have to modify the setup if you want to see
    logs. I initially did this during testing by using busybox's syslogd in the
    container, but found that it made shutdown times very long, which wasn't
    ideal when frequently restarting the container.

Lastly, we have to configure the Pi itself. Network boot config info is stored
in the board's EEPROM, so boot into Linux on the Pi and run


```console
# rpi-eeprom-config --edit
```

This will bring up a text editor where you can edit the EEPROM variables in
a `.ini`-like format. We'll write the following config:

```ini
[all]
BOOT_UART=1
BOOT_ORDER=0xf12

TFTP_PREFIX=1
TFTP_IP=x.y.z.w
```

where `TFTP_IP` should be set to the IP address of your tftp server. `BOOT_ORDER`
is read one nibble at a time from right to left, so `0xf12` means: try network
boot (`2`) first, fall back to the SD card (`1`), and restart (`f`) if both fail.
All of these options are documented [here][2] if you want to know what
specifically they do.

[2]: https://www.raspberrypi.com/documentation/computers/raspberry-pi.html

Now, assuming it has a connection, our Pi should successfully boot over the
network! But since all our so-called kernel does at this point is spin forever,
how can we tell that anything's even happening?

## Remote Debugging

The easiest choice would probably be to make the program actually do something,
like blink the activity LED, but I've decided to set up remote debugging
instead. I'm using the [FT2232H mini module][3] for this, since it lets me do
both UART and JTAG over a single USB connection on my dev machine. For an
explanation of how to hook it up to the Pi, see [this blog post][4]. Like the
author of that post I'll be using OpenOCD in a docker container to connect to
the board, but I went about it a little differently. Since 2021, OpenOCD has
included a config for the Pi4B by default, so I only had to write the following
short config for the FT2232H:

```tcl
# docker/openocd/interface/ft2232h.cfg

adapter driver ftdi

ftdi device_desc "FT2232H MiniModule"
ftdi vid_pid   0x0403 0x6010

ftdi layout_init 0x0000 0x000b

ftdi channel 0
```

I'm extremely new to OpenOCD and JTAG in general, so for all I know this might
be terribly broken in some way I just haven't noticed yet, but it's been working
so far. The Dockerfile is then as follows:

```dockerfile
# docker/openocd/Dockerfile

FROM alpine:3

RUN apk add --no-cache openocd

ADD interface/ft2232h.cfg /usr/share/openocd/scripts/interface/

ENTRYPOINT ["openocd"]
```

And we add the following entry under `services` in `compose.yaml`:

```yaml
openocd:
  build: ./docker/openocd
  ports:
    - 6666:6666
    - 4444:4444
    - 3333:3333
  devices:
    - "/dev/bus/usb:/dev/bus/usb"
  command: -f interface/ft2232h.cfg -f board/rpi4b.cfg -c "bindto 0.0.0.0"
```

Lastly, we need to add a file called `config.txt` with the following contents:

```ini
[all]
gpio=22-27=np
enable_jtag_gpio=1
enable_uart=1
uart_2ndstage=1
```

and make sure to install it in our `build.zig`:

```zig
b.installFile("config.txt", "boot/config.txt");
```

Now we should be able to attach to `/dev/ttyUSB0` to see the UART messages when
we reboot and connect to OpenOCD's GDB server to debug the running kernel.

[3]: https://ftdichip.com/products/ft2232h-mini-module/
[4]: https://vinnie.work/blog/2021-04-02-ft2232h-rpi4#wiring-up-the-hardware
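
A typical session on the dev machine might then look like the following (ports as in the compose file above; `picocom` is just one choice of serial terminal, and 115200 baud is the Pi's default console rate — adjust both to taste):

```console
$ picocom -b 115200 /dev/ttyUSB0    # terminal 1: watch UART output
$ gdb zig-out/bin/kernel            # terminal 2: a gdb with aarch64 support
(gdb) target extended-remote localhost:3333
(gdb) info registers
```

Note that we point GDB at the ELF file, not the flat `kernel8.img`, since the ELF is what carries the debug info.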

## What's Next?

For the project in general, my next goal is to set up UART communication, and
then hopefully some physical memory management with ideas from [this paper][5].
As far as the specific subject of this post goes, I want to work on making the
startup experience nicer. As it stands right now, OpenOCD starts at the same
time as the tftp server, before the board has had a chance to boot, so its
container always has to be restarted. It would be better to have it wait until
the board is up before trying to connect, but I don't have any great ideas on
how to pull that off right now.

[5]: https://www.usenix.org/system/files/atc23-wrenger.pdf