
My freezer in a CMS: How caffeine tracking turned into a data collection

The devices in our pockets already hold most of our lives. Treating those moments as data turns a day into something readable: caffeine and sleep, runs and focus, small entries that add up to patterns. By separating what a thing is from when it happens, the story of a week becomes visible without digging through apps.
I rebuilt the dashboard with a simple idea: content lives where content belongs, events live where events belong. Coffee facts (name, roaster, origin, variety, notes, photo) sit in Contentful. Events (each brew, each run, each daily log) are rows in Neon. The separation is small in code and large in effect: logging is fast, queries are clean, and the UI finally looks like it was built for my site instead of duct-taped to it. The mental model is editorial first, not schema first.
My freezer is full of specialty coffee I bring back from work trips; each bag has a story. That belongs in a CMS, where I can browse, edit, and reuse those facts. Drinking a coffee is an event: timestamp, brew method, volume, and optionally a pointer to the bag in Contentful. That belongs in a database. After that split, the dashboard pieces fell into place. The coffee-origins donut pulls the country from Contentful without gymnastics. The caffeine graph lives on its own, modeled from events plus a small body profile.
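To make the split concrete, here is a rough sketch of the two shapes. The field names are illustrative, not the real content model or table; only coffee_cf_id also appears in the model code further down.

// Editorial facts: live in Contentful, edited by hand, reused everywhere.
type CoffeeFacts = {
  id: string;            // Contentful entry id
  name: string;
  roaster: string;
  origin: string;        // country, feeds the origins donut
  variety?: string;
  notes?: string;
  photoUrl?: string;
};

// One event: lives in Neon, written by the logger, never edited in the CMS.
type BrewEvent = {
  id: number;
  timeISO: string;       // when it happened
  method: string;        // espresso, v60, …
  amount_ml: number;
  coffee_cf_id?: string | null; // optional pointer back to the Contentful entry
};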
The caffeine model is the anchor. Years ago I wrote an iOS app that estimated caffeine blood levels. The code is still somewhere on the internet. I read a lot about half-life, distribution volume, and personal factors. This version keeps the good parts and loses the friction. Intake is just events. Body parameters live in a tiny profile. The model renders intake vs. body load on the same line chart, aligned from 00:00 to 24:00 with carry-over from late-night shots.
export function modelCaffeine(
  events: { timeISO: string; method: string; amount_ml: number; coffee_cf_id?: string | null }[],
  body: { weight_kg?: number; vd_L_per_kg?: number; half_life_hours?: number; bioavailability?: number; sensitivity?: number },
  opts: { startMs: number; endMs: number; gridMinutes?: number; alignToHour?: boolean; halfLifeHours?: number }
) {
  const grid = opts.gridMinutes ?? 60; // hour ticks
  const halfLife = opts.halfLifeHours ?? body.half_life_hours ?? 5; // hours
  const k = Math.log(2) / (halfLife * 60); // first-order elimination rate per minute
  const vd = (body.vd_L_per_kg ?? 0.7) * Math.max(40, body.weight_kg ?? 70); // distribution volume in L (only needed for a concentration view)
  const F = body.bioavailability ?? 1;
  const S = body.sensitivity ?? 1;

  // Include a lookback so a 23:59 espresso yesterday shows up this morning.
  const lookbackMs = Math.max(24, Math.ceil(halfLife * 4)) * 60 * 60 * 1000;
  const t0 = opts.startMs - lookbackMs;
  const t1 = opts.endMs;

  // Turn events into an impulse train of absorbed caffeine (mg).
  const impulses: Array<{ t: number; mg: number }> = [];
  for (const e of events) {
    const t = Date.parse(e.timeISO);
    if (!Number.isFinite(t) || t < t0 || t > t1) continue;
    const mg = estimateMgFromMethod(e.method, e.amount_ml) * F * S;
    impulses.push({ t, mg });
  }

  // Sample on a fixed grid; convolve impulses with e^{-kΔt}.
  const points: Array<{ timeISO: string; intake_mg: number; body_mg: number }> = [];
  for (let ms = opts.startMs; ms <= t1; ms += grid * 60 * 1000) {
    const intake = impulses.filter(p => sameBucket(p.t, ms, grid)).reduce((s, p) => s + p.mg, 0);
    // Everything consumed up to this sample, decayed exponentially since its intake time.
    const bodyMg = impulses.reduce(
      (sum, p) => (p.t <= ms ? sum + p.mg * Math.exp(-k * ((ms - p.t) / (60 * 1000))) : sum),
      0
    );
    // Charted as mg in the body; blood concentration (mg/L) would divide by vd.
    points.push({ timeISO: new Date(ms).toISOString(), intake_mg: intake, body_mg: bodyMg });
  }

  return alignToHour(points, grid);
}

// […] helpers: estimateMgFromMethod, sameBucket, alignToHour
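To see the carry-over in action, here is a quick usage sketch with made-up events; it assumes the elided helpers above are in scope.

// Hypothetical data: a late espresso the night before, then a morning V60.
const day = modelCaffeine(
  [
    { timeISO: '2024-05-01T23:40:00Z', method: 'espresso', amount_ml: 40 },
    { timeISO: '2024-05-02T07:15:00Z', method: 'v60', amount_ml: 250 },
  ],
  { weight_kg: 75, half_life_hours: 5 },
  {
    startMs: Date.parse('2024-05-02T00:00:00Z'),
    endMs: Date.parse('2024-05-03T00:00:00Z'),
    gridMinutes: 60,
  }
);

// The first point is already non-zero: the 23:40 shot decays across midnight
// instead of disappearing when the chart's window starts.
console.log(day[0]);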
With that in place, the scatter of “yesterday’s caffeine vs today’s sleep score” became a one-liner to assemble, and it looks like a cloud you can actually reason about. I’m still entering my sleep score by hand, based on the rudimentary data from Apple Watch sleep phases, but once I switch to a Garmin I’ll feed in its score and see whether the story changes. The dashboard grew up in other ways, too. I moved the map from the homepage into the travel section and wired it like any other card: visited countries, last visits, and a live location marker that’s keyboard-focusable and labeled. The panels are lighter and consistent with the rest of the site, and the client/server split removes hydration lumps. Tremor sits on top and does its job: legends are tidy, the data is typed, and small loading skeletons keep the page feeling responsive. These details aren’t glamorous, but they compound and move the whole thing closer to "production ready".
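For that caffeine-vs-sleep scatter, the pairing itself is simple. Here is a minimal sketch, assuming hypothetical daily-summary shapes rather than my actual queries:

type DailyCaffeine = { date: string; total_mg: number }; // e.g. "2024-05-01"
type DailySleep = { date: string; score: number };       // hand-entered for now

// Pair each day's caffeine with the *next* day's sleep score.
function caffeineVsSleep(caffeine: DailyCaffeine[], sleep: DailySleep[]) {
  const sleepByDate = new Map(sleep.map(s => [s.date, s.score] as const));
  return caffeine.flatMap(c => {
    const next = new Date(Date.parse(c.date + 'T00:00:00Z') + 24 * 60 * 60 * 1000)
      .toISOString()
      .slice(0, 10);
    const score = sleepByDate.get(next);
    return score === undefined ? [] : [{ caffeine_mg: c.total_mg, sleep_score: score }];
  });
}

Days without a logged sleep score simply drop out of the cloud.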
The project remains API-first. The settings page and automations speak to the same endpoints that persist events and profiles, which keeps viewer and editor paths aligned and prevents divergent logic. It also prepares the ground for model-driven features. Contentful keeps getting richer as I drink through the freezer: grind settings, brew recipes, ratings, and provenance, the things that are inherently editorial. Every day I am collecting additional data points. That boundary is where Vercel’s AI SDK makes sense: it might be able to make broader statements about my behavior and its implications. Two different sources; one story that’s easy for humans to read. If that sounds like overkill for caffeine and running, it isn’t. The point is that small, boring decisions add up. I also used this as the push to make the code of this website open source. You can find it on GitHub.
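As a minimal sketch of what "same endpoints" means in practice: one route handler that the dashboard UI, the settings page, and any automation all post to. The path, table, and column names here are placeholders, assuming Next.js route handlers and Neon’s serverless driver, not the site’s actual schema.

// app/api/events/route.ts — shared by viewer, editor, and automations.
import { neon } from '@neondatabase/serverless';

const sql = neon(process.env.DATABASE_URL!);

export async function POST(req: Request) {
  const { timeISO, method, amount_ml, coffee_cf_id = null } = await req.json();
  if (!timeISO || !method || typeof amount_ml !== 'number') {
    return Response.json({ error: 'invalid event' }, { status: 400 });
  }
  const [row] = await sql`
    INSERT INTO brew_events (time, method, amount_ml, coffee_cf_id)
    VALUES (${timeISO}, ${method}, ${amount_ml}, ${coffee_cf_id})
    RETURNING id
  `;
  return Response.json({ id: row.id }, { status: 201 });
}

Because every caller posts the same payload to the same route, validation and persistence live in exactly one place.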
I'm a coffee-nerd builder who likes numbers that tell a story. This project was partly an interview showcase, partly a blueprint I’ll reuse, and above all something I’ll actually use.