Luke Davis

Fixing a Timezone Bug in Umami

Umami is a privacy-focused analytics platform I self-host on my homelab. It has a “Traffic” heatmap that shows when visitors are active - a grid of hours vs days of the week. Useful for understanding traffic patterns.

Except mine was wrong.

I noticed activity showing up at 3am when I knew I’d been browsing my own site at 9pm. A 6-hour offset. I’m in Central US (UTC-6), so the math was obvious: the heatmap was displaying UTC times instead of my local timezone.

Traffic heatmap showing activity at wrong times - 10pm showing as 4am
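The offset math is easy to sanity-check. A small sketch (dates and formatting chosen by me, not from the Umami code) formats one UTC instant in America/Chicago:

```typescript
// Sanity check of the offset: take an instant that reads as 3am in UTC
// and format it for America/Chicago (UTC-6 in winter)
const instant = new Date('2024-01-15T03:00:00Z'); // 3am UTC

const chicago = instant.toLocaleTimeString('en-US', {
  timeZone: 'America/Chicago',
  hour: 'numeric',
});
// chicago is "9 PM" — the same 6-hour shift the heatmap was showing
```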

The strange part? The API request included the correct timezone parameter. The frontend was sending timezone=America/Chicago. So the bug had to be somewhere between receiving that request and returning the data.

Finding the code

I asked Claude Code to find the endpoint that powers the Traffic heatmap. The component fetched data from /api/websites/[id]/sessions/weekly, so that’s where we started.

Umami is a Next.js app. The route handler lived at src/app/api/websites/[websiteId]/sessions/weekly/route.ts. It accepted the timezone parameter from the request and passed it along to a query function called getWeeklyTraffic().
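In shape, the relevant part of the handler looks something like this sketch (names and structure are my assumption from reading the route, not a copy of Umami's code, which also does auth and validation):

```typescript
// Hypothetical sketch of the handler's timezone handling: pull the
// timezone query parameter off the request URL and forward it untouched
function parseTimezone(requestUrl: string): string {
  const { searchParams } = new URL(requestUrl, 'http://localhost');
  // Fall back to UTC only when the client didn't send a timezone
  return searchParams.get('timezone') ?? 'utc';
}

const tz = parseTimezone(
  '/api/websites/abc/sessions/weekly?timezone=America/Chicago',
);
// tz is 'America/Chicago'; the real handler passes this into
// getWeeklyTraffic() as part of the query filters
```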

That query function had two implementations - one for PostgreSQL, one for ClickHouse. Umami supports both databases, so queries are written twice. I use PostgreSQL.

Spotting the bug

Comparing the two implementations side by side made the bug obvious:

ClickHouse (correct):

async function clickhouseQuery(websiteId: string, filters: QueryFilters) {
  const { timezone = 'utc' } = filters;
  // ... uses timezone in the query
}

PostgreSQL (bug):

async function relationalQuery(websiteId: string, filters: QueryFilters) {
  const timezone = 'utc';
  // ... ignores the timezone from filters
}

The ClickHouse version destructures the timezone from the filters object, falling back to UTC if not provided. The PostgreSQL version just hardcodes UTC and ignores whatever timezone the API received.

The frustrating part? The SQL function that generates the time buckets (getDateWeeklySQL) already supported timezone conversion. It accepts a timezone parameter and uses PostgreSQL’s AT TIME ZONE clause. The infrastructure was there - someone just forgot to wire it up.
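To make the "infrastructure was there" point concrete, here is a sketch of what a time-bucket SQL generator like that might look like. This is a simplified stand-in, not Umami's actual getDateWeeklySQL:

```typescript
// Hypothetical sketch of a weekly-bucket SQL generator. AT TIME ZONE
// shifts the stored UTC timestamp into the requested zone *before* the
// day-of-week and hour buckets are extracted, which is exactly what the
// heatmap needs
function getDateWeeklySQL(field: string, timezone: string): string {
  const local = `${field} at time zone '${timezone}'`;
  return `extract(dow from ${local}) * 24 + extract(hour from ${local})`;
}

const sql = getDateWeeklySQL('created_at', 'America/Chicago');
// With the hardcoded 'utc', every call produced UTC buckets no matter
// what the client sent — the timezone argument just never reached here
```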

The fix

One line:

// Before
const timezone = 'utc';

// After
const { timezone = 'utc' } = filters;

That’s it. Destructure instead of hardcode.
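The semantics of the fixed pattern, isolated into a tiny runnable example (the QueryFilters shape here is trimmed down to just the field that matters):

```typescript
interface QueryFilters {
  timezone?: string;
}

// The fixed pattern: destructuring with a default uses the caller's
// timezone and falls back to 'utc' only when none was provided
function resolveTimezone(filters: QueryFilters): string {
  const { timezone = 'utc' } = filters;
  return timezone;
}

const a = resolveTimezone({ timezone: 'America/Chicago' }); // 'America/Chicago'
const b = resolveTimezone({}); // 'utc'
```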

Deploying the fork

I could have submitted a PR to upstream Umami and waited for a release. But I wanted this fixed now, and maintaining a fork for a one-line change seemed manageable.

I forked the repo, committed the fix, and set up a GitHub Action to build and push the Docker image on every push to master. The workflow includes a verification step that greps for the fix before building:

- name: Verify timezone fix
  run: |
    grep -q "const { timezone = 'utc' } = filters" src/queries/sql/getWeeklyTraffic.ts || {
      echo "ERROR: Timezone fix is missing"
      exit 1
    }

This protects against upstream merges accidentally reverting the fix. If I merge in new Umami releases and the grep fails, the build fails. I’ll know immediately.

The image pushes to GitHub Container Registry as a private package. My homelab server needed to authenticate to pull it - I used 1Password CLI to inject the PAT without exposing it in shell history:

echo '{{ op://Private/ghcr-token/token }}' | op inject | ssh server "docker login ghcr.io -u me --password-stdin"

Then a simple docker compose up -d to deploy the new image.

Verification

I used Playwright MCP to open the dashboard and take a screenshot. The heatmap now shows activity at the correct local times - evening activity appears in the evening rows, not shifted to early morning.

Traffic heatmap showing correct local times

Bug fixed. From “this looks wrong” to deployed fix: under an hour.

What made this fast

The fix itself was trivial - one line. Finding it was the challenge.

Umami’s codebase isn’t huge, but it’s unfamiliar to me. I don’t work in it daily. Tracing from a UI component to an API route to a query function to a SQL generator would normally involve a lot of grepping and file-hopping and “wait, where does this get called from?”

Claude Code collapsed that process. I described the symptom, it found the endpoint, traced the code path, and compared implementations. Within minutes I was looking at the bug.

The fork maintenance story is also better than it used to be. GitHub Actions means I don't need to build locally. The grep verification means I can merge upstream changes without worrying about silent regressions. If I eventually submit a PR and it gets merged, I can switch back to the official image with a one-line change in my compose file.

For now, the fork works. And I can see when my visitors actually show up.