Office Hours: Ask a Senior Engineer on Fridays from 9am to 3pm CEST

I’m starting something simple to help fellow engineers.

Over the last few months, people have reached out with questions about direction, strategy, and “what actually matters” when trying to get hired in Europe/Sweden, mostly in response to an off-the-cuff post I wrote over the summer (the Hug-a-Platform-Engineer one).

I’m actually grateful that some of the humor got lost in translation! It’s opened up an opportunity to do something new.

So, I’m starting a weekly series: Office Hours: Ask a Senior Engineer on Fridays from 9am to 3pm CEST.

If you’re a junior engineer, a career-switcher, mid-and-stuck, senior-and-stuck, or just trying to break into engineering in the EU, drop one question below.

Anything about:
* skill focus
* job search strategy
* what EU/Sweden roles really require
* technical direction
* getting unstuck

To make it easier, here’s an example:

Question
“I’m a platform engineer outside of Europe. How do I get noticed by EU teams?”

Mini-answer
Make one small public project that demonstrates reliability work — something like a simple service with a playbook + basic telemetry. Show deployment with CI and basic secrets management. Hiring managers trust what they can see, and you can build credibility in 2–4 weeks this way.

So, that’s it!

What’s one question you want clarity on?

I’ll pick one each week, and I’ll answer it honestly and practically — no grift, no sales pitch.

Verifying Android Bundles Before Publishing

I posted last week about writing some scripts to sanity-check and validate Android bundles before uploading them to the Google Play Store (and having them rejected!).

I’ve made the scripts, one for AABs and one for APKs, open source as public gists.

curl -s https://gist.githubusercontent.com/minademian/71a5a3d0243496ce6a2a49956c01e4cd/raw/05189c8da3c8833595546a68325a3617cbd1944f/verify_aab_release.sh | bash
curl -s https://gist.githubusercontent.com/minademian/6e841c8e1a84308c6b3dc937a2d4a4cd/raw/4768df88d3d3111c134700c0637ed030a97af3a9/verify_apk_release.sh | bash
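If you’d rather not pipe curl straight into bash, you can vendor the scripts in your repo and run them from there (the scripts/ directory below is just a suggested location):

# Download the gists into the repo instead of piping them to bash
mkdir -p scripts
curl -s https://gist.githubusercontent.com/minademian/71a5a3d0243496ce6a2a49956c01e4cd/raw/05189c8da3c8833595546a68325a3617cbd1944f/verify_aab_release.sh -o scripts/verify_aab_release.sh
curl -s https://gist.githubusercontent.com/minademian/6e841c8e1a84308c6b3dc937a2d4a4cd/raw/4768df88d3d3111c134700c0637ed030a97af3a9/verify_apk_release.sh -o scripts/verify_apk_release.sh
chmod +x scripts/*.sh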

You can take it one step further and add it to your pre-commit hook, using the hook-management tooling of your choice.

It will look something like this (assuming a verify-aab script in package.json that runs "bash ./scripts/verify_aab_release.sh"):

yarn test || exit 1
yarn verify-aab || exit 1
yarn validate-packages || exit 1
npx lint-staged || exit 1

Meet My Second-Born: yarn-shell-completion!

Continuing my foray into OSS, I’ve been spending more time with my first (tech) love, the command line, and am teaching myself how to write shell completion scripts.

Second-born: yarn-shell-completion! https://github.com/ursine-code/yarn-shell-completion

Big h/t to @Felipe Contreras on YouTube for inspiring me to delve into this area of engineering and making it easy(-ish) to get into!
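If you’re curious what a completion script involves, here’s a minimal bash sketch; it is not the actual yarn-shell-completion code, and the hard-coded command list is purely illustrative:

# Suggest a few common yarn subcommands for the word right after `yarn`.
_yarn_sketch_completion() {
  local cur="${COMP_WORDS[COMP_CWORD]}"
  if [ "$COMP_CWORD" -eq 1 ]; then
    COMPREPLY=( $(compgen -W "install add remove run build test" -- "$cur") )
  fi
}
complete -F _yarn_sketch_completion yarn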



Upcoming Project: JIRA Worklogs Microsoft Teams Bot

I’ve been working on a concept over the last month to explore a different way of entering time reports on Fridays with Atlassian’s JIRA Worklogs module.

To exercise my system-design thinking and writing muscles, I put together a design for my solution. The solution runs locally, but it isn’t working with an actual live Teams instance yet. Debugging the Microsoft Teams Apps Marketplace is… painful.

Code is here, demo coming soon!

TL;DR – It’s Not As Bad As They Say!

“Contrary to conventional wisdom, the 2023 ABS (which produced 2022 data) found that adoption of technology, including AI, did not change overall worker numbers.

Businesses most often reported their “number of workers did not change overall” between 2020 and 2022 after adopting any of the five technologies the ABS tracked: AI, specialized software, robotics, cloud-based tech or specialized equipment.”

Cutting through all the AI hype, a take from the US Census Bureau’s recent Annual Business Survey report.

A Team of One Plus Two

I thought LLMs were just Copilot.
This week I discovered agents — and it changed how I code.

Instead of autocomplete, I had Claude Sonnet 4 troubleshooting bugs with me in real time. I redirected it when it drifted, or followed when it made sense. At one point, I asked it to output all its decisions into Markdown — suddenly I had a log of learnings I could share with the team.

A coworker spun up a Model Context Protocol (MCP) server, so we could all mine knowledge about our project from hundreds of Confluence pages. Suddenly I wasn’t a solo engineer anymore; I was a team of 1 human + 2 agents.

The result? Faster debugging, better knowledge capture, and new side-project ideas. I’m thinking: how could I implement an MCP server to mine Google Drive folders? Or OneDrive? And hook them into apps or blogging platforms like this one (WordPress)?

So many possibilities! I’m excited! Hoohaa.

Test Runners in Multi-Stage Docker Builds

While working with Docker builds, you may already be familiar with building images and creating containers:

$ docker build --build-arg FOO=BAR --build-arg FOUX=BARS -t my-docker-image:latest .
$ docker run -d -p 8088:8080 my-docker-image

This would build and create a container for the following Dockerfile:

# Stage 1: install all dependencies and build the app
FROM node:20-slim AS builder
WORKDIR /app
COPY . .
RUN yarn install --immutable --immutable-cache --check-cache
RUN yarn build:prod

# Stage 2: reuse the builder's workspace; running this image executes the
# integration tests (used by the CI example below)
FROM builder AS test
CMD ["yarn", "test:integration"]

# Stage 3: lean production image with only runtime dependencies
FROM node:20-slim AS final
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./package.json
COPY --from=builder /app/yarn.lock ./yarn.lock
RUN yarn install --production --frozen-lockfile
ENTRYPOINT ["node", "./dist/src/server.js"]

Need to run just the test layer in CI?

Enter the --target flag:

$ docker build --target=test -t my-api:test .

The --target flag builds the Dockerfile up to and including the specified stage and then stops, skipping later stages such as the final production image. This is ideal when you only want to run tests in a CI/CD pipeline without building the full production image.

CI/CD Example

- name: Build Docker image for testing
  run: docker build --target=test -t my-api:test .

- name: Run integration tests in container
  run: docker run --rm my-api:test

Benefits of this approach

  1. You maintain clean image separation and avoid adding test-only dependencies to the production image.
  2. Caching becomes smoother, since your pipeline caches the shared build layers and production builds skip the test stage entirely (see the note after this list).
  3. Debugging becomes clearer when the integration tests (or whatever test suite you run) fail, since the test logs are isolated from the app runtime.
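One caveat on the caching point: skipping the test stage in production builds relies on BuildKit, which only builds the stages the target actually depends on (the legacy builder walks every stage in file order). BuildKit is the default in recent Docker releases, but if you’re not sure what your CI runner uses, you can force it:

# Force BuildKit so the default (final) target only builds builder + final,
# leaving the test stage out of production builds entirely.
DOCKER_BUILDKIT=1 docker build -t my-api:latest .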

“… When You Get A Chance?”

Before I went on summer vacation from work, I was asking a coworker for help with an urgent PR. I needed to merge the frontend component in order to get the whole feature out. (It was a full-stack project.)

I shot off “when you get a chance” at the end of the message.

And I waited. And waited. And waited.

When I checked back to see whether they had read the message, that little qualification, the throwaway phrase, glared at me.

And it hit me. Well… I said to him, when you get a chance. So, right now he can’t look at it.

So, I had sent two messages in a single message: this is urgent, and WHEN YOU GET A CHANCE, NO PRESSURE.

I went back and edited the message, apologizing for my lack of clarity and the double messaging. Then I made it clear that I would appreciate his help as soon as he could get to it.

I got the approval I needed to move forward.

A really fascinating moment in how we communicate and how throwaway phrases can betray our intentions.

The CI Failure That Could Have Cost Us 42,000 SEK/Year — Until a 2-Line Fix

This week, I merged a PR into develop that looked clean locally — only to have it fail in CI after merge.

The reason?
My dev build passed, but neither my local workflow nor our feature-branch pipeline attempted a production build before committing or merging.
A small TypeScript mismatch slipped through — and CI caught it only after it was merged.

Root cause

A type import didn’t get checked in the local dev build.

tsc --build in production mode caught it, but that step wasn’t part of our local workflows.

Our develop branch builds from scratch using yarn build:prod.

Fix

I added a yarn build:prod step to our Husky pre-commit hook.
Now, a failing prod build blocks the commit before it even hits GitHub.
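For reference, here’s a minimal sketch of what that hook can look like with Husky; yarn build:prod is the line from this post, while the lint-staged line is just an assumption about what else a pre-commit hook might run:

#!/usr/bin/env sh
# .husky/pre-commit (sketch): block the commit if the production build fails
yarn build:prod || exit 1
npx lint-staged || exit 1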

Impact — Quantified

1. Engineer Time Lost to Debugging

  • On average, 1 engineer loses ~30 min diagnosing an unexpected CI failure
  • Happens ~2x/month in teams of 3–5
  • = 1 hour/month × 1,500 SEK/hour = 1,500 SEK/month

Annualized: 18,000 SEK/year just in lost debugging time

2. Reduced CI Re-runs

  • ~5 avoidable failed builds/month × ~15 min of rework each
    → ~2,000 SEK/month ≈ 24,000 SEK/year

3. Compute Cost Reduction

  • ~75 CI minutes/month avoided
    → 4.5 SEK/month = ~54 SEK/year

Total estimated savings: ~42,054 SEK/year from a 2-line config change.

____

So, the earlier the feedback, the cheaper the fix. If you’ve been relying on dev builds alone before merging, double-check what your CI is really doing. You might catch more with less.
