Milestone: AI Chapter Champion at Sinch!

My work on spec-driven development with Claude Code was selected for Sinch’s monthly AI Champions showcase, representing one of five chapters across the company.

Really, a career milestone!

I used Claude Code to help build a new component for our design system. The approach was to write a plain-language, prescriptive spec before touching any code: behavior constraints, design tokens, an implementation checklist, and explicit instructions to follow existing codebase conventions rather than invent new patterns.
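To give a sense of what that looks like, here is a hypothetical excerpt in that style (the component name, tokens, and checklist items are invented for illustration; this is not the actual spec):

```
Component: InlineBanner

Behavior constraints:
- Render a close button only when an onDismiss handler is provided.
- Never manage visibility internally; the parent owns that state.

Design tokens:
- Use color.background.info and color.text.primary; never hard-code hex values.

Implementation checklist:
- [ ] Reuse the existing Icon and Button primitives.
- [ ] Follow existing prop-naming conventions; do not invent new patterns.
```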

Result – a design system component generated in ~30 minutes, with another 30 minutes of fixes. MR review comments were about missed requirements, not code style. That’s a meaningful shift in where human attention goes during review.

Next Steps – formalizing the techniques I generalized from this task and looking into sharing them with the wider community.

The bigger takeaway so far: the quality ceiling for AI-assisted engineering is set by the spec, not the model. Prescriptive plain-language constraints dramatically outperform ad-hoc prompting.

static-deploy-kit: A Reusable CI/CD Framework for Next.js

I’ve open-sourced static-deploy-kit, a production-ready CI/CD framework that handles the complete deployment lifecycle for Next.js projects. It supports automatic semantic versioning, PR preview environments, and SFTP deployment with instant rollback capability.

Announcing static-deploy-kit

Features

– Three deployment contexts: production, releases, and PR sandboxes

– Automatic version bumping from PR markers (`[major]`, `[minor]`, `[patch]`)

– Smart test skipping for infrastructure-only changes

– Preview URLs automatically posted to PRs

– Backup-first deployment with symlink-based rollback
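To make the backup-first, symlink-based rollback model concrete, here is a minimal shell sketch (the directory layout and variable names are assumptions for illustration, not static-deploy-kit's actual structure):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical layout; in production this might be something like /var/www/myapp.
DEPLOY_ROOT="${DEPLOY_ROOT:-$(mktemp -d)}"
RELEASES="$DEPLOY_ROOT/releases"
CURRENT="$DEPLOY_ROOT/current"

# Each deploy gets its own timestamped directory; old releases stay on disk.
new_release="$RELEASES/$(date +%Y%m%d%H%M%S)"
mkdir -p "$new_release"

# Copy the exported static build (e.g. Next.js `out/`) into the release dir.
# A placeholder file stands in for the build so the sketch runs anywhere.
echo "build artifact" > "$new_release/index.html"

# Atomic switch: repoint the "current" symlink at the new release.
ln -sfn "$new_release" "$CURRENT"

# Rollback is just repointing the symlink at a previous release dir:
#   ln -sfn "$RELEASES/<previous-timestamp>" "$CURRENT"
```

The web server only ever serves from `current`, so both deploy and rollback reduce to a single symlink swap.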

**Call to Action:**

Check out the repository, star it if useful, and feel free to open issues or PRs.
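For context on the version-bump markers, the scheme is simple enough to sketch in a few lines of shell (a hypothetical illustration, not the kit's actual implementation):

```shell
# Hypothetical sketch: derive the next semantic version from a PR title
# containing [major], [minor], or [patch]. Not static-deploy-kit's real code.
bump_version() {
  local version="$1" pr_title="$2"
  local major minor patch
  IFS=. read -r major minor patch <<< "$version"
  case "$pr_title" in
    *"[major]"*) echo "$((major + 1)).0.0" ;;
    *"[minor]"*) echo "$major.$((minor + 1)).0" ;;
    *)           echo "$major.$minor.$((patch + 1))" ;;  # [patch] or no marker
  esac
}
```

For example, `bump_version 1.2.3 "feat: new API [minor]"` prints `1.3.0`.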

Building a Route Optimization Engine in 4.5 Hours: An LLM-Assisted Hackathon Post-Mortem

This is a technical post-mortem analyzing constraint-driven development with AI coding assistants.

What This Document Covers

This is a detailed analysis of building production-grade code under time constraints:

  • Constraint design for AI coding assistants
  • Decision-making process during rapid development
  • AWS Location Service API integration
  • Performance optimization and UX decisions
  • Honest assessment of what worked and what didn’t

Estimated reading time: 15-20 minutes

Who This Is For

  • Engineers evaluating AI coding assistants for production work
  • Technical leads designing development workflows with LLMs
  • Anyone building route optimization or mapping systems
  • Developers interested in constraint-driven development approaches
  • Teams looking to understand when AI assistance helps vs. hinders

The document includes the actual constraint prompt we used, architectural decisions with reasoning, and specific examples of where the LLM excelled (API integration) and where it failed (domain knowledge, architectural vision).

Questions or improvements? Open an issue on the repo or e-mail me.

Office Hours: Ask a Senior Engineer on Fridays from 9am to 3pm CEST

I’m starting something simple to help fellow engineers.

Over the last few months, people have reached out with questions about direction, strategy, and “what actually matters” when trying to get hired in Europe/Sweden, mostly in response to an off-the-cuff post from the summer (the Hug-a-Platform-Engineer one).

I’m actually grateful that some of the humor got lost in translation! It opened up an opportunity to do something new.

So, I’m starting a weekly series: Office Hours: Ask a Senior Engineer on Fridays from 9am to 3pm CEST.

If you’re a junior engineer, a career-switcher, mid-and-stuck, senior-and-stuck, or just trying to break into engineering in the EU, drop one question below.

Anything about:
* skill focus
* job search strategy
* what EU/Sweden roles really require
* technical direction
* getting unstuck

To make it easier, here’s an example:

Question
“I’m a platform engineer outside of Europe. How do I get noticed by EU teams?”

Mini-answer
Make one small public project that demonstrates reliability work — something like a simple service with a playbook + basic telemetry. Show deployment with CI and basic secrets management. Hiring managers trust what they can see, and you can build credibility in 2–4 weeks this way.

So, that’s it!

What’s one question you want clarity on?

I’ll pick one each week, and I’ll answer it honestly and practically — no grift, no sales pitch.

Verifying Android Bundles Before Publishing

I posted last week about writing some scripts to sanity-check and validate Android bundles before uploading them to the Google Play Store (after having a few rejected!).

I’ve made the scripts – one for AABs and one for APKs – open source as public gists.

curl -s https://gist.githubusercontent.com/minademian/71a5a3d0243496ce6a2a49956c01e4cd/raw/05189c8da3c8833595546a68325a3617cbd1944f/verify_aab_release.sh | bash
curl -s https://gist.githubusercontent.com/minademian/6e841c8e1a84308c6b3dc937a2d4a4cd/raw/4768df88d3d3111c134700c0637ed030a97af3a9/verify_apk_release.sh | bash

You can take it one step further and add the check to your pre-commit hook, using the hook-management tooling of your choice.

It will look like this:

yarn test || exit 1
bash ./scripts/verify_aab_release.sh || exit 1 # or run it via a package.json script: "bash ./scripts/verify_aab_release.sh"
yarn validate-packages || exit 1
npx lint-staged || exit 1

Meet My Second-Born: yarn-shell-completion!

Continuing my foray into OSS, I’ve been spending more time with my first (tech) love, the command line, and am teaching myself how to write shell completion scripts.

Second-born: yarn-shell-completion! https://github.com/ursine-code/yarn-shell-completion

Big h/t to @Felipe Contreras on YouTube for inspiring me to delve into this area of engineering and for making it easy(-ish) to get into!
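For anyone curious what a completion script even looks like, here is a minimal bash sketch for completing yarn subcommands (illustrative only, with a hard-coded subcommand list; the real yarn-shell-completion does much more):

```shell
# Minimal bash completion sketch for `yarn <Tab>`.
# Not the actual yarn-shell-completion code.
_yarn_sketch() {
  # The word currently being completed.
  local cur="${COMP_WORDS[COMP_CWORD]}"
  # A hard-coded subset of subcommands; a real script would also
  # discover scripts from package.json dynamically.
  local subcommands="add remove install run init upgrade"
  # compgen filters the word list down to matches for $cur.
  COMPREPLY=( $(compgen -W "$subcommands" -- "$cur") )
}

# Register the function so bash calls it when completing `yarn`.
complete -F _yarn_sketch yarn
```

Typing `yarn ins<Tab>` would then complete to `install`.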