Milestone: AI Chapter Champion at Sinch!

My work on spec-driven development with Claude Code was selected for Sinch’s monthly AI Champions showcase, representing one of five chapters across the company.

Really, a career milestone!

I used Claude Code to help build a new component for our design system. The approach was to write a plain-language prescriptive spec before touching any code. It covered behavioral constraints, design tokens, an implementation checklist, and explicit instructions to follow existing codebase conventions rather than inventing new patterns.
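To make the structure concrete, here is a hypothetical skeleton for such a spec. The component name, token names, and checklist items below are invented for illustration, not taken from the actual spec:

```
# Component Spec: <component name>

## Behavioral constraints
- Must support keyboard navigation (Tab/Shift+Tab, Escape to dismiss)
- No new external dependencies

## Design tokens
- Use existing spacing/color tokens only (e.g. `--ds-spacing-md`, a made-up name)
- Do not hard-code pixel values or hex colors

## Implementation checklist
- [ ] Follow the prop-naming conventions of neighboring components
- [ ] Reuse existing utility helpers instead of writing new ones
- [ ] Colocate unit tests per repo convention

## Conventions
- Follow existing codebase patterns; do not invent new abstractions
```

The point of this format is that every section is a constraint the model can be checked against, rather than an open-ended request.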

Result: a design system component generated in ~30 minutes, with another 30 minutes of fixes. Merge request (MR) review comments were about missed requirements, not code style. That’s a meaningful shift in where human attention goes during review.

Next steps: formalizing the skills generalized from this task and exploring ways to share them with the wider community.

The bigger takeaway so far: the quality ceiling for AI-assisted engineering is set by the spec, not the model. Prescriptive plain-language constraints dramatically outperform ad-hoc prompting.

Building a Route Optimization Engine in 4.5 Hours: An LLM-Assisted Hackathon Post-Mortem

This is a technical post-mortem analyzing constraint-driven development with AI coding assistants.

What This Document Covers

This is a detailed analysis of building production-grade code under time constraints:

  • Constraint design for AI coding assistants
  • Decision-making process during rapid development
  • AWS Location Service API integration
  • Performance optimization and UX decisions
  • Honest assessment of what worked and what didn’t

Estimated reading time: 15-20 minutes

Who This Is For

  • Engineers evaluating AI coding assistants for production work
  • Technical leads designing development workflows with LLMs
  • Anyone building route optimization or mapping systems
  • Developers interested in constraint-driven development approaches
  • Teams looking to understand when AI assistance helps vs. hinders

The document includes the actual constraint prompt we used, architectural decisions with reasoning, and specific examples of where the LLM excelled (API integration) and where it failed (domain knowledge, architectural vision).

Questions or improvements? Open an issue on the repo or email me.