Automating OpenAPI Spec Validation with Gradle and Spring Boot

Recently, the front-end team I'm on saw how swamped the back-end team was. During the sprint retro, we said we wanted to help out in any way we could, so with their help we got set up in IntelliJ and in the back-end's Spring Boot application. My team ended up getting wrapped up in our main projects, but I put aside some time on my own to look at the back-end's backlog. After touching base with them and seeing where I could best help, I picked up a technical debt ticket around automating the validation of the OpenAPI spec.

It was really a great adventure and learning opportunity! I had never worked with Gradle or Spring Boot in any professional capacity before, but I know my way around Gradle now. 😀

The requirements were:

  • the validation would be automated
  • it could be integrated into the CI/CD flow with GitHub Actions

At first, I fell down a deep rabbit hole trying to implement a custom Gradle Task in plain Java. The Gradle 8.4 documentation actually pointed people toward writing them in Groovy or Kotlin, but I must have hit a random blog through a Google search and gotten stuck there. I'm sure it could have worked, but I had already blown past my initial timebox.

Then, I tried to solve it the simplest way, by using the springdoc-openapi Gradle plugin. But that required running the application, and we wanted to avoid that in the CI/CD flow.

Finally, I went back to the original concept of custom Gradle Tasks, but kept it simple with plain Groovy. Then I found the winning combination: a blog post suggested validating the OpenAPI spec in test cases. The final solution was as follows:

  1. Write a custom Gradle task in plain Groovy that executes a test case.
  2. The test case generates the OpenAPI spec from the code.
  3. Execute a local shell script that loads an npm library to validate the generated spec.

It’s lightweight, it can run easily in a CI/CD flow, and implementing it with straight Groovy keeps the `build.gradle` file light.

“Final product!”

I added the custom task to the gradlew command in the appropriate GitHub Actions workflow file.
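For illustration, here is a minimal sketch of what that workflow step could look like. The file name, job layout, and action versions are assumptions for the example, not our actual pipeline; the only essential part is that the test task runs before the validation task so the spec file exists.

# .github/workflows/build.yml (illustrative names and versions)
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: 'temurin'
          java-version: '17'
      # Run the extractor test first so the spec file exists, then validate it
      - name: Validate OpenAPI spec
        run: ./gradlew test validateOpenApiDocs

Locally, the same check can be run with ./gradlew test validateOpenApiDocs.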

I added the extraction.api-spec.json and springdoc.api-docs.path properties to the application.properties file.
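For reference, here is a hedged example of what those entries might look like. The values are illustrative: the spec file name just needs to match what the shell script validates, and /v3/api-docs is springdoc's default docs path.

# application.properties (example values)
extraction.api-spec.json=docs.json
springdoc.api-docs.path=/v3/api-docs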

build.gradle

// Assumes Jackson is available to the build script, with these imports at the top of build.gradle:
//   import com.fasterxml.jackson.databind.ObjectMapper
//   import com.fasterxml.jackson.databind.JsonNode

tasks.register("validateOpenApiDocs", Exec) {
    group = "documentation"
    description = "Validates the locally generated OpenAPI spec"

    // Capture the validator's JSON output so it can be inspected in doLast
    def stdout = new ByteArrayOutputStream()
    standardOutput = stdout
    ignoreExitValue = true

    commandLine './validate-docs.sh'

    doFirst {
        println "Validating generated OpenAPI docs..."
    }

    doLast {
        ObjectMapper mapper = new ObjectMapper()
        JsonNode taskResult = mapper.readTree(stdout.toString())
        if (!taskResult.get("valid").asBoolean()) {
            println "FAILED"
            println taskResult.get("errors")
            // Fail the task so an invalid spec breaks the CI/CD flow
            throw new GradleException("OpenAPI spec validation failed")
        } else {
            println "OpenAPI spec validation passed!"
        }
    }
}

validate-docs.sh

#!/bin/bash
# Validates the generated spec (docs.json) with an npm-based OpenAPI schema validator
npx -p @seriousme/openapi-schema-validator validate-api docs.json

ApiSpecJsonFileExtractor.java

@SpringBootTest
@ActiveProfiles("test")
public class ApiSpecJsonFileExtractor {

  @Value("${extraction.api-spec.json}")
  String filename;

  @Value("${springdoc.api-docs.path}")
  String apiDocJsonPath;

<snip>

  MockMvc mvc;

  @BeforeEach
  public void setup() {
    mvc = MockMvcBuilders.webAppContextSetup(context).apply(springSecurity()).build();
  }

  @Test
  void extractApiSpecJsonFile() throws Exception {
    // Make sure the target file (and its parent directory) exists before writing to it
    File file = new File(filename);
    Path filePath = file.toPath();
    if (file.exists()) {
      Assertions.assertThat(file.isFile()).isTrue();
    } else {
      Path path = file.getParentFile().toPath();
      if (Files.notExists(path)) {
        Files.createDirectory(path);
      }
      if (Files.notExists(filePath)) {
        Files.createFile(filePath);
      }
    }

    // Request the springdoc-generated OpenAPI JSON and write the response body to the file
    mvc.perform(MockMvcRequestBuilders.get(apiDocJsonPath))
        .andDo(
            result ->
                Files.write(filePath, result.getResponse().getContentAsString().getBytes()));
  }
}

Mastering TypeScript Errors

Despite working with TypeScript professionally since 2019, I've been approaching TypeScript errors with a great deal of trepidation and fear. There's so much information being returned. Although it's not as voluminous as a Java stack trace, I've struggled to make sense of TypeScript error traces and to know how to parse them. Most of the time I can resolve errors with varying amounts of brute force or logic, but sometimes a TypeScript error trace really leaves me clueless.

So, this week presented me with a golden opportunity to face some fears and turn trepidation into budding confidence. I was working with the react-select package and some custom components in the codebase. I was making some modifications and I was confronted with the Great Wall of Error.

Something like these…

[screenshots of the TypeScript error traces]

Instead of freaking out at the Great Wall, I decided to break it down and make sense of it. I need to spend more time on the TypeScript website because the documentation on understanding errors is really excellent.

This part really started to solve the primordial puzzle for me:

“Each error starts with a leading message, sometimes followed by more sub-messages. You can think of each sub-message as answering a “why?” question about the message above it.” (emphasis mine)

If we take the second screenshot as an example, the way to parse the error trace seems to be as follows:

  1. TypeScript will highlight the offending line with a red squiggly line.
  2. The top message in the error trace is the leading message. In this case, it's Type ‘Record<string | number | symbol, string | string[] | Record<string | number | symbol, string>>’ is not assignable to type ‘RecursiveObject<string | number>’.
  3. Why? Because Index signatures are incompatible.
  4. Why? Because Type ‘string | string[] | Record<string | number | symbol, string>’ is not assignable to type ‘RecursiveProperty<string | number>’.

And so on.
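To make this concrete without the screenshots, here is a small TypeScript sketch that produces a similar trace. The RecursiveObject and RecursiveProperty definitions are my own reconstruction for illustration, not the actual types from the react-select setup in our codebase:

// Assumed, simplified definitions of the recursive types named in the error
type RecursiveProperty<T> = T | RecursiveObject<T>;

interface RecursiveObject<T> {
  [key: string | number | symbol]: RecursiveProperty<T>;
}

// A value whose index signature also allows string[]
const overrides: Record<
  string | number | symbol,
  string | string[] | Record<string | number | symbol, string>
> = {
  color: "red",
  sizes: ["sm", "md"],
};

// Leading message: Type 'Record<string | number | symbol, ...>' is not assignable to type 'RecursiveObject<string | number>'.
//   Why? Index signatures are incompatible.
//     Why? Type 'string | string[] | Record<string | number | symbol, string>' is not assignable to type 'RecursiveProperty<string | number>'.
const themed: RecursiveObject<string | number> = overrides;

In this sketch, the culprit is the string[] member of the union, which can't satisfy RecursiveProperty<string | number>; each sub-message answers a "why?" about the message above it until the trace reaches that member.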

I'm going to take these insights into my work and side projects next week.

So, thank you TypeScript website!

More About Code Reviews, Impostor Syndrome, and Growing from Code Reviews

Following on from my last post, I saw a co-worker gracefully apply the principles of sound code reviews. It was comforting to see someone else on the same path as me. I observed myself while reading his comments. I didn’t feel attacked or questioned. His tone was curious and thoughtful.

This week I've observed that seeing code reviews as a part of engineering work, equal in value to coding and delivery, is starting to influence me. Upon submitting pull requests, my thinking used to be:


This meets the requirements.
This is as elegant and succinct as I know how to make it.
This passes tests.


Then, I would feel ashamed when comments came in asking, did you consider this? Why did you choose that? My internal dialogue became: I should have known this.

But the truth is that I often don't have the full picture of the entire codebase in my head, nor another engineer's greater experience or unique insights. I only know what I'm operating on right now… if I'm not growing.

It got me thinking about a recent newsletter by The Hybrid Hacker entitled Dealing with Impostor Syndrome in the Engineering World. It was an uncomfortable, unsettling read in that it hit home deeply. I put exceedingly high expectations and requirements on myself to have thought of everything before a code review. And that's simply impossible.

However, I can start to learn what my coworkers pick up on and ask questions about. Mining that can make me more thoughtful and compel me to improve. Learn more. Explore more. Skill up.

My wish with my new role was that I would get to work with senior engineers, to really level up my game, to sharpen steel against steel. And that has been fulfilled. For that, I’m grateful!

Getting Code Reviews Right

This is a few days late, but I wanted to write it anyway to keep the discipline and routine going.

I subscribe to several really high-quality newsletters and one post last week was of immense use. It was Exactly What to Say in Code Reviews by Jordan Cutler.

Getting the tone, content, and style right in code reviews is a significant part of collaborating in engineering teams. It involves many trial-and-error iterations, a lot of self-reflection, and an understanding of the culture of the team you’re on.

So, Jordan’s post was like getting a well-constructed, time-tested, and solid toolbox for Christmas, for this part of engineering work.

I applied some of the principles while doing code reviews last week. I actually brought up the post and put it side-by-side with the pull requests.

Judging by my coworkers’ replies and comments, I knew that I had carried out some effective strategies.

Thank you, Jordan!

Examining What Is Important To Work On In Software Projects

I observed several vectors acting on my thinking this week. It's important to focus on the product and the overall mission, as I said last week; that's the main vector. But then you look at the active sprint and at what's left: low-priority tasks sitting there as low-hanging fruit. A second vector comes in: I want to work on something that is valuable to the team and organization, something that advances the main vector.

So, I look at the backlog and at the bigger tasks not prioritized in the current sprint. There are still tickets there that are needed to get the release over the finish line: bigger tasks like API integrations or outstanding features. So you weigh up the first two vectors: something valuable, preferably not low-hanging fruit, and something that advances the product. By my own internal logic, picking up something from the backlog makes sense.

And this is where internal logic is not unsound, but runs up against a third important vector: team norms and culture. What does the team do? What is accepted on the team? There, internal logic, even if sound, can come up short against team norms. In some teams it's acceptable, and even encouraged on some level, to pick up tasks from the backlog. In others it's not, if the task wasn't included in the initial sprint planning or added later during sprint refinement.

There is no actionable method on offer here. Thinking through things, as you navigate a new role and organization, will help you identify the important vectors at play and your own internal logic.

First Week Thoughts

First full week at new role behind me. Really grateful to jump into modern, well-architected, and advanced codebases.

The technologies I’ll be delving into on my own time and getting comfortable with:

1) Formik

2) React-Query

3) Yup schema validation

4) Styled components

5) Advanced TypeScript generics

Two things that I have been doing and practicing, as I navigate the new role:

1) Keeping product-first thinking in selecting tasks to work on. What creates the most value? What does the team need done to get over the launch line? What helps the product the most? This requires a lot of communication and starting to build relationships.

2) Pushing myself out of the comfort zone by picking non-trivial tasks to complete. It may be nice to get a little thing done to get my feet wet, but I learn more about the products and the team by working on something a little more complex, such as a bug fix or a new feature.

New Role After a Difficult Job Search!

I started a new role today at Cabonline Group. Excited to be part of the team and look forward to working with everyone!

It was an extremely difficult job search process, in one of the toughest job markets I've ever encountered in my career. A lot of pre-conceived ideas about my position in the market, my craft, and who I am as an engineer were either refined in fire or completely obliterated.

Key lessons after this most recent period:

  1. Interview prep is now more important than before, in a market driven by employers.
    • Don't eschew data structures and algorithms: get good at them, get good at solving problems with them, and get good at communicating your process as you solve them.
  2. System design is important for senior roles. The real interviews may not look like your prep, but being ready for them is essential.
  3. Our feelings as engineers about coding tests are not important. The market and employers are not listening to us about them. They will continue to use them, so get used to them and get good at them. See the first point.
  4. Applicant tracking systems (ATS) are here to stay. Understand how the software analyzes resumes and play the game.
  5. You don’t know how companies scrutinize applicants. The criteria differ from employer to employer. Accept it and be prepared. Some favor open source contributions, while others value code tests. Some look for a stellar behavioral interview. You never know. I learned this from talking to Amaechi on Codementor.
  6. The most effective and useful feedback will come from people you hire or you don’t know. Seek it out and listen to them intently. Feedback from Christoph really changed my thinking and transformed my job search. I’m really indebted to him. Without his insight, I wouldn’t have gotten this job!

We are tribal beings, and an impromptu tribe of people helped me in landing this new role. I want to thank them because I am indebted to all their help, insight, guidance, time, mirroring, and experience.

Adam Castle, Yury Vinter, Christoph Nakazawa, Justin Bartlett, Harry Clayton Cook, Brett Hardmann, Alexey Bykov, David Stephenson, Amaechi Johnkingsley

So, thank you!

An Examination of the Private, Personal, and Public

I was planning an essay to post on here, about someone from a previous employer who was really formative in my personal growth and professional development over the last 7 years. But I struggled to finish the essay. Actually, I couldn't write much more than sentence outlines.

This left me puzzled, and I wondered what could be holding me back. After some digging, I realized it was because I would need to share some private details to convey the extent of this person's impact on me. I considered sharing those details. What could go wrong? This is how it is now: people just share, and it's acceptable on social media, even on LinkedIn.

I decided against sharing those details and posting the essay on LinkedIn. A few ideas emerged as the main reasons why.

The first one was that I didn’t want to set the precedent for myself that sharing private details is part of my activity on LinkedIn. I don’t believe in constructing a work-friendly persona of myself. Ain’t nobody got time for dat! Anouk Pappers puts this in these terms:

“On the other side of this dichotomy, people usually use “professional” presence to refer to a scrubbed, work-approved persona. But this too is not realistic. We shouldn’t present ourselves as someone we are not, or even express inauthentic views, just to fit into a particular work culture. I think it is becoming increasingly important that we be our authentic selves online and that we position ourselves in a genuine manner. In essence, we need to establish and maintain an online presence across all of our accounts that accurately reflects who we are and how we want to be perceived.”

Rather, I didn't want to turn my vulnerability into a currency to trade with, in the hope that the trading brings in 'income' later. I remember reading a Facebook post (in Arabic) a few months ago about a TikTok vlogger who had converted his overweight state into videos that brought in money for him. The post lamented that this is all it is for him now: to eat and show off his physique on TikTok. I thought of another TikTok vlogger, who has become his signature dance: a dance followed by showing off his afro. Is that all they are now, their 'products'? I am nowhere near their reach or fame, but I am close to them in that I could easily transform my inner life into some form of product.

The second reason was that I reflected on the differences between private, personal, and public information. I remembered a conversation I had years ago with a journalist in Sweden, who taught me the distinction between the three. From his training, private information is like the contents of your journal: material for your eyes only, which others may not have the context to understand. Personal information is where you write about your experiences, but in a way that resonates with others; think of talking to friends about your experiences and showing them that they can relate. Public is news, commercial texts, and legal texts. No emotion. "Just straight facts," in my friend's words.

Given this model, what I wanted to share in the essay is private information. I would extend his definition of private to include those also in my inner circle. Personal is what I’m prepared to share with friends and perhaps some at work, while public is whatever I post online.

The line between private and personal has been blurry for me for a while. I have written vulnerable essays on Facebook that I have set to public. Was that really a sound thing to do? The answer to that question didn’t strike me with much confidence. I wouldn’t say that I regret posting those essays, but now there may be archived pages on the Wayback Machine. 

The food for thought for all of us is, what are you prepared to have the Machine index? 

George Couros puts it in another jaunty way, quoting Seth Godin:

“Everything you do now ends up in your permanent record. The best plan is to overload Google with a long tail of good stuff and to always act as if you’re on Candid Camera, because you are.”

This pithy quote gets to the heart of my objection, that I don’t want everything on my permanent record.

The final reason was that there was no way to write the essay without these private details. Leaving them out would make the essay cryptic, and then there’s the danger of cryptic-posting in order to get people asking for more in the comments. Or, writing around the details would make the essay harder to understand. Then, what is the point of even posting it?

I found this quadrant diagram while doing the thinking for this post. It takes the models put forward by George Couros and Anouk Pappers, cited above, a step further.

When I analyzed my essay idea, the core idea (the guy I wanted to celebrate) was in the green quadrant. But the meat of the essay lay in the red quadrant, and I struggled to make the case for moving it to the yellow quadrant.

Thinking holistically – that is, engaging my emotions while activating reason – is helpful in evaluating what I put on the public record.

#ProfessionalDevelopment #CareerAdvice

The Curse of Over-Engineering

We have two half days (affectionately known as Freaky Fridays) at work a month, where we get to work on our own projects or explore something new. Yesterday was dedicated to exploring Angular 2.

I thought that it might be a good opportunity to test it out in a real-world application – my Flashcards app to help me learn Swedish vocabulary.

The syntax isn't too dissimilar from AngularJS, but it comes wrapped in more boilerplate code. Getting a basic app going by following the Quickstart Guide wasn't too difficult.

“Look at where you have to be.”

Then, there was a knock at the door. It was Earl, the Grim Reaper of Over-Engineering. He asked me to remember that I am a software engineer and that everything has to be TIP TOP from build one.

So, the curse kicked in and I started scrambling to get the basic setup working with Webpack. I tried cramming in a Webpack tutorial alongside the Angular 2 tutorial. Soon, it just became about cursing the day Webpack was built and racing through Stack Overflow, hoping someone else had written something to make everything PERFECT now.

10 minutes before the end of the day, I realized… wait.

The goal was to learn Angular 2 and use it to build an app.

That’s it.

I told Earl that there was another engineer across the street, about to do something simple. He scurried away.

I gutted out the Webpack configuration and stuck with the lite-server package suggested in the Quickstart guide.

Moral of the story: fuck Earl and fuck over-engineering.

 

Learning How to Work in a Team

I am working on a new feature for one of our microservices. It's a medium T-shirt-sized task that involves working with AngularJS's ui-router, new API endpoints, and writing some CSS from scratch. I'm excited! And a little daunted…

To work against being overwhelmed and becoming unproductive, I focused on tackling the hardest part first – the routing and views. I knew that I was going to work with ui-router, so I read through a few tutorials and brushed up on routing in AngularJS.

I then quickly put together some mock views to connect to the new states and routes. This felt better than starting with markup and styling; I had to remove the unknown first.

The tutorials only got me so far, so I stopped and thought about it. I did some searches on Google. After a few iterations of this cycle, I reached out to a coworker. Instead of telling him it's broke, give me the codes!, I explained what I had done, what I was trying to achieve, and what wasn't happening as I expected. Rather than him coming over to help me google, it turned into a discussion about patterns and structuring code, and a brief pair-programming session to get something working quickly. I even got some praise that my initial concept was good and that I just needed to find the right balance between sound design and time spent on the solution.

I thought of this article after the whole discussion with my coworker.
