The Executive’s Guide To Current Engineering Best Practices

Software engineering best practices change fast.

Staying up to date can feel like a full-time job.

In this article I will give an executive summary of modern, generally accepted best practices for:

  1. Software Development Process
  2. Systems Architecture

Software Development Process

Agile Philosophy

One of the greatest shifts in software development process over the last few years has been the rise of agile methodologies.

At its core, the agile philosophy prioritizes iterative, feedback-driven software development, which allows organizations to quickly adjust their product direction based on customer feedback. This ultimately leads to a more efficient deployment of resources than older methodologies such as waterfall.

However, outside the core values of iterative, feedback-driven development, agile does not prescribe any particular development process.

This leaves each organization free to decide how to implement the agile philosophy. Two frameworks that have gained industry-wide popularity and acceptance are Scrum and Kanban.

Scrum

At a high level, teams in the Scrum system commit to shipping batches of work or features in set two-to-four-week intervals called sprints.

A designated individual called the Scrum Master works with technical and product leaders to identify and flesh out tasks that the team will commit to completing in a given sprint.

The Scrum system optimizes for fast, iterative development through its short delivery cycles while simultaneously allowing teams to make clear commitments about their work and direction.

Read more about the details of Scrum from Atlassian.

Kanban

While Scrum focuses on defining and executing work in distinct intervals, Kanban prioritizes a free-flowing execution model.

Rather than committing to work up front for a given period, team members take on new pieces of work as capacity becomes available. The entire team then bears responsibility for defining work priority and choosing the execution path.

This framework optimizes for iteration and feedback at a daily granularity. As a result, it is often ideal for teams with very limited advance knowledge of their work’s scope.

If you want to learn more about the inner workings of Kanban, check out Atlassian’s resources.

Systems Architecture

Cloud

The biggest change of the last decade in software architecture has been the move to the cloud. Cloud providers such as Amazon Web Services (Amazon), Google Cloud Platform (Google), and Azure (Microsoft) sell time on virtual machines (VMs) that companies can programmatically procure and rent. As a result, companies no longer need to buy and manage their own servers.
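
To make “programmatically procure” concrete, here is a minimal sketch of renting a VM from the command line, assuming the AWS CLI is installed and configured; the image ID below is a hypothetical placeholder, not a real value:

    # Rent a small virtual machine from AWS with a single command.
    # The --image-id value is a hypothetical placeholder; a real
    # machine image ID would go here.
    aws ec2 run-instances --image-id ami-0abc1234def567890 --instance-type t3.micro --count 1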

In addition to these building-block VMs, cloud providers offer a variety of more advanced services built on top of their own virtual machines. More on this later.

Containers

Although VMs are now easily available through cloud providers, running software directly on them is no longer considered best practice in most scenarios.

If you are not familiar with containers at all, you can think of them as extremely lightweight VMs designed to run a single process, such as a web server. This mental model is not completely accurate, but it will work for now.

A host operating system runs a container runtime such as Docker. That runtime is responsible for taking a container image – an artifact containing only the application code and dependencies – and running it.

Unlike a VM, which tries to emulate physical hardware and can be hundreds of MBs in size, containers are typically tens of MBs and can be as small as single-digit MBs.

The ability to package an entire application into a single artifact that can then be passed to a runtime on any host is quite powerful. It opens the door to:

  1. Running identically configured applications locally, in a testing environment, and in production. This is a huge advantage when debugging.
  2. Passing around artifacts from one developer to another. No more “but the tests pass on my machine” scenarios.
  3. Easily consuming and running third-party/open source software, often with a single command, as the sketch after this list shows.
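
As a hedged sketch of that third point, this is all it takes to run a third-party web server once Docker is installed; nginx here is the official web server image published on Docker Hub:

    # Download the official nginx image and run it in the background,
    # mapping port 8080 on the host to port 80 inside the container.
    docker run --detach --publish 8080:80 nginx

The same command, with the same image, behaves identically on a developer laptop, in a testing environment, or on a production host, which is exactly the first point above.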

If you are interested in learning more about the technical inner workings of containers, I highly recommend the Docker documentation.

Container Orchestrators (What is Kubernetes?)

Along with containers have come container orchestrators. Whereas the container runtime is responsible for running a container on a host, a container orchestrator is responsible for scheduling and managing things like how many *instances* of a container are running across multiple hosts.

Together, a container runtime and a container orchestrator allow teams to efficiently deploy containers to a cluster of VMs.

For example, imagine I have an API web server receiving a large amount of traffic. I may want to run multiple instances of it to better handle the traffic. I can specify to my container orchestrator how many instances I would like to run in parallel, and it will manage the scaling logic for me.

The most famous and widely used container orchestrator right now is Kubernetes. If you want to read more about its inner workings, I highly recommend its extensive docs.
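
To give the scaling example above some texture, here is a minimal sketch using Kubernetes’ kubectl command-line tool; my-api is a hypothetical name for the API web server’s deployment:

    # Ask the orchestrator to run three instances of the my-api container
    # across the cluster; Kubernetes decides which hosts they land on.
    kubectl scale deployment my-api --replicas=3

    # Confirm that three instances are now running.
    kubectl get pods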

Example Container-Driven Architecture

Using containers, container orchestrators, and virtual machines as our building blocks, let’s diagram out what an example architecture might look like.

Diagram of a container-driven architecture

The entire architecture revolves around the idea that small pieces of code should be deployed as soon as possible. This typically means releasing at least once a day. Rather than shipping large releases several times a year, the continuous delivery model favors small, incremental changes to a system.

This aligns well with agile principles we have already discussed.

The deployment workflow begins with the version control system. These days that typically means using Git hosted on GitHub, Bitbucket, or GitLab.

The Continuous Integration service ensures that new code meets certain standards before it is merged into the shared codebase.

This typically includes running tests and a linter to make sure there are no errors in the new code. Some companies write custom linters to ensure that the code is formatted correctly as well.
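
What the Continuous Integration service runs varies by team, but a representative sketch might look like the following; the make targets are hypothetical stand-ins for a project’s own commands:

    # Run on every proposed change before it can be merged.
    make lint    # style and static-analysis checks
    make test    # the automated test suite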

After persisting new code in the version control system, a company will use some business logic to decide how and where to deploy it.

Once the criteria for deployment are met, the Continuous Delivery service will pull in the code and use it to build a container image.

The Continuous Delivery service then pushes the new container image to a container registry. The registry is basically a key-value store for container images.

Finally, the Continuous Delivery service will talk to the Container Orchestrator and tell it to run the newly published image.

The orchestrator will then pull the image from the registry and run the image per the orchestrator’s configuration.
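
Putting the whole workflow together, here is a hedged sketch of the commands a Continuous Delivery service might execute; the repository URL, registry address, image tag, and deployment name are all hypothetical:

    # 1. Fetch the newly merged code from version control.
    git clone https://github.com/example/my-api.git && cd my-api

    # 2. Build a container image and tag it with the commit being deployed.
    docker build --tag registry.example.com/my-api:abc123 .

    # 3. Publish the image to the container registry.
    docker push registry.example.com/my-api:abc123

    # 4. Tell the container orchestrator to roll out the new image.
    kubectl set image deployment/my-api my-api=registry.example.com/my-api:abc123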

Conclusion

Engineering best practices have evolved substantially over the last few decades. I hope this guide has helped you gain a better picture of where things stand.
