Many engineering teams are making the switch to a DevOps culture. This has a lot to do with the way the software development cycle has changed over the years. It used to take months to implement a new feature, get it through testing, fix any issues, and finally get it to production.
Now that bug fixes and new features need to be delivered to users faster, that same method of getting changes to production doesn’t work well anymore. Regressions slip through, deploys aren’t consistent, and it can be hard to figure out which artifact is actually in production.
DevOps solves many of these problems and adds consistent deployments through automation. Once you show everyone the benefits of DevOps and start working with it more often, you want to keep those pipelines as efficient as possible. Here are a few ways you can keep your pipelines running smoothly.
Keep separate configurations for development and production
This is especially helpful if you have different services you use for staging and production. Many teams use staging APIs for testing and then switch to production APIs later. When you have these configurations separated, you won’t have to worry about the wrong credentials being used in the wrong environment.
This helps prevent hard-coding values into your app and it helps the CI/CD pipeline make those changes automatically based on other parameters you send. Having separate configs also helps with security. It lets you and the engineering team keep secrets encrypted and no one has to go in and manually update them.
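As a minimal sketch of what this separation can look like (the environment names, URLs, and `APP_ENV` variable here are hypothetical; real credentials would live in encrypted secret storage, not in source control):

```python
import os

# Hypothetical per-environment settings; secrets belong in encrypted
# storage, not in this dictionary or in version control.
CONFIGS = {
    "staging": {
        "api_base_url": "https://staging-api.example.com",
        "debug": True,
    },
    "production": {
        "api_base_url": "https://api.example.com",
        "debug": False,
    },
}

def load_config(env_var="APP_ENV", default="staging"):
    """Pick the config for the current environment.

    The pipeline sets the environment variable for each stage, so no
    values are hard-coded into the app itself.
    """
    env = os.environ.get(env_var, default)
    if env not in CONFIGS:
        raise ValueError(f"Unknown environment: {env!r}")
    return CONFIGS[env]
```

With this in place, the pipeline only has to export one variable per stage and the right endpoints and flags follow automatically.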
Run the fastest tests first
To improve performance, you want to get feedback as soon as possible. That means running your faster tests first. If they are going to fail, it’s best to find that out before you spend time running all of the other tests. That’s part of the “fail fast” principle.
You want to run tests in the order of how long they take to run from shortest to longest. That way all of those little issues can get solved early, before you go too deep into the rest of the pipeline. When you run tests like this, you don’t waste as much time getting feedback to your developers.
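One simple way to get this ordering is to sort suites by how long they took on previous runs. A sketch, using hypothetical suite names and durations:

```python
# Hypothetical durations (in seconds) recorded from previous pipeline runs.
LAST_DURATIONS = {
    "lint": 4.5,
    "unit": 12.0,
    "integration": 210.0,
    "end_to_end": 840.0,
}

def order_suites(suites, durations):
    """Order test suites from fastest to slowest so failures surface early.

    Suites with no recorded history sort first: they are usually new
    and cheap, and we want their feedback right away.
    """
    return sorted(suites, key=lambda s: durations.get(s, 0.0))
```

A pipeline step can then iterate over the ordered list and bail out at the first failing suite.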
Focus on CI first
The continuous integration part of a CI/CD pipeline is arguably the most important part. This is where all of your code comes from version control and starts going through the build process. In this phase, you’ll be running all kinds of tests.
There will be security tests and maybe some other static tests being run on just the code here. This is where your code will get bundled to be shipped as an artifact. If any issues happen in the CI phase, the rest of the pipeline might not run. Even worse, you might get a bad artifact that messes up your pre-production environments and even production, where users interact with your app.
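That fail-early behavior can be sketched as a tiny stage runner (the stage names and callables here are hypothetical stand-ins for real lint, test, and build steps):

```python
def run_stages(stages):
    """Run CI stages in order, stopping at the first failure.

    Each stage is a (name, callable) pair where the callable returns
    True on success. Returns (succeeded, results) so a later phase can
    decide whether it's safe to publish an artifact.
    """
    results = {}
    for name, stage in stages:
        ok = stage()
        results[name] = ok
        if not ok:
            # Don't keep going: a bad artifact is worse than no artifact.
            return False, results
    return True, results
```

The key property is that nothing after a failed stage runs, so a broken build can never produce an artifact that reaches pre-production.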
Monitor your pipeline
Learning about what’s happening as your pipeline goes through the different phases will show you areas that you can improve. Monitoring your application is commonly done, so make sure you’re doing the same for your pipeline. If there are any errors coming up in the process, send a notification to the right people.
There are a number of tools you can use to monitor your pipeline and there are experiments you can run to figure out what to measure. A few things you might consider monitoring are the build to deploy time and CPU usage. Doing this kind of monitoring will give you data that establishes the steady-state of your pipeline and it will make any abnormalities stand out more.
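A sketch of what flagging an abnormality against the steady state can look like, assuming you keep a history of a metric such as build-to-deploy time in seconds:

```python
import statistics

def is_abnormal(history, latest, threshold=3.0):
    """Flag a pipeline metric that deviates from its steady state.

    `history` is a list of past measurements (e.g. build-to-deploy
    times in seconds). A value more than `threshold` standard
    deviations from the historical mean counts as abnormal.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) > threshold * stdev
```

Something this simple is enough to turn raw timing data into an alert that goes to the right people, per the notification advice above.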
Only deploy using the CI/CD pipeline
There was a time when we deployed to production directly from our local machine after making changes and this led to a lot of issues. When you have multiple people deploying different changes to the same files at the same time, something is destined to get overwritten or corrupted. That’s why it’s important to make your CI/CD pipeline the only way to deploy changes to any environment.
This will ensure that multiple deploys run in the correct order so that changes aren’t overwritten. It will also make sure that any regressions introduced by a change get caught in the right order. That way you don’t have a bunch of confused developers trying to figure out how some weird change that nobody remembers making got to production.
Use containers
From time to time, we run into the issue where code changes work on our local machines but not in production. There are a ton of environment differences and permissions that exist between your local machine and the production server. Those are usually really hard to track down and can lead to a longer debug time in production.
That’s where containers come in. Containers make it so your artifact will run the exact same way no matter what environment you’re using. While many CI/CD tools do this for you in the background, it’s better to have your own Docker images and containers that you know for sure work. That way if a production issue does come up, you’ll be able to reproduce it locally and fix it without disturbing users as much.
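As a sketch of reproducing a production issue locally (the image name, tag, and environment variables here are hypothetical, and this assumes Docker is installed), a small helper can rebuild the exact `docker run` invocation for the artifact that shipped:

```python
import subprocess

def docker_run_cmd(image, tag, env):
    """Build the `docker run` command for the exact artifact that shipped.

    Pinning the tag means the local container runs the same image the
    pipeline deployed, so environment differences disappear.
    """
    cmd = ["docker", "run", "--rm"]
    for key, value in sorted(env.items()):
        cmd += ["-e", f"{key}={value}"]
    cmd.append(f"{image}:{tag}")
    return cmd

def reproduce_locally(image, tag, env):
    # Shells out to Docker; only works on a machine with Docker running.
    return subprocess.run(docker_run_cmd(image, tag, env), check=True)
```

Because the command is built from the deployed tag rather than a local build, whatever you see in the container locally is what production is actually running.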
Use production-like environments
The purpose of your pre-production environments is to help you do accurate testing before your changes are live. That means all of the unit tests, integration tests, security tests, and manual QA tests happen in some pre-production environment. That’s why it’s critical that they all match production as closely as possible.
Your pre-production environments are only useful if they help you figure out what’s happening in production. If you’re working with data that doesn’t look remotely like anything in production, try to get an obfuscated copy of production data. If you use third-party services, try signing up for an account that’s only used in pre-production environments.
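A minimal sketch of the obfuscation step (the record shape and field names here are hypothetical): sensitive values are replaced with stable hashes so the data keeps the shape of production without exposing anyone's details.

```python
import hashlib

def obfuscate_record(record, sensitive_fields=("email", "name", "phone")):
    """Return a copy of `record` with sensitive values replaced.

    Each sensitive value becomes a short, stable hash, so the same
    input always maps to the same placeholder. That preserves joins
    and distributions in the test data without leaking real values.
    """
    clean = dict(record)
    for field in sensitive_fields:
        if field in clean:
            digest = hashlib.sha256(str(clean[field]).encode()).hexdigest()[:12]
            clean[field] = f"{field}_{digest}"
    return clean
```

Run over an export of production data, this gives pre-production something realistic to test against without copying personal information around.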
Fully automate deploys
The point of implementing a deploy pipeline is to remove all of the manual steps from it. That means no one should have to do anything once the pipeline starts running. No more button clicks or approvals are needed past that initial start.
If you have your pipeline set up to start running after you merge a branch or push changes, that’s even better. The goal is to make it so that when all of the code changes are done, the developer just pushes them, gets the code reviewed, and the rest happens automatically.
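A sketch of that trigger decision, assuming a hypothetical webhook event shape with `type` and `branch` fields (real version-control webhooks carry more detail, but the idea is the same):

```python
# Hypothetical set of branches that should kick off the full pipeline.
DEPLOYABLE_BRANCHES = {"main", "release"}

def should_trigger_pipeline(event):
    """Decide whether a version-control webhook event starts the pipeline.

    Only pushes to deployable branches run the full deploy; feature
    branches and other event types are ignored.
    """
    return (
        event.get("type") == "push"
        and event.get("branch") in DEPLOYABLE_BRANCHES
    )
```

Everything past this check is automatic: the developer pushes, the event fires, and the pipeline takes it from there.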
Run tests locally first
I know that a lot of developers forget to do this (or just don’t), but it saves time on your deploys. When you go ahead and run those unit tests on your local machine, you can fix the bugs before they get reported by your CI/CD monitoring. By doing this, you save yourself from running any part of the pipeline before the code is truly ready.
The reason we like to do this is because sometimes other people need to deploy changes and that creates a queue. By running tests locally first, you make sure that anything in the queue will pass unit testing and it won’t block other deploys by taking up extra time.
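One way to make this automatic rather than a habit is a git pre-push hook. A sketch, assuming a hypothetical `tests/unit` directory and pytest as the runner:

```python
import subprocess
import sys

def run_local_tests(cmd=("python", "-m", "pytest", "-x", "tests/unit")):
    """Run the fast unit tests locally; return True when they pass.

    Saved as .git/hooks/pre-push, this blocks the push (and the
    pipeline run it would trigger) until the cheap tests are green.
    """
    result = subprocess.run(list(cmd))
    return result.returncode == 0

if __name__ == "__main__":
    sys.exit(0 if run_local_tests() else 1)
```

Because the hook exits non-zero on failure, git refuses the push, so nothing that would fail the first pipeline stage ever lands in the deploy queue.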
Make a pipeline that fits the project
No two pipelines look exactly the same. The way you deploy an application depends heavily on how your system is set up. All of the third-party services you use, the programming languages you work with, and the libraries you depend on factor into your deploy process.
Once you know everything your pipeline needs to handle to deploy your application, you can look into the best tools for the job. This is one of the few times when copying and pasting code probably won’t be the solution, just because of all the little things that make a pipeline work.
Automate manual QA where you can
This is probably trickier than some of the other things on this list. There are certain checks that just can’t be automated if you want to make sure that users will have the right experience. That said, there are a few tools out there that help make this a little more automated.
Selenium and TestingWhiz are a couple of the most popular automated testing tools. They don’t replace visual testing, but they do make sure a lot of the common issues are caught without a person going through and clicking every single button.
There are a lot of different things you can do to optimize your pipelines and this is just a short list of them. As you gather statistics on your pipeline runs, you can start to see places that can be improved. Then you can try and implement one or two of the things from this list to make it better.
If you can, spend the time to implement everything we’ve discussed so far. All of these are best practices for CI/CD pipelines anyway. They just clean up things that might have been left out of the initial pipeline configuration. Remember, these practices change as the application gets updated or business needs change.
Make sure you follow me on Twitter because I post about stuff like this and other tech topics all the time!
If you’re wondering which tool you should check out first, try Conducto for your CI/CD pipeline. It’s pretty easy to get up and running and it’s even easier to debug while it’s running live.