
Pre-Deployment Best Practices: 3 for Your Checklist


By Testim

It’s been a long day. There’s a hot new feature you’ve been working on. You’re so excited to get it to your customers, who are dying to have it. You’re tying up just one little bit of logic, and then it’s done. Now you’re ready to push it, right? Those customers can’t wait, and the feature’s done. It’s time to deploy!

Not so fast, my friend. Sure, the feature work might be done, but it’s irresponsible to deploy that code without knowing that it actually works. Deploying a new feature that’s riddled with bugs or security defects isn’t doing your customers any favors. That’s why it’s important to have a checklist you go through before every deployment. Yes, it will take a little longer for that feature to reach your customers, and if they’re anything like the customers I’ve worked with, they can be horribly impatient. That’s why the checklist you run through before deploying shouldn’t be too onerous. Finishing a feature and then letting it sit in quality assurance for a month isn’t doing you or your customers any favors either.

In this post, we’re going to talk about how to build a high-quality pre-deployment checklist that will both keep your velocity up and ensure that the work you’re doing meets your customers’ needs.

Building Quality in Three Steps

It’s impossible to ship software without bugs. Sure, one commit might be bug-free, but you’ll never stamp out every bug from your codebase. That’s not a reasonable goal, and we’re reasonable people. So, instead of aiming to ensure that every deployment is perfect, we’re going to talk about best practices: the steps you can take before deploying that will make sure your code is delivering what your customers need while minimizing bugs and maximizing velocity.

Step One: Verify Your Tests

Did you notice something missing from the story in our introduction? The feature was “done.” But at no point did we ever verify that. Our hypothetical developer was ready to ship their code to customers without ever confirming that it did what they needed it to do. That developer needs to start writing some tests. There are a lot of reasons why developers don’t write tests, but whatever the reason, skipping them means adding risk to the code they ship. Mitigating that risk is what shifting testing left is all about. It’s a common pattern for a developer to write a little bit of code that serves some small unit of functionality, build around that code, and then discover a place where they can refactor a little bit of logic.

After that refactor, the developer keeps working. It’s not until hours, days, or even weeks later that they discover that the refactoring introduced a small logic bug. Sometimes that bug isn’t particularly impactful. Sometimes it leads to security issues or data corruption. The key is that the bug isn’t caught until the code is tested. By testing earlier in the process, the developer is more likely to catch those kinds of bugs when they’re cheapest to fix: during the coding process.
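To make that concrete, here’s a minimal sketch of the kind of unit test that catches a refactoring slip as soon as it happens. It assumes a TypeScript project that runs tests with Jest; the applyDiscount function and its discount rules are hypothetical, purely for illustration.

```typescript
// discount.test.ts -- a sketch, assuming a TypeScript project using Jest.
// applyDiscount and its rules are hypothetical, purely for illustration.
import { describe, expect, it } from "@jest/globals";

// Imagine this helper was extracted during a refactor. A subtle slip would be
// changing >= to >, silently dropping the discount for orders of exactly 100.
function applyDiscount(total: number, isMember: boolean): number {
  if (isMember && total >= 100) {
    return total * 0.9; // members get 10% off orders of 100 or more
  }
  return total;
}

describe("applyDiscount", () => {
  it("gives members 10% off right at the 100 threshold", () => {
    expect(applyDiscount(100, true)).toBeCloseTo(90);
  });

  it("leaves non-member totals unchanged", () => {
    expect(applyDiscount(100, false)).toBe(100);
  });
});
```

A test like the first one fails the moment the threshold comparison changes, which is exactly the kind of quiet logic bug that otherwise surfaces weeks later.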

Sometimes, deployment problems don’t come from a lack of tests. Some development teams have plenty of tests; it’s just that they’re broken and nobody takes the time to fix them. Instead of a broken test blocking deployment, it gets ignored by the team. Mature teams do the opposite: they use features like git hooks or CI/CD pipelines to run tests automatically after every commit, so a failing test is noticed and fixed right away. By verifying that their tests work, software teams help mitigate the risks associated with new code.
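One lightweight way to enforce that is a hook script that refuses to push when the suite fails. The sketch below assumes a Node/TypeScript project whose tests run with npm test and a hook manager (for example, husky) wired to call it before each push; the file name and command are illustrative, not a prescription.

```typescript
// scripts/pre-push.ts -- a sketch of a test gate before code leaves a machine.
// Assumes a Node project whose suite runs with "npm test" and a hook manager
// (for example husky) configured to invoke this script on pre-push.
import { execSync } from "node:child_process";

try {
  // stdio: "inherit" streams the test output straight to the terminal.
  execSync("npm test", { stdio: "inherit" });
} catch {
  console.error("Tests failed; refusing to push. Fix the broken tests first.");
  process.exit(1); // a non-zero exit code makes git abort the push
}
```

A CI pipeline running the same command on every commit gives you a second, shared safety net in case the local hook is skipped.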

Step Two: Manually Verify Functionality

As the world has transitioned toward continuous delivery of features and code, the role of manual quality assurance is in flux. Finding the right balance between automated and manual testing is challenging, and what’s right for your team isn’t necessarily right for another team. The key is to not skip manual testing entirely. Again, this is time you’re investing between code being finished and features reaching customers. As developers, we naturally want to automate all our tests so we can verify that something works in a few minutes instead of waiting hours or days for quality assurance. Ideally, your QA workflow doesn’t take days to verify a new feature, but it’s still an important step.

As a developer, it’s important to recognize that not all testing lends itself to efficient automation. Sometimes, you’re going to need to wait for manual testing. A key part of this process is developers getting better at testing their own features. Developers are often guilty of testing a tiny slice of functionality and then declaring a feature finished, and that regularly goes hand in hand with writing insufficient (or zero) unit tests. By building effective unit tests and performing high-quality manual tests of their own, developers free up quality assurance employees to focus on more difficult tests. There are always parts of an application that are difficult to test manually, like those that rely on a third-party service. A developer who builds and performs excellent tests on their own code frees up QA engineers to focus on those high-leverage manual tests.
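For instance, a developer can cover logic that depends on a third-party service with a hand-rolled fake, so QA never has to force that edge case manually against the real system. This is a minimal sketch assuming TypeScript and Jest; the PaymentGateway interface and Checkout class are hypothetical stand-ins for whatever external dependency your application calls.

```typescript
// checkout.test.ts -- a sketch of unit-testing logic that depends on a
// third-party service, assuming Jest. PaymentGateway and Checkout are
// hypothetical stand-ins for a real external dependency.
import { describe, expect, it } from "@jest/globals";

interface PaymentGateway {
  charge(amountCents: number): Promise<{ ok: boolean }>;
}

class Checkout {
  constructor(private readonly gateway: PaymentGateway) {}

  async pay(amountCents: number): Promise<"paid" | "declined"> {
    const result = await this.gateway.charge(amountCents);
    return result.ok ? "paid" : "declined";
  }
}

describe("Checkout", () => {
  it("reports a declined charge without touching the real service", async () => {
    // A hand-rolled fake forces the decline path; nobody has to reproduce it
    // manually against the real payment provider.
    const decliningGateway: PaymentGateway = {
      charge: async () => ({ ok: false }),
    };
    await expect(new Checkout(decliningGateway).pay(5000)).resolves.toBe("declined");
  });
});
```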

Step Three: End-To-End Testing

You’re a developer who writes great unit tests, and you’re fanatical about testing your own code. Awesome! You’re probably wondering how you can continue to ensure high quality from your team. That’s where high-quality end-to-end testing comes into the picture. End-to-end tests are often much more complicated to write than unit tests, so a lot of teams skip them. They feel like end-to-end tests are too much of an investment to build and will be too brittle once they’re running.

Nothing could be further from the truth. Automated end-to-end testing systems based on tools like Testim’s test automation are both easy to build tests with and resilient to changes in your application. Tests built with Testim become part of your test automation toolbox, helping make your application more reliable and ensuring that features work the way they’re supposed to. They’re easy enough to build that developers, QA engineers, or project managers can add tests to the application. They integrate directly with CI/CD tools, making it easy to run those end-to-end tests on an intermediate environment before code changes are promoted to production. Today’s software moves too fast for an engineer to re-test whether clicking “login” logs the user in every time your team wants to deploy. Testim’s fast test authoring means they can automate that work and focus on the harder testing problems.
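As an illustration of the kind of check such a tool automates, here’s a sketch of the “clicking login logs the user in” workflow. It’s written with Playwright rather than Testim’s own recorder, and the URL, selectors, and credentials are all hypothetical.

```typescript
// login.e2e.ts -- an illustrative end-to-end check of the login workflow.
// This sketch uses Playwright rather than Testim's own recorder; the URL,
// selectors, and credentials are all hypothetical.
import { expect, test } from "@playwright/test";

test("clicking login lands the user on the dashboard", async ({ page }) => {
  await page.goto("https://staging.example.com/login");
  await page.fill("#email", "qa-user@example.com");
  await page.fill("#password", process.env.E2E_PASSWORD ?? "");
  await page.click("button[type=submit]");

  // The whole stack, browser through back end, has to work for this to pass.
  await expect(page).toHaveURL(/\/dashboard/);
});
```

Running a check like this against an intermediate environment on every change means nobody has to click through the login flow by hand before each deploy.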

The Best Practice Is Continuous Improvement

If your team isn’t taking any of these steps right now, getting to best practice can seem like a big hill to climb. The important thing to understand is that you don’t have to do it all at once. The most important part of a DevOps mindset is continuous improvement. You don’t need to write unit tests for your entire codebase tomorrow. Your end-to-end tests don’t need to cover every workflow in a day. Instead, you can start with one easily automated workflow on a free Testim account and learn as you go. Start with a critical-path workflow in Testim and run it on a schedule to ensure that it continues to function properly. The key is to be constantly learning and using what you learn to improve your next deployment. The best day to improve your deployment checklist is today. What are you waiting for?
