Neither zero tests nor full coverage, just the right testing for your project

13/02/2026

There are teams that don't test anything and pray before every deployment. And there are teams that aim for 100% coverage and don't deliver on time. Both have a testing problem. The question isn't whether to test, but how much, where, and why.

Your team has a project in production. Every time you touch something, someone asks, "Have we tested this?" The response is usually an awkward silence or an "I think so." And when things break (and they do) the conversation is always the same: "We need more testing."

But "more tests" isn't a testing strategy. It's an emotional reaction to a bug in production. And it usually leads to one of two extremes: either the team continues to not test (because nobody has the time), or someone decides that everything needs to be covered and the team spends more time writing tests than product code.

The question you should be asking yourself isn't "How many tests do we have?" It's "Are we testing what matters?" And the answer to that question changes depending on your project, your team, and the stage you're in.

The myth of 100% coverage

There's a persistent misconception in the industry: that a good project has near-100% test coverage. It's an easy metric to measure, easy to put on a dashboard, and easy to set as a goal. And yet, it's one of the worst metrics for determining whether your testing is sufficient.

A project can have 95% coverage and still break in production every week. How? Because coverage measures whether a line of code is executed during a test, not whether the test verifies something useful. A test that calls a function but doesn't check the result adds coverage without providing confidence.
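
To make the difference concrete, here is a minimal pytest-style sketch; the function and test names are invented for illustration:

```python
# Hypothetical business function, invented for this example.
def apply_discount(price, rate):
    if rate < 0 or rate > 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

# This "test" executes every line of apply_discount, so coverage
# tools report the function as fully covered...
def test_covers_but_verifies_nothing():
    apply_discount(100, 0.25)

# ...but only this one would actually catch a broken calculation.
def test_applies_discount_correctly():
    assert apply_discount(100, 0.25) == 75.0
```

Both tests produce identical coverage numbers; only the second one provides confidence.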

Martin Fowler explains it well: test coverage tells you what code isn't tested, but it doesn't tell you if the tested code is well tested. It's an indicator of gaps, not quality. If your team becomes obsessed with the number, they'll end up writing tests to cover trivial lines of code (getters, setters, constructors) while ignoring the complex business logic where the real bugs reside.

Coverage is useful as a compass. If you have 15% coverage, there are probably critical areas left unprotected. If you have 85%, chasing 100% is probably not worth the effort. But using it as a goal is like measuring a company's health solely by the number of employees.

The testing pyramid (and why the order matters)

If there's one model that has stood the test of time, it's the testing pyramid. The idea is simple: many unit tests at the base, fewer integration tests in the middle, and few end-to-end tests at the top.

Unit tests verify small pieces of logic in isolation. In a Django backend, a unit test checks that a function correctly calculates a discounted price or that a serializer properly validates input data. They are quick to write, quick to run, and inexpensive to maintain. A well-tested Django project can run hundreds of unit tests with pytest in seconds.
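
As a sketch of how small and fast these can be, here is a plain pytest-style unit test of a hypothetical validation rule, the kind a serializer would enforce (names are invented for illustration; pytest collects plain functions like these without any boilerplate):

```python
# Hypothetical validation helper, standing in for serializer logic.
def validate_quantity(value):
    if not isinstance(value, int) or value < 1:
        raise ValueError("quantity must be a positive integer")
    return value

def test_accepts_valid_quantity():
    assert validate_quantity(3) == 3

def test_rejects_zero():
    try:
        validate_quantity(0)
    except ValueError:
        pass  # expected: invalid input is refused
    else:
        raise AssertionError("zero quantity should be rejected")
```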

Integration tests verify that various components work together. This includes ensuring that the Django view returns the correct JSON when it receives a request, that the ORM generates the expected query, and that the API responds with the appropriate status code. They are slower than unit tests and require more infrastructure (such as a test database), but they detect problems that unit tests miss.
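
In Django you would drive this through the test client; to keep the sketch self-contained and runnable without a project, this example fakes the view as a plain function, but the shape of the assertions (status code plus JSON body) is the same:

```python
import json

# Stand-in for a Django view: maps a request to (status, JSON body).
# In a real project you'd exercise it via Django's test client.
def product_detail(product_id, catalog):
    product = catalog.get(product_id)
    if product is None:
        return 404, json.dumps({"detail": "not found"})
    return 200, json.dumps({"id": product_id, "name": product["name"]})

def test_returns_product_as_json():
    status, body = product_detail(1, {1: {"name": "Keyboard"}})
    assert status == 200
    assert json.loads(body) == {"id": 1, "name": "Keyboard"}

def test_unknown_product_returns_404():
    status, _ = product_detail(99, {})
    assert status == 404
```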

End-to-end tests simulate the complete user flow. In a Flutter app, this means the test opens the app, navigates through screens, fills out a form, and verifies that the result is correct. They are the slowest, most fragile, and most expensive to maintain. But when they pass, they provide a level of confidence that no other type of test can.

The pyramid works because it tells you where to invest. If your team has limited resources—and every team does—invest in the base first. Many good unit tests are more valuable than a few fragile end-to-end tests.

How much testing do you need depending on the phase of your project?

This is where most guides fail: they treat testing as something static. "Your project needs X% coverage." But the amount of testing that makes sense depends on where you are.

Validation phase (MVP, prototype). If you're validating an idea with a minimum viable product, testing everything is a prioritization mistake. The product might pivot next week, and those tests you wrote for a flow that no longer exists are a waste of time. In this phase, only a few tests are essential: that the app starts up, that the main flow works, and that payments (if any) don't fail. Everything else can wait.

Growth phase. Your product has users, is generating value, and the team is adding features. This is where testing starts to pay real dividends. Every new feature should have tests of the business logic it introduces. Bugs that appear in production should generate a regression test before being fixed (first write the test that reproduces the bug, then fix it). The goal isn't to cover everything: it's to protect what already works while you build new features.

Maturity phase. Your product is stable, has a solid user base, and changes tend to be incremental. Here, testing investment focuses on critical business flows: checkout, registration, and integration with external services. An end-to-end test that verifies the entire purchase flow is worth more than fifty unit tests of auxiliary functions.

Scaling phase. Multiple teams work on the same codebase. Tests shift from being a quality-assurance tool to a coordination tool. If team A changes an internal API and team B's tests fail, the CI/CD pipeline catches it before it reaches production. This is the phase where integration and contract tests deliver the greatest return on investment.

The 80/20 rule applied to testing

If you had to choose what to test with limited resources, the Pareto principle is surprisingly useful. 80% of production bugs come from 20% of the code. And that 20% is usually predictable.

Business logic with complex conditionals. If a function has three nested if statements with different possible paths, it's a candidate for testing. If it's a getter that returns a field, it probably isn't.

Integrations with external services. Payment gateways, third-party APIs, email services. Any point where your code communicates with another system is a potential source of errors. Tests here don't need to call the actual service (use mocks), but they do need to verify that your code handles expected responses and errors correctly.
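
A minimal sketch with Python's unittest.mock, assuming a hypothetical process_order function and a payment gateway client (both invented for illustration):

```python
from unittest.mock import Mock

# Hypothetical checkout logic; gateway.charge() would call the
# real payment service in production.
def process_order(order_total, gateway):
    response = gateway.charge(amount=order_total)
    if response["status"] == "declined":
        return "payment_failed"
    return "confirmed"

def test_handles_declined_payment():
    # The mock replaces the real gateway: no network, no real charges.
    gateway = Mock()
    gateway.charge.return_value = {"status": "declined"}
    assert process_order(49.99, gateway) == "payment_failed"
    gateway.charge.assert_called_once_with(amount=49.99)
```

The point isn't to verify the gateway itself; it's to verify that your code reacts correctly to whatever the gateway might answer.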

Data transformations. Functions that receive data in one format and return it in another are natural candidates for unit tests. Examples include a Django serializer that transforms an object into JSON for the API, a parser that processes an imported CSV file, and a function that calculates a price with accumulated discounts.
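
For example, a unit test for a hypothetical CSV parser needs nothing but an input string and the expected output:

```python
import csv
import io

# Hypothetical parser for an imported product CSV, invented for
# this example.
def parse_products(raw_csv):
    reader = csv.DictReader(io.StringIO(raw_csv))
    return [{"name": row["name"], "price": float(row["price"])}
            for row in reader]

def test_parses_csv_rows():
    raw = "name,price\nKeyboard,49.90\nMouse,19.90\n"
    assert parse_products(raw) == [
        {"name": "Keyboard", "price": 49.9},
        {"name": "Mouse", "price": 19.9},
    ]
```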

Critical user flows. The actions users take most frequently (login, purchase, registration, search) deserve at least one integration test that verifies the entire flow. There's no need to simulate the browser: in Django, the test client allows you to make HTTP requests to the test server and verify responses without leaving pytest.

What you probably don't need to test: pure presentation code (styles, layouts), trivial configurations, database migrations, and automatically generated code. Not because they can't fail, but because the cost of maintaining those tests outweighs the benefit they provide.

Tests that subtract instead of add

Not all tests make your project more reliable. Some make it slower, more fragile, and harder to change. Recognizing these is just as important as writing the good ones.

Fragile tests. A test that fails every time you change an implementation detail (the order of a JSON object, the exact text of a message, the position of an element on the screen) creates noise. The team starts ignoring pipeline failures because "it's definitely that test that always fails." And the day a fragile test fails for a real reason, nobody pays attention to it.
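
The difference is often a single assertion. A sketch with an invented notification helper:

```python
# Hypothetical helper, invented for this example.
def notify_user(name):
    return {"message": f"Hi {name}, your order shipped!",
            "level": "info"}

def test_fragile():
    # Breaks the moment anyone rewords the message.
    assert notify_user("Ana")["message"] == "Hi Ana, your order shipped!"

def test_robust():
    # Verifies the behavior that matters: the user is addressed
    # and the notification level is correct.
    result = notify_user("Ana")
    assert "Ana" in result["message"]
    assert result["level"] == "info"
```

The robust version survives a copy change; the fragile one turns every wording tweak into a red pipeline.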

Slow tests without justification. A unit test that takes more than a second has a problem. A test suite that takes 20 minutes to run causes the team to avoid running them locally and rely exclusively on CI/CD. This slows down the feedback loop and hinders development. In a Django project, if tests take more than a couple of minutes, check if you're using the database where you could use mocks.

Duplicate tests. Three tests that verify the same thing in three different ways don't give you three times the confidence. They give you the confidence of one test and the maintenance cost of three. If you change the logic, you have to update three tests instead of one.

Tests that test the framework. Checking that Django returns a 404 when the URL doesn't exist is testing Django, not your code. Test your logic, not the framework's. The Django team already has its own tests for that.

The sign that your test suite has problems isn't that the tests are failing. It's that the team doesn't trust them. If someone says, "That's definitely a false positive," you have a more serious problem than just a bug in production.

Testing as an investment, not an obligation

There's a helpful way to think about testing: as an insurance policy. You pay a cost now (development time) to avoid a larger cost later (production bugs, rollbacks, angry users, lost revenue).

As with any insurance policy, it makes sense to match the investment to the risk. You don't insure your bicycle with the same policy as your house. You don't test a contact form as thoroughly as you test a payment system.

According to the DORA State of DevOps report, teams with strong testing practices deploy more frequently and have lower change failure rates. This isn't because they have more tests, but because the tests they have are in the right places. Well-targeted testing allows you to move faster, not slower.

If your team feels that testing is slowing them down, the problem isn't testing itself. It's that you're testing the wrong things, in the wrong place, or with the wrong tools.

Where to begin if you have almost nothing today?

If your project has few or no tests, the temptation is to "do a testing sprint" to catch up. Don't do it. That results in mediocre, hastily written tests that no one will maintain.

Instead, adopt a simple rule: every bug that reaches production generates a test before it's fixed. First, you reproduce the bug with an automated test. Then you fix it. This way, your test suite grows organically, focused precisely on the real problems in your project.
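
The workflow in miniature, with an invented bug for illustration: imagine cart_total crashed on an empty cart in production. First the regression test, then the fix:

```python
# Hypothetical function that broke in production on empty carts.
def cart_total(items):
    # The fix: guard the empty case instead of assuming items exist.
    if not items:
        return 0.0
    return sum(item["price"] * item["qty"] for item in items)

def test_regression_empty_cart():
    # Written before the fix, this test reproduced the crash;
    # now it guards against the bug ever coming back.
    assert cart_total([]) == 0.0
```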

Start with the flows you're most afraid to touch. Those parts of the code where the whole team says, "Be careful with that." That's where a couple of integration tests can be a game-changer. In a Django project with Wagtail, it might be the content publishing flow. In a Flutter app, it could be the login or in-app purchase flow.

Integrate tests into your CI pipeline from day one. A test that only runs locally is practically useless. When tests run on every push, the team takes them seriously.

And most importantly: don't measure your progress in percentage coverage. Measure it in confidence. Does your team deploy on a Friday afternoon without feeling nervous? If the answer is yes, your testing is probably sufficient. If the answer is no, you know where to start.