Why Your API Tests Should Live in Your Codebase

Griffin Team

Open any decent codebase and you'll find unit tests next to the code they test. Check the CI config and those tests run on every push. Now ask: where do your API tests live?

For most teams, the answer is "somewhere else." A Postman workspace. Some cURL commands in a doc. An OpenAPI file nobody updates. Whatever the specifics, API tests are disconnected from the code and PR workflow where actual changes happen.

This causes problems.

The Postman collection graveyard

Here's what happens with external API testing tools.

Someone creates a collection when the API launches. It's thorough, organized, maybe even parameterized correctly. For a few weeks, it gets used. Then the API changes: a field is renamed, a new header becomes required, an endpoint moves. The collection doesn't.

Why? Because updating it means switching tools. The developer who changed the endpoint updates the route handler, types, frontend client, and docs. But they skip the collection because it's not in the PR.

Months later, someone opens the collection. Half the requests fail. Welcome to the graveyard.

This isn't a discipline problem. It's a workflow design problem.

Postman's 2024 State of the API Report confirms it: teams struggle with documentation debt, visibility, and keeping testing consistent as APIs change.

We already solved this for unit tests

Unit testing used to work differently. Tests were separate projects, maintained by separate teams, run on separate schedules. Sound familiar?

The industry figured out a better way: tests live with the code. Jest files sit next to React components. Go test files share the same package. Pytest modules mirror the source tree. This wasn't just style—it changed how teams work:

  • Tests update in the same commit. Rename a function, rename the test. Same PR, same review, same deploy.
  • New developers find tests naturally. No wiki page needed. They're right there.
  • CI catches regressions automatically. No "remember to run the tests" step. They're part of the build.
  • Code review covers test quality. Reviewers see test changes with code changes. They can ask "where's the test for this edge case?" in the same PR.
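To make "rename a function, rename the test" concrete, here is a minimal sketch in Python. The file names and the function are hypothetical; the point is that because the test file sits next to the source file, a rename forces both to change in the same diff, and pytest picks the test up with no extra wiring.

```python
# billing/tax.py (hypothetical module): renamed from calc() to
# sales_tax() in this commit
def sales_tax(amount_cents: int, rate: float) -> int:
    """Return the tax on amount_cents at the given rate, in cents,
    truncated toward zero."""
    return int(amount_cents * rate)


# billing/test_tax.py (hypothetical): lives next to the source,
# so the rename and the test update ship in the same commit
def test_sales_tax_truncates_to_whole_cents():
    assert sales_tax(999, 0.07) == 69
```

If the rename landed without the test update, the old test name would still call calc() and fail immediately in CI, in the same PR that introduced the change.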

When API tests live outside the repo, you lose this.

Five problems with external API tests

1. No version control

External tests aren't versioned with your code. You lose the commit trail. "When did this test start expecting 201 instead of 200?" "Who changed the auth header format?" Some tools offer versioning, but it lives outside the code review process where most teams actually validate API changes.

2. No code review

This is the big one. When a developer changes an endpoint and updates tests in the same PR, reviewers verify behavior and coverage together. They catch missing edge cases before merge.

External tests? The review loop is weak or nonexistent. Test updates happen silently, or not at all.

3. No real CI/CD integration

Sure, you can run external collections in CI. But that setup gets messy: export/sync steps, separate environment management, more places for drift.

Repo-native tests are simpler. Code and tests are one artifact. CI integration stays reliable.

4. Context switching kills momentum

Every time a developer leaves their editor to update tests in another tool, they lose flow. It's not just time—it's mental overhead.

That friction compounds. "Update external API tests" is easy to skip because nothing enforces it.

5. Onboarding is harder

New developer joins, clones the repo, starts exploring. If tests live in code, they find them. They learn API behavior and edge cases naturally.

Tests elsewhere? Add onboarding steps, access requests, tribal knowledge.

A concrete example

Say POST /v1/projects now requires organizationId in the request body.

Repo-native workflow:

  • Endpoint change and test update in the same PR
  • Reviewer sees both, asks for missing validation
  • CI fails if tests still send the old payload

External collection workflow:

  • Endpoint change merges
  • Collection update tracked separately or forgotten
  • Breakage shows up in staging, support tickets, or customer integrations

Same team, same intent, different results.
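A hedged sketch of the repo-native side of this example, in Python. The handler, module layout, and pytest-style tests are all hypothetical; what matters is that the new organizationId requirement and the tests asserting it land in the same commit, so CI fails the moment anything still sends the old payload and expects success.

```python
# projects/handlers.py (hypothetical module): the endpoint change
def create_project(payload: dict) -> tuple[int, dict]:
    """Handle POST /v1/projects. organizationId is now required."""
    if "organizationId" not in payload:
        return 422, {"error": "organizationId is required"}
    return 201, {"id": "proj_1", "organizationId": payload["organizationId"]}


# projects/test_handlers.py (hypothetical, same PR): the test update
def test_old_payload_is_rejected():
    # the pre-change payload, without organizationId
    status, body = create_project({"name": "demo"})
    assert status == 422

def test_create_project_with_organization_id():
    status, body = create_project({"name": "demo", "organizationId": "org_42"})
    assert status == 201
    assert body["organizationId"] == "org_42"
```

If the endpoint change merged without the test update, the stale test would keep sending the old payload, CI would fail on that branch, and the break would surface in review rather than in staging or a customer integration.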

What repo-native API tests look like

API tests live in your repo, near the handlers they validate. Endpoint changes and test updates happen in the same commit. PRs show behavior and verification together.

CI runs tests on every push. No export step. No sync drift. Same version control, same review, same automation as everything else.

New developer joins? Tests are visible on day one. CI fails? Fix it like you fix anything else: branch, fix, PR, merge.

This should be the default for API testing.

Why Griffin is different

Griffin is built around this model:

  • API tests are files in your codebase
  • Test changes get reviewed in PRs with endpoint changes
  • CI runs API tests like any other tests
  • New teammates find and run tests with normal dev setup

Git and PRs, no separate testing universe.

The shift is already happening

The industry is moving here. The "shift-left" movement pushed quality checks closer to development. Contract testing tools like Pact are code-centric. OpenAPI specs are generated from code. Infrastructure-as-code put operational config in version control. API tests are next.

More teams want file-based, Git-native workflows where API definitions and tests act like code.

Teams that make this shift see fewer regressions, faster onboarding, more confidence during refactors. Not because they test more—tests are just easier to maintain, review, and run.

This is what we're building

API tests should be part of your development workflow, not an afterthought in another tool. Tests in your codebase. Tests in your PRs. Tests in CI. Tests every teammate finds on day one.

If you've opened an API collection full of broken requests, or shipped an endpoint change only to realize nothing validated it, you know this already.

Start using Griffin and bring your API tests back where they belong.