Smell #13: Circular Testing

Severity: High

Circular Testing: A high-risk pattern where an AI assistant generates both the implementation code and the unit tests for that code in the same session.

Why It's Dangerous: The Assumption Loop

AI models are optimized for internal consistency. If an AI makes a flawed assumption in the code (e.g., "input will always be a valid date"), it will carry the same flawed assumption into the tests it generates.

The Result: The tests pass, but they only validate that the code matches the AI's (incorrect) mental model. They fail to catch real-world edge cases, security vulnerabilities, or logical errors.
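
A minimal sketch of that loop, using the date assumption above (function and test names are illustrative; Jest-style syntax assumed):

```javascript
// The AI assumes the input is always a valid ISO date string...
function daysUntil(dateString) {
  const target = new Date(dateString); // no validation at all
  return Math.ceil((target - Date.now()) / 86_400_000);
}

// ...and bakes the same assumption into the test it generates.
test('daysUntil works', () => {
  expect(daysUntil('2099-01-01')).toBeGreaterThan(0); // only ever fed a valid date
});

// daysUntil('not a date') quietly returns NaN -- and no generated test will notice.
```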

Symptoms

  • [ ] You have "100% Test Coverage" but still experience frequent bugs in production.
  • [ ] Your tests only cover the "Happy Path" (valid inputs).
  • [ ] You merged code and tests without reading the test logic yourself.
  • [ ] The tests look like "Mirror Code": they just repeat the logic of the implementation.

Example

AI-Generated Code: `function parse(val) { return JSON.parse(val); }` (no error handling)

AI-Generated Test: `test('parse works', () => { expect(parse('{"a":1}')).toEqual({a:1}); });`

What's missing: A test for invalid JSON. The tests pass and the "coverage" looks great, but the system crashes the first time production sends malformed input.
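
A sketch of what the missing test could look like (Jest assumed; treating a crash as unacceptable is itself an assumption about the caller's requirements):

```javascript
// The negative test nobody generated. Against the AI's implementation it fails,
// because JSON.parse throws a SyntaxError on malformed input -- the same error
// that crashes production when this test is missing.
test('parse handles invalid JSON without crashing', () => {
  expect(() => parse('not json')).not.toThrow();
});
```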

Debt Impact

This smell leads to False Confidence:

| Debt Category | Impact |
|---------------|--------|
| 🔒 SEC | Security edge cases are never tested. |
| 🧠 KNOW | No human verified the "Acceptance Criteria" of the module. |

How to Fix

  1. Negative Testing: Manually add tests for invalid inputs, null values, and edge cases (see the sketch after this list).
  2. Expert Review: Treat AI-generated tests as "Drafts" that must be approved by a human.
  3. Spec-First Testing: Provide the tests or acceptance criteria in your prompt and ask the AI to write code that passes them (TDD approach).
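
A minimal sketch of steps 1 and 2 applied to the `parse()` example, assuming the human-approved acceptance criterion is "invalid input returns null instead of throwing" (that decision is exactly what the AI cannot make for you):

```javascript
// Implementation updated to match the human-chosen contract.
function parse(val) {
  if (typeof val !== 'string') return null;
  try {
    return JSON.parse(val);
  } catch {
    return null; // malformed JSON no longer crashes the caller
  }
}

// Negative tests added (and reviewed) by a human, not mirrored from the code.
test.each([
  ['not json'],
  [''],
  ['{"a":'], // truncated object
  [null],
  [undefined],
])('parse(%p) returns null instead of throwing', (input) => {
  expect(parse(input)).toBeNull();
});

// The original happy path still holds.
test('parse still handles valid JSON', () => {
  expect(parse('{"a":1}')).toEqual({ a: 1 });
});
```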

How to Prevent

  • Separate the Roles: Ask AI #1 for the implementation and AI #2 (in a separate session) for the tests.
  • The "Edge Case" Prompt: Explicitly ask for "5 negative test cases" for every implementation.
  • Human-Authored Tests: Write the critical business logic tests yourself (see the sketch after this list).
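
As an illustration of the last point, a human-authored, spec-first test file for a hypothetical business rule (the rule and `isRefundable` are invented for this example); the AI is then asked to write an implementation that makes these tests pass:

```javascript
// Business rule written by a human BEFORE any implementation exists:
// refunds are allowed within 30 days, but never for gift-card purchases.
describe('refund policy (human-authored acceptance tests)', () => {
  test('allows a refund within 30 days', () => {
    expect(isRefundable({ daysSincePurchase: 10, paymentMethod: 'card' })).toBe(true);
  });

  test('rejects a refund after 30 days', () => {
    expect(isRefundable({ daysSincePurchase: 31, paymentMethod: 'card' })).toBe(false);
  });

  test('rejects gift-card refunds, even on day one', () => {
    expect(isRefundable({ daysSincePurchase: 1, paymentMethod: 'gift-card' })).toBe(false);
  });
});
// Prompting the AI with these tests first inverts the loop:
// the assumptions now come from a human, and the AI has to satisfy them.
```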

Related Smells

Book Reference

  • Chapter 1: How circular tests create initial synthetic debt.
  • Chapter 7: Multi-Agent Orchestration, on using a "Critic Agent" to break the circular loop.

Validate your AI; don't just test its assumptions.