AI coding assistants are powerful but imperfect. Understanding their common failure modes helps you write better agent rules and catch issues faster.
Common Pitfalls
1. Hallucinated APIs
The AI may suggest functions, methods, or options that don't exist in the version you're using.
Example: Suggesting useFormStatus() from react-dom in a React 18 project (it was added in React 19).
Prevention rule: "We use React 18.2 — do not use React 19 features like useFormStatus, useOptimistic, or the use hook."
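Version rules like this drift out of date as dependencies get upgraded. One way to keep them accurate is to derive the version line from package.json instead of hand-maintaining it. A minimal sketch (the `versionRule` helper and the rule wording are invented for illustration, not part of any tool):

```typescript
// Sketch: build a version-pinning rule line from a package.json string.
// The helper name and wording are illustrative, not part of any tool.
function versionRule(pkgJson: string): string {
  const pkg = JSON.parse(pkgJson) as {
    dependencies?: Record<string, string>;
  };
  const react = pkg.dependencies?.react ?? "unknown";
  return `We use React ${react}; do not use APIs from newer releases.`;
}

// Usage: versionRule(readFileSync("package.json", "utf8"))
```

Running this as part of a script that regenerates your rules file means the pinned version can never silently disagree with what is actually installed.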
2. Outdated Patterns
The AI may use patterns from older versions of libraries because the training data contains more examples of the old way.
Example: Using getServerSideProps in a Next.js 15 App Router project.
Prevention rule: "We use Next.js 15 App Router exclusively. Never use Pages Router patterns like getServerSideProps, getStaticProps, or _app.tsx."
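For contrast, here is a sketch of the App Router equivalent: data fetching happens directly in an async Server Component, with no `getServerSideProps`. The endpoint URL and `User` shape are invented for illustration:

```tsx
// app/users/page.tsx — App Router sketch; the endpoint and User shape are hypothetical.
type User = { id: number; name: string };

// Server Components can be async and fetch directly.
// { cache: "no-store" } opts out of caching, roughly matching
// the per-request behavior of getServerSideProps.
export default async function UsersPage() {
  const res = await fetch("https://api.example.com/users", { cache: "no-store" });
  const users: User[] = await res.json();
  return (
    <ul>
      {users.map((u) => (
        <li key={u.id}>{u.name}</li>
      ))}
    </ul>
  );
}
```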
3. Wrong Import Paths
The AI may guess import paths that don't match your project structure.
Prevention rule: "Import UI components from @/components/ui/. Import utilities from @/lib/. Import server actions from @/server/actions/. The @/ alias points to the src/ directory."
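Rules like this work best when they mirror the actual alias configuration, so the AI and the compiler agree. A typical tsconfig.json fragment for the `@/` alias described above (the exact layout is an assumption about this example project):

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/*": ["./src/*"]
    }
  }
}
```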
4. Missing Error Handling
AI-generated code often takes the happy path and skips error handling.
Prevention rule: "Always wrap external API calls in try/catch. Use the AppError class from @/lib/errors for typed error handling. Never use empty catch blocks."
5. Security Oversights
AI code may skip input validation, use dangerouslySetInnerHTML, or expose sensitive data.
Prevention rule: "Always validate user input with Zod schemas. Never use dangerouslySetInnerHTML. Never include API keys or secrets in client-side code."
Writing Rules That Enforce Tests
Add explicit testing requirements to your agent rules:
```markdown
## Testing Requirements

- Write unit tests for all utility functions using Vitest
- Use React Testing Library for component tests
- Follow the Arrange-Act-Assert pattern
- Mock external services with `vi.mock()`
- Test error states, not just happy paths
- Minimum test structure: describe block + at least 2 test cases
```
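A test file satisfying these rules might look like the following sketch. The `fetchUser` function, the `apiGet` dependency, and both module paths are invented for illustration:

```typescript
// user.test.ts — sketch only; fetchUser, apiGet, and the paths are hypothetical.
import { describe, it, expect, vi } from "vitest";
import { fetchUser } from "@/lib/user";
import { apiGet } from "@/lib/api";

// Mock the external service module.
vi.mock("@/lib/api", () => ({ apiGet: vi.fn() }));

describe("fetchUser", () => {
  it("returns the user on success", async () => {
    // Arrange
    vi.mocked(apiGet).mockResolvedValue({ id: 1, name: "Ada" });
    // Act
    const user = await fetchUser(1);
    // Assert
    expect(user.name).toBe("Ada");
  });

  it("surfaces errors instead of swallowing them", async () => {
    // Arrange: the mocked service fails
    vi.mocked(apiGet).mockRejectedValue(new Error("network down"));
    // Act + Assert: the error propagates rather than being caught silently
    await expect(fetchUser(1)).rejects.toThrow("network down");
  });
});
```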
Review Checklist for AI-Generated Code
Before accepting AI-generated code, check:
- Imports exist: All imported modules/functions actually exist in the project
- Types are correct: No `any` types, interfaces match actual data shapes
- API compatibility: Methods and options match your library versions
- Error handling: Edge cases and failures are handled
- Security: No XSS vectors, SQL injection, or exposed secrets
- Tests included: Generated code has corresponding tests
- Conventions match: Follows your project's naming, structure, and patterns
Debugging Strategy
When AI-generated code doesn't work:
- Check the imports first — this is the most common failure point
- Verify API signatures — check the docs for your specific library version
- Look for type mismatches — the AI may use a slightly wrong type
- Test incrementally — don't generate and run a whole feature at once
- Update your rules — every bug you find reveals a missing rule
Rule of Thumb
If you correct the AI for the same mistake twice, add a rule for it. Your rules file should be the accumulated wisdom of every AI interaction your team has had.