Some opinionated etiquette for AI-assisted code generation, inspired by the Boy Scout Rule. Covers writing human-readable code in hybrid codebases, reviewing AI-generated code and PR descriptions, acknowledging AI in commit messages, maintaining project steering rules, avoiding test manipulation, structuring code review feedback across linters, AI agents, and humans, and evaluating AI-introduced dependencies. Based on hands-on experimentation with agentic engineering workflows, not research data.
Covers the limitations of vibe coding with Claude Code for production-grade work — context window constraints, ad-hoc design, and prompt-as-context friction — and how spec-driven development with Kiro addresses them. Walks through Kiro's three-document workflow (requirements, design, tasks) using a real Cloudflare R2 migration as a worked example. Discusses repository structure, where Kiro excels, and common pitfalls to avoid.
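As a rough illustration of the three-document workflow mentioned above: Kiro keeps each feature's spec as three markdown files checked into the repository. The exact feature name shown here is a hypothetical example, and comments describe the typical role of each file rather than a guaranteed layout:

```
.kiro/
  specs/
    r2-migration/        # hypothetical feature-name directory
      requirements.md    # user stories with acceptance criteria
      design.md          # architecture and technical approach
      tasks.md           # ordered, checkable implementation steps
```

Because the specs live in version control next to the code, they can be reviewed, diffed, and evolved like any other project artifact.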