Etiquettes for AI Assisted Code Generation
1. Introduction#
Robert “Uncle Bob” Martin popularized the Boy Scout Rule in software development:
“Always leave the code cleaner than you found it.”
Borrowed from the Boy Scout motto about leaving campsites better than you found them, this principle has guided generations of developers to make small, continuous improvements to codebases. Fix a typo, rename a confusing variable, remove dead code—these tiny acts compound over time to prevent the inevitable entropy that plagues long-lived software projects.
I remember having a big aha moment when a senior engineer introduced this concept to me. The Boy Scout Rule worked beautifully to drive a progressive increase in code quality and encourage developers to take ownership of the codebase — even for code they didn’t originally write. You could trace authorship, understand intent, and gradually improve what previous developers (including your past self) had created.
But now we’re in the era of AI-assisted development. Large language models can generate entire functions, refactor complex logic, and even write tests. While this dramatically accelerates development, it introduces new challenges: How do you maintain code quality when you didn’t write significant portions yourself? How do you ensure that AI-generated code integrates well with human-written code? How do you preserve the collaborative spirit of the Boy Scout Rule when one of your “collaborators” is an algorithm?
In this post, I share some etiquettes — practical practices inspired by the Boy Scout Rule — for working with AI coding assistants. The scope is deliberately narrow: this is about code generation specifically, not the broader landscape of AI in software engineering. Think of them as “Uncle Gaurav’s opinionated list”: born from hands-on experimentation, not research, and very much a work in progress.
2. Common Etiquettes#
These are practices that come to mind from my own experience — tinkering with AI agents, experimenting with agentic engineering workflows, and learning what works (and what doesn’t) through hands-on exploration. They are highly opinionated and not based on research data, so take them as a starting point for your own thinking rather than a definitive guide.
2.1 Assume that the code you are writing is meant to be read by humans even though it is supposed to be hybrid#
This is a philosophical question. We write clean, structured code so that it is readable, easily extensible and maintainable. But when AI is writing code, if AI can understand and maintain the code, does it matter if it ends up writing a 200 line function (eww, cringe!)? While there is no easy answer, in my experiments, I have seen that the coding agent does not seem to remember patterns unless project steering rules are written and maintained (see 2.5), so it is safe to assume your code will be read by humans.
2.2 Review and understand the code and changes#
The most obvious one, generating code has become fast. For software engineers, generating lines of code was never really the bottleneck, but given that we are going into a direction where it is getting even faster thanks to AI generated code. It’s important to remember, high speed does not automatically mean high quality. The previous etiquette mentioned, we still are writing code for humans. So, it is important we understand what code is being generated and what could be some alternatives. Use the code assistant itself to spar on alternatives if it helps.
2.3 Review Pull Request (PR) description#
It’s now quite easy to generate PR descriptions. It can become rather descriptive where the key changes and purpose might be missed by a reviewer. The catch is that AI-generated descriptions tend to focus on the what — listing files, functions, and line counts — rather than the why: the context, the decision, the trade-off. That’s the part reviewers actually need, and it’s the part only you can write. AI is great at summarising the what — but the why is yours.
2.4 Add reference in the commit message with “co-authored by ”#
When an AI writes (or significantly shapes) your code, acknowledge it in the commit message. Most git forges (GitHub, GitLab) support the Co-authored-by trailer natively:
feat: add rate limiting to the payments API
Co-authored-by: Claude Code <[email protected]>
Think of it as giving credit where it’s due — except your co-author never asks for a raise or complains about yearly performance reviews. Beyond common courtesy, it creates a transparent audit trail. Teams, auditors, and your future self can see at a glance where AI was involved. This is increasingly relevant as organisations start requiring AI disclosure for compliance and legal reasons.
2.5 Invest in writing “project steering rules”#
The quality of AI-generated code is only as good as the rules you give the coding assistant to work with within the repository. Project steering rules—files like CLAUDE.md, .cursorrules, or markdown files in .kiro/steering/—tell your coding assistant about the architecture, conventions, and practices that the team has deliberately designed into the repository. Without them, the assistant has no choice but to guess, and it will: inconsistently, and often wrong.
Here’s the catch: most codebases in existence were designed before AI coding assistants were a thing. They have no steering rules. That means the moment you bring an AI into one of these repositories, you’re working with an assistant that has no context about why the code is structured the way it is.
The etiquette here is straightforward: if you modify a part of the codebase that has no steering rules covering it, write them. Document the patterns you’re following, the architectural decisions that shape that area, the conventions the team cares about. You don’t need to do it all at once—just the section you touched.
The payoff compounds. Better steering rules mean more consistent AI assistance in future changes. They also force you to articulate your patterns explicitly, which is a natural checkpoint for a code review: is this actually the pattern we want to encode? And once documented, similar changes in the future become far more predictable—the AI and your teammates know what to expect. In that sense, it’s not unlike the documentation you’d write to onboard a new team member: the act of writing it clarifies your own thinking, and the artefact outlives the moment.
This etiquette is, in spirit, the same as the one this post opened with: leave the codebase in a better state than you found it. Project steering rules are the new form of that cleanliness.
2.6 Don’t gamify with tests#
AI coding assistants have a well-known habit of taking the path of least resistance when a test fails: instead of fixing the underlying code, they quietly rewrite the test to pass. Technically the build is green. Congratulations, you got duped!
Good testing discipline doesn’t change just because AI is involved. Review generated tests with the same rigour as generated code. Make sure they’re actually asserting the right things, not just asserting whatever makes the suite pass. Testing is not a chore to be delegated and forgotten—it’s the part of the work that tells you whether everything else holds up.
2.7 Use AI for initial code review feedback, but keep humans in the loop#
Tools like GitHub Copilot and other AI agents can now review pull requests directly. This is genuinely useful — but it can also put you squarely on the peak of the Dunning-Kruger curve. The AI approval gives you just enough signal to feel confident, without the depth to reveal what it missed. You don’t know what the AI didn’t evaluate, so no alarm fires.
Here’s the underlying problem: getting timely human feedback on PRs is hard. Reviewers are busy, and PRs can sit for days. When feedback finally arrives, it tends to split into two kinds. The first is pattern and style feedback — you named something inconsistently, or used a pattern the team has moved away from. The second is deeper: a reviewer anticipates a performance issue under load, spots a subtle bug in an edge case, or flags a design decision that will cause pain later. The first kind is repetitive and learnable. The second requires genuine context and experience.
AI agents are well-suited to the first kind. They can catch style drift, flag obvious issues, and surface things a linter might miss — all within seconds of opening a PR. That fast feedback loop is worth using. But for the second kind, there’s no substitute for a human who understands the system, the team’s intent, and what’s been tried before.
A practical way to think about the review pipeline:
- Linter — catches syntax, formatting, and static analysis issues automatically. Should be a hard gate; no human time should be spent on things a linter can catch.
- AI agent — reviews patterns, consistency, and common issues. Best used to speed up the first round of feedback and reduce noise before human reviewers are involved.
- Human reviewer — focuses on business logic, architectural fit, performance implications, and anything that requires knowing the wider system. This is where the high-value feedback lives and where human judgement is irreplaceable.
2.8 Be cautious about addition of new libraries and frameworks#
AI agents have a tendency to reach for new libraries and dependencies to solve problems. When that happens, do your due diligence: check if the library actually fits your use case, whether it’s a sound technical choice for the project, and what the alternatives are. Don’t try to rationalise and overindex the AI’s choice after the introduction. There rarely is just one choice.
3. Conclusion#
The landscape of agentic engineering is still quite volatile, and the more experience I gain using it, I am certain “Uncle Gaurav’s” list would change. New tools will emerge, old practices will be retired, and some of these etiquettes may look obvious in hindsight.
But the thread running through all of them stays the same: AI assistance doesn’t remove the need for human judgement — it just changes where that judgement needs to be applied. The Boy Scout Rule still holds. Leave the codebase, the tests, the reviews, and the documentation and architecture in a better state than you found them. The tools may be new, but the responsibility hasn’t gone anywhere.