Reviewing AI-generated Code
AI's involvement decreases trust and makes reviews more intensive.
Source determines trust
Where code comes from establishes my starting level of trust when I review it. If it's from John Carmack, I'm already fawning over it. If it's generated by an AI, I'm already more skeptical of it. Code contributed by an unknown developer would be similar. But I'd still trust an unknown-to-me human more than an AI, while being wary of both.
Just a review
If I'm confident that the developer understands the code, my review can rely on that; I can lean on that developer's wisdom and track record. It's faster and more straightforward that way. A review is just a review.
AI-generated code review
Let's modify the scenario. That same developer, in whom I'm generally confident because of his wisdom and track record, decides to start using AI to generate code.
This may be for things he does or doesn't understand. As a reviewer, I don't even know if he's looked closely at the code. It's generated. You don't have to look at or think about output you didn't type. Cursor, prompt, alt-enter, a glance, easy street, commit.
Because it's generated and comes from an untrusted source, the review needs to be more than a review.
It's as if I, as the reviewer, have to start from the problem statement and reason through to a solution myself. The solution has to make contact with a human's brain. I may have to reason about the problem, a design, an implementation, proper tests. This is because all I can presume to have happened is the typing of a prompt into a chat box and the output of code from an LLM. I can't just push that to prod. "LGTM" on a review with a fellow developer with whom you have rapport is to be forgiven. (LGTM is not the best review, but you can feel the difference in the level of worry/scrutiny between these scenarios.) The dev you're working with has shown that he's good to go.
So all this work from design to implementation and tests needs to be done, at least cognitively, in order to vet the code before the review proper. That makes this more than a review. This is me gaining what I have to assume is original understanding of the code. Now I know that at least one of us fully understands the problem and an implementation. And now, with that understanding, I can get on with the usual review: I match my understanding against the submitted code, finally able to judge its appropriateness.
One could argue that you should do this every time you do a review. But then why have another developer on your team who is supposedly coding?
Potential helps
Write your own code.
Understand your own code.
If you use code from another source, give it a citation. For copy-pasted code, affix a URL to the source. Make notes on modifications. If you were assisted by a fellow developer, add them to the commit as a co-author or comment the fact. If you generated the code, comment it as generated.
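A sketch of what that can look like in practice (the URL, names, and email below are invented for illustration):

    # Adapted from https://stackoverflow.com/a/123456 (hypothetical answer link);
    # modified here to yield tuples instead of lists.
    def chunks(seq, size):
        for i in range(0, len(seq), size):
            yield tuple(seq[i:i + size])

    # Generated with an AI assistant, then reviewed and edited by hand.

For a helping developer, the standard Git commit trailer works:

    Co-authored-by: Jane Doe <jane.doe@example.com>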
If there's a section that you're less confident in, say so in your review request.
Source citations in the code help focus the reviewer: they mark the specific portions that warrant more (or less) time and scrutiny, based on where that code came from, which helps mitigate its risk.