Have you seen AI code review tools? They're just as bad as any other AI product: they have a similar chance of fixing a defect as of introducing a new one.
Sure, but it's an LLM, so this would be a mix of some real defects (but not all of them) and totally fake defects that don't actually need fixing. This is not going to help junior developers tell good from bad.
And by running multiple versions of the system in parallel, you can have them vote on which parts of the code are likely bugs and which aren't.
We've known how to make reliable components out of unreliable ones for a century now. LLMs aren't magic boxes which make all previous engineering obsolete.
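The voting idea above can be sketched as a simple quorum filter over independent review runs. This is a minimal illustration, not any tool's actual API; the finding format `(file, line, message)` and the function name are hypothetical:

```python
from collections import Counter

def majority_vote(findings_per_run, quorum=2):
    # findings_per_run: one set of (file, line, message) tuples per
    # independent reviewer run (hypothetical finding format).
    counts = Counter()
    for findings in findings_per_run:
        counts.update(findings)
    # Keep only findings flagged by at least `quorum` independent runs;
    # one-off hallucinated defects get filtered out.
    return {finding for finding, n in counts.items() if n >= quorum}

runs = [
    {("app.py", 10, "off-by-one"), ("app.py", 42, "unused var")},
    {("app.py", 10, "off-by-one")},
    {("app.py", 10, "off-by-one"), ("db.py", 7, "sql injection")},
]
print(majority_vote(runs, quorum=2))  # only the off-by-one survives
```

The trade-off is the usual one for voting schemes: a higher quorum suppresses more false positives but also drops real defects that only one run happened to catch.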
> this would be a mix of some real defects (but not all of them) and totally fake defects that don't actually need fixing
... and real defects that you never noticed or would've thought of.
> This is not going to help junior developers tell good from bad.
Neither is them inventing fake defects that don't actually need fixing on their own. What helps juniors is feedback from more senior people, as well as reality itself. They'll get that either way (or else your whole process is broken, and that has nothing to do with AI).