It's probably impossible to detect ALL languages without training on them specifically, but we do see good generalization. Ours is a single unified model rather than a separate model per language. We started out with language-specific models but found that the unified approach yielded slightly better results in addition to being more efficient to train.
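To give a sense of what "unified" means in practice, here's a minimal sketch assuming a transformer encoder fine-tuned as a binary classifier over raw code text; the base model, label scheme, and snippets are illustrative, not our actual stack, and the classification head below is freshly initialized, so it would need fine-tuning on labeled data before its outputs mean anything:

```python
# Illustrative sketch only: one classifier over raw code text, no
# per-language routing. Base model and labels are assumptions, not
# the production setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/codebert-base",
    num_labels=2,  # hypothetical labels: 0 = human-written, 1 = AI-generated
)
# NOTE: this head is randomly initialized; fine-tune on labeled
# snippets before trusting any probabilities it emits.

snippets = [
    "def add(a, b):\n    return a + b",          # Python
    "function add(a, b) { return a + b; }",      # JavaScript
    "fn add(a: i32, b: i32) -> i32 { a + b }",   # Rust
]

# The same model handles every language in one batch.
inputs = tokenizer(snippets, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
for code, p in zip(snippets, probs):
    print(f"p(AI-generated) = {p[1]:.2f} | {code.splitlines()[0]}")
```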
Candidly, it's still a bit of a black box. We hope to run some ablation studies soon, but we did try to represent a variety of formatting and commenting styles in both training and evaluation.
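To make the style point concrete, here's a hypothetical sketch of that kind of diversification: generating variants of a snippet that differ only in formatting and comments, so a classifier can't key on surface style alone. The helpers are illustrative and deliberately naive (e.g. the comment stripper ignores string literals), not our actual pipeline:

```python
# Hypothetical style-diversification helpers, not our real pipeline:
# produce formatting/commenting variants of each training snippet.
import re

def strip_comments(src: str) -> str:
    """Naively remove Python-style line comments as one style variant."""
    return "\n".join(re.sub(r"\s*#.*$", "", line) for line in src.splitlines())

def reindent(src: str, width: int) -> str:
    """Re-indent with a different width as another style variant."""
    out = []
    for line in src.splitlines():
        stripped = line.lstrip(" ")
        level = (len(line) - len(stripped)) // 4  # assumes 4-space input
        out.append(" " * (width * level) + stripped)
    return "\n".join(out)

sample = "def add(a, b):\n    # add two numbers\n    return a + b"
for variant in (sample, strip_comments(sample), reindent(sample, 2)):
    print(repr(variant))
```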
I'd argue that knowing when AI-generated code ships into production is the first step to understanding the impact of AI coding assistants on velocity and quality. Paired with additional context, it can help leaders understand how to improve proficiency with these tools.