I'd love to write up a post with some of the unbelievable, similar mishandlings of academic tasks that have been reported to me; I do have a number of prize-worthy anecdotes that rival yours. Nonetheless, let us fight farce with rigour.
Even when the tasks are shallower and easier to assess, you still need a /reliable evaluator/, and LLMs are not one. Could they at least be employed as a virtual assistant, in a "parse and suggest, then I'll check" mode? If so, not with a randomly picked bot, but in full awareness of the specific instrument, its strengths and its failure modes. We are not at that stage yet.
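To make the "parse and suggest, then I'll check" mode concrete, here is a minimal Python sketch of such a human-in-the-loop workflow. The llm_suggest() helper is hypothetical, a stand-in for whatever specific model one would call; the point is only that suggestions are queued for explicit human approval and never applied automatically.

    import dataclasses

    @dataclasses.dataclass
    class Suggestion:
        span: str        # the text the model flagged
        proposal: str    # the model's suggested replacement
        rationale: str   # the model's stated reason

    def llm_suggest(text: str) -> list[Suggestion]:
        """Hypothetical stand-in for a call to a specific, well-understood
        model. A real version would parse the model's output into structured
        suggestions; here it returns a canned example."""
        return [Suggestion("recieve", "receive", "common misspelling")]

    def review(text: str) -> str:
        """Apply only the suggestions a human explicitly accepts."""
        for s in llm_suggest(text):
            answer = input(f"Replace {s.span!r} with {s.proposal!r} "
                           f"({s.rationale})? [y/N] ")
            if answer.strip().lower() == "y":
                text = text.replace(s.span, s.proposal)
        return text

    if __name__ == "__main__":
        print(review("We recieve many submissions."))

The design choice, under these assumptions, is that the model never edits anything directly: it only proposes, and the human remains the evaluator of record.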