

Yeah, generating test classes with AI is super fast. Just ask it, and within seconds it spits out full test classes with some test data; the tests are plentiful, verbose, and always green. Perfect for KPIs and for looking cool. Hey, look at me, I generated tests with 100% coverage!
Do these tests reflect reality? Is the test data plausible in context? Are the tests easy to maintain? Who cares, that’s all the next guy’s problem, because by the time it blows up the original programmer will likely have moved on already.
Good tests are part of the documentation. They show how a class/method/flow is used. They use realistic test data that shows what kind of data you can expect in real-world usage. They anticipate problems caused by future refactorings and allow future programmers to reliably verify their code after a refactoring.
At the same time, they need to be concise enough that modifying them for future changes is simple and doesn’t take longer than implementing the change itself. Tests are code, so the metric of “lines of code are a cost factor, so fewer lines is better” applies here as well. It’s folly to believe that more lines of test code are better.
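To make the point concrete, here is a rough, hypothetical sketch of what I mean by a concise test with realistic data (the function and names are made up for illustration, written pytest-style):

    # Hypothetical example: a small invoice-total function and one focused test.
    from dataclasses import dataclass

    @dataclass
    class LineItem:
        description: str
        quantity: int
        unit_price_cents: int

    def calculate_invoice_total(items: list[LineItem]) -> int:
        """Return the invoice total in cents."""
        return sum(item.quantity * item.unit_price_cents for item in items)

    def test_invoice_total_with_realistic_line_items():
        # Realistic data: items and prices a reader could plausibly see in production.
        items = [
            LineItem("Standing desk", quantity=1, unit_price_cents=49_900),
            LineItem("USB-C cable", quantity=3, unit_price_cents=1_250),
        ]
        # One clear assertion documents the expected behaviour.
        assert calculate_invoice_total(items) == 53_650

A handful of tests like that tell the next programmer more than a hundred auto-generated ones asserting that mocks were called with placeholder strings.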
So if your goal is to fulfil KPIs and you really don’t care whether the tests make any sense at all, then AI is great. Same goes for documentation. If you just want to fulfil the “everything needs to be documented” KPI and you really don’t care about the quality of the documentation, go ahead and use AI.
Just know that what you are creating is a low-quality cost factor and technical debt. Don’t be proud of creating shitty work that someone else will have to suffer through in the future.

That’s a pretty strong case of whataboutism. Nobody said that anything was fine and dandy in China, only that they planned to build high-speed rail and they did it, while the US repeatedly fails at the same thing.