

The article explains the problems in great detail.
Here’s just one short passage from the text that describes some of them:
All of this certainly makes knowledge and literature more accessible, but it relies entirely on the people who create that knowledge and literature in the first place—labor that takes time, expertise, and often money. Worse, generative-AI chatbots are presented as oracles that have “learned” from their training data and often don’t cite sources (or cite imaginary sources). This decontextualizes knowledge, prevents humans from collaborating, and makes it harder for writers and researchers to build a reputation and engage in healthy intellectual debate. Generative-AI companies say that their chatbots will themselves make scientific advancements, but those claims are purely hypothetical.
(I originally put this as a top-level comment, my bad.)
That’s an interesting article, but it was published in 2022, before LLMs were on most people’s radar. The results are still incredibly impressive, no doubt, but based on how the researchers describe it, the work was done with deep learning, which isn’t the same thing as an LLM, though the two aren’t entirely unrelated.
Opaque and confusing terminology in this space also makes it very difficult to determine which people, systems, or technologies are actually making these advancements. As far as I’m concerned, none of this is actual AI, just very powerful algorithmic prediction models. So when companies claim that an AI system has itself made unique technological advancements, even though these systems are incapable of independent creativity, that only confirms to me that nearly all of their touted benefits are still entirely hypothetical right now.