• 0 Posts
  • 5 Comments
Joined 1 month ago
Cake day: February 5th, 2025

  • I work in an extremely related field and spend my days embedded in ML/AI projects. I’ve seen teams make some cool stuff and I’ve seen teams make crapware with “AI” slapped on top. I guarantee you that you are wrong.

    What if our brains…

    Here’s the thing: you can go look this information up. You don’t have to guess. This information is readily available to you.

    LLMs work by agreeing with you and stringing together coherent text in patterns they recognize from huge samples (a toy sketch of that idea follows at the end of this comment). It’s not particularly impressive and is far, far closer to the initial chat bots from last century than to real AGI or some sort of singularity. The limits we’re at now are physical. Look up how much electricity and water it takes just to do trivial queries. Progress has plateaued, as it frequently does with tech like this. That’s okay, it’s still a neat development. The only big takeaway from LLMs is that agreeing with people makes them think you’re smart.

    In fact, LLMs are a glorified Google at higher levels of engineering. When most of the stuff you need to do doesn’t have a million Stack Overflow articles to train on, it’s going to be difficult to get an LLM to contribute in any significant way. I’d go so far as to say it hasn’t introduced any tool I didn’t already have. It’s just mildly more convenient than some of them while the costs are low.
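
    To make the “stringing words together from observed patterns” point concrete, here’s a toy bigram generator in Python. It is a deliberately crude illustration, not how a real transformer works (those are neural networks trained on enormous corpora, not lookup tables), and the tiny corpus and every name in it are made up for this example.

        import random
        from collections import defaultdict

        # Toy "language model": learn which word tends to follow which in a
        # sample text, then keep appending a plausible next word. Real LLMs
        # are far more sophisticated, but the output is still pattern
        # completion, not understanding. (Illustrative example only.)
        corpus = (
            "the model strings words together the model agrees with you "
            "the model sounds confident the model sounds smart"
        ).split()

        # Record the observed next-word choices for every word in the sample.
        follows = defaultdict(list)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev].append(nxt)

        def generate(start, length=8):
            """Repeatedly append a randomly chosen observed next word."""
            out = [start]
            for _ in range(length):
                candidates = follows.get(out[-1])
                if not candidates:
                    break
                out.append(random.choice(candidates))
            return " ".join(out)

        print(generate("the"))
        # e.g. "the model agrees with you the model sounds confident"

    The output looks grammatical because the training text was, which is the whole trick: the coherence comes from patterns in the sample, not from the generator knowing anything.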



  • If you trained it on all of that, it wouldn’t be a good builder. Actual builders would tell you it’s bad and you would ignore them.

    LLMs do not give you accurate results. They simply string words together into coherent sentences, and that’s the extent of their capacity. They just agree with whatever the prompter is pushing, and that makes simple people think they’re smart.

    AI will not be building you a house unless you count a 3D printed house, and we both know that’s overly pedantic. If that were the case, a music box from 1780 would be an AI.