

Of course, and that’s why they need an anti-bullshit step that doesn’t currently exist. I still believe it’s possible to rein LLMs in, by maximizing their strengths and minimizing their weaknesses.
Well, that thinking flow can be automated, as far as we have seen. The Chain-of-Thought and Atom of Thoughts paradigms have been very successful and don’t require human intervention to produce improved results.
Yeah, like, have you ever met one of those crazy guys who think the pyramids were literally built by aliens? Humans can get caught in a confidently wrong state as well.
Transformer-based LLMs are pretty much at their final form, from a training perspective. But there’s still a lot of juice left to squeeze out of them through more sophisticated usage, for example the recent “Atom of Thoughts” paper. Simply by directing LLMs through the correct flow, you can get much stronger results from much weaker models.
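To make “directing the flow” concrete, here is a minimal sketch of a decomposition-style pipeline: break a question into smaller sub-questions, answer each in order, and feed earlier answers forward as context. This is only a toy illustration of the general idea, not the actual Atom of Thoughts algorithm; `atom_flow`, `toy_decompose`, and `toy_solve` are hypothetical names, and the stubs stand in for real LLM calls so the sketch runs on its own.

```python
from typing import Callable, List

# Hypothetical type for an LLM call: prompt in, text out.
LLM = Callable[[str], str]

def atom_flow(question: str, decompose: LLM, solve: LLM) -> str:
    """Break the question into 'atomic' sub-questions, solve each one,
    and carry the partial answers forward as context for the next step."""
    atoms: List[str] = [a.strip() for a in decompose(question).split(";") if a.strip()]
    context = ""
    answer = ""
    for atom in atoms:
        answer = solve(f"{context}\nQ: {atom}".strip())
        context += f"\nQ: {atom}\nA: {answer}"
    return answer  # the final atom's answer, built on the earlier ones

# Toy stand-ins so the sketch runs without a model:
def toy_decompose(q: str) -> str:
    return "How many hours are in 3 days?; How many minutes is that?"

def toy_solve(prompt: str) -> str:
    if "minutes" in prompt and "A: 72" in prompt:
        return "4320"
    if "hours" in prompt:
        return "72"
    return "unknown"

print(atom_flow("How many minutes are in 3 days?", toy_decompose, toy_solve))  # prints 4320
```

The point of the structure is that each sub-question is small enough for a weak model to answer reliably, which is why a directed flow can beat a single monolithic prompt.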
How long until someone makes a flow that can reasonably check itself for errors/hallucinations? There’s no fundamental reason why it couldn’t.
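A self-checking flow could be as simple as a generate–verify–retry loop: one pass drafts an answer, a second pass critiques it, and the critique is fed back into the next attempt. This is a hedged sketch of that shape, not any published method; `self_checking_flow`, `toy_generate`, and `toy_verify` are hypothetical names, with stubs in place of real LLM calls so the loop is runnable.

```python
from typing import Callable

def self_checking_flow(question: str,
                       generate: Callable[[str], str],
                       verify: Callable[[str, str], str],
                       max_rounds: int = 3) -> str:
    """Draft an answer, ask a second pass to look for errors, and retry
    with the critique as feedback until the checker is satisfied."""
    feedback = ""
    answer = ""
    for _ in range(max_rounds):
        answer = generate(question + feedback)
        critique = verify(question, answer)
        if critique == "OK":
            return answer
        feedback = f"\nPrevious attempt was rejected: {critique}"
    return answer  # best effort after max_rounds

# Toy stubs: the first draft is wrong, the retry corrects it.
_attempts = {"n": 0}

def toy_generate(prompt: str) -> str:
    _attempts["n"] += 1
    return "5" if _attempts["n"] == 1 else "4"

def toy_verify(question: str, answer: str) -> str:
    return "OK" if answer == "4" else "2 + 2 is not 5"

print(self_checking_flow("What is 2 + 2?", toy_generate, toy_verify))  # prints 4
```

The open question from the thread is whether the verifier can be made reliable enough in practice; the loop itself is trivial to wire up.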
I’m convinced that it won’t help incompetent people magically become competent. I’m also convinced that some people will use it to speed up their work.
You probably don’t need an LLM to assist you, but the average human is terribly incompetent. I work in higher education and even people with PhDs often fail to type up a coherent email.
I suspect the fediverse is stickier than mainstream social media, since there’s no incentive to be anti-consumer, and anyone can make their own frontend.
These companies keep making unforced errors; it’s only a matter of time until people join us.
Elon has always been a terrible person, but he was once focused on things that society actually needed, like electrifying transportation to avoid climate collapse.
He seems to have gone sharply downhill into total insanity by taking ketamine while locking himself in a right-wing echo chamber. It’s the perfect storm for dissociating from reality.
I’m not sure humans can do it in a complete form. But I believe it is possible to approach human levels of confidence with AI.