@[email protected] that’s a nice image, philosophers masturbating 😄😄😄…
But seriously, I’m amazed at how LLMs respond to my questions.
The trick behind it, and it is a trick, is that they have been fed billions of pages of text, containing the majority of every sentence ever written, and they use math to estimate the most appropriate word-by-word response to a question from all of the other examples of text they have to work with.
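To make the "word-by-word" point concrete, here is a toy sketch of that prediction loop. It is not how real LLMs are built (they use neural networks over subword tokens, not word counts), and the corpus and prompt are made up for illustration, but the loop has the same shape: given the text so far, pick the most likely next word, append it, and repeat.

```python
# Toy sketch of word-by-word prediction (NOT a real LLM): count which word
# tends to follow which in some example text, then extend a prompt by
# repeatedly emitting the most frequent follower.
from collections import Counter, defaultdict

# Made-up miniature "training" text, just for illustration.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Bigram table: for each word, count the words that followed it.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(prompt: str, max_words: int = 10) -> str:
    """Extend the prompt one word at a time, always taking the most frequent follower."""
    words = prompt.split()
    for _ in range(max_words):
        followers = next_word_counts.get(words[-1])
        if not followers:      # never saw this word: nothing to predict
            break
        best, _count = followers.most_common(1)[0]
        words.append(best)
        if best == ".":        # treat "." as the end of the answer
            break
    return " ".join(words)

# Prints the prompt extended word by word from the counted examples.
print(generate("the cat"))
```

Everything the sketch "says" is recombined from the example text it was fed, which is the parrot point made below, just at a vastly smaller scale.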
Current LLMs are incapable of creating an original combination of words (in the absolute sense). They don’t make anything. They just repeat. They are stochastic parrots.
Sometimes the answer is so obvious, assuming you have all of the relevant information, that you can give the right answer without thinking at all. And when LLMs are correct, it is because of this phenomenon, not because they actually thought about the question and came up with a response.