It just came to my mind… An open-ended question… Can computers really think ???
I’m starting to think a filter for three question marks in a post or title might be a good idea
Depends on the scale of measurement. If what some of the users I know are doing does in fact count as “thinking”, a brick can think, too.
There’s a whole school of philosophy that has argued about this for… well, forever, but especially over the last 100 years: the philosophy of mind. The problem is one of definition: what does it mean to think? Some argue that it requires consciousness, but then the definitional problem becomes: what the hell is consciousness?
So on the trivial side, yes, of course computers can think, if thoughts are nothing special. Computers have states, and they can react to and inspect their own states. Is that thinking? LLMs use neural networks, loosely modeled after the brain, to generate streams of words, and encode knowledge and concepts using statistics. Is that thinking?
On the other side, well, no, computers don’t think because they don’t have souls. Are souls real? Or maybe there’s more to human thinking than just neural networks, like quantum effects? Or more complexity due to chemical biology? Is the ability to answer a question the same thing as understanding a concept (see the Chinese room thought experiment)?
These are the questions that philosophers love to masturbate with, publish many papers on, and make no real progress on. Definitions are funny like that
@[email protected] that’s a nice image, philosophers masturbating 😄😄😄…
But seriously, I’m amazed at how LLMs respond to my questions.
The trick behind it, and it is a trick, is that they have been fed billions of pages of text, containing a majority of the sentences ever written, and they use math to estimate the most probable word-by-word response to the question from all of the other examples of text that they have to work with.
Current LLMs are incapable of creating an original combination of words (in the absolute sense). They don’t make anything. They just repeat. They are stochastic parrots.
Sometimes the answer is so obvious, assuming that you have all of the relevant information, that you can provide the right answer without thinking at all. And when LLMs are correct, it is because of this phenomenon, not because they actually thought about the question and came up with a response.
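The word-by-word estimation described above can be sketched with a toy bigram model: count which word tends to follow which, then always emit the most frequent continuation. Real LLMs use neural networks over vast corpora rather than raw counts, but the flavor is the same. The corpus and function names here are purely illustrative, not any real library’s API.

```python
from collections import defaultdict

# A tiny stand-in corpus for the "billions of pages" of training text.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (bigram statistics).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Return the most frequent continuation: a crude 'most probable next word'."""
    followers = counts[prev]
    return max(followers, key=followers.get)

# Generate a few words starting from "the", always taking the likeliest continuation.
word, output = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Every emitted word is something the model has already seen in exactly that position, which is the "stochastic parrot" point: the output looks fluent only because the statistics of the corpus are doing the work.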
No
It must be possible, as I somewhat infrequently have to verbally harass my computer into working. It must be a conscious, thinking, and feeling being, or else this behavior cannot be explained.
@[email protected] I loved that method. Seriously, I need to master this technique.
Right now? No.
We’ve not designed software that enables that function.
In the future? Maybe.
But first we have to design software that enables that function.
I’ve come to describe a thought, the contents of a thought, as magic. Now, I don’t believe in magic; it’s a trick. But I’ll describe the creation of a thought as a magic event: there’s no rational explanation of why some people have the thoughts they do, so the content of a thought is the result of a magic event. Given drugs, thoughts can get absolutely haywire.
I think everybody has had a holy crap what was I thinking moment. Like wild magic.
If a computer could think it would quickly become insane.