I agree with almost everything in this blog post. But I am also very curious about projects like https://github.com/mindverse/Second-Me that claim to be able to learn to imitate you, which might end up being good enough for the final touches.
I tried, but it wouldn’t run on my hardware. I got to the training process and then it errored out at some point. If anybody has experience with it or is willing to try it out, please let us know whether it was actually any good.
These will still fall prey to the reason that LLM summaries are bad: LLMs pick up the average, what is common, rather than what stands out and is genuinely important or new. Your writing will end up averaged out and the key points will be missed; only what is repeated again and again will survive.
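To make that concrete, here’s a toy analogy in Python. This is not how an LLM works internally, just the averaging failure mode in miniature: if you rank drafts by how common their words are across a pile of inputs, the one carrying the genuinely new information scores lowest and is the first thing a frequency-driven “summary” would drop.

```python
# Toy analogy for the "averaging" problem (NOT how LLMs actually work
# internally): pool several drafts and score each by the average
# frequency of its words across the whole pool.
from collections import Counter

drafts = [
    "our release is stable and fast and well tested",
    "the release is stable fast and documented",
    "stable and fast as always but the new cache invalidation bug eats data",
]

# Count how often each word appears across all drafts.
word_freq = Counter(w for d in drafts for w in d.split())

for d in drafts:
    # Average word frequency: high = says what everyone says.
    score = sum(word_freq[w] for w in d.split()) / len(d.split())
    print(f"{score:4.1f}  {d}")

# The draft carrying the critical new information (the data-eating bug)
# gets the lowest score, so an "average-seeking" process buries it.
```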
In my experience you can use an LLM to point out typos or grammar errors, but not to actually edit or rephrase your work. And at that point it’s effectively just a slow and expensive, but somewhat better, spelling/grammar checker.
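For what it’s worth, that narrow workflow is easy to script. A minimal sketch, assuming the OpenAI Python client; the model name and prompt here are just illustrative, and any chat-style model would do:

```python
# Minimal sketch: using an LLM strictly as a typo/grammar checker,
# not as a rewriter. Assumes the OpenAI Python client is installed
# and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

with open("draft.txt") as f:
    draft = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat model works
    messages=[
        {
            "role": "system",
            "content": (
                "List typos and grammar errors in the user's text, "
                "one per line, each with a suggested fix. Do NOT "
                "rewrite or rephrase anything."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```

The key design choice is in the prompt: it is constrained to flagging errors rather than rewriting, so your own phrasing stays untouched.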
> In my experience you can use an LLM to point out typos or grammar errors, but not to actually edit or rephrase your work.

> These will still fall prey to the reason that LLM summaries are bad.
So you didn’t try out this specific LLM-based tool, but you extrapolate your experience from generic LLMs to judge it? To me that sounds like a hasty generalization.
I just genuinely want to know whether this specific tool might be more useful for specific applications than generic LLMs, yet here on the Lemmyverse a discussion like that is impossible because AI BAD. It’s a sad and frustrating state of affairs.
“Average” is the key word here, for sure. Our goal as humans is to be better than the AI. If you’re not such a good writer, average is a step up. But maybe we should all try to level up instead.
But the fact that it produces an average of its inputs does not mean that it produces writing of average quality. That would only hold if writing were random, like the proverbial monkey on a typewriter.
Instead, average-quality writing is already very deliberate. Any human who writes is at least superficially aware of what they want to convey and to what audience. An LLM can try to emulate that, and it does so particularly successfully when it has text from an actual human writer on the topic in its training data. But it is incapable of the critical thinking needed to decide in what order, and at what level of detail, to explain something novel.
Crucially, it does not understand things. It only produces patterns. So it doubly cannot understand what is necessary for a reader to understand things: what information you need to be given before it clicks, and what information is just noise that distracts from the shortest path to understanding.