• Jade@programming.devOP · 3 days ago

    These will still fall prey to the reason that LLM summaries are bad: LLMs pick up the average, what is common, rather than what stands out and is genuinely important or new. Your writing will end up averaged out, the key points will be missed, and only what is repeated again and again will survive.

    In my experience you can use an LLM to point out typos or grammar errors, but not to actually edit or rephrase your work. And at that point it’s effectively just a slow and expensive, but better, spelling/grammar checker.
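
    For what it’s worth, that constrained use is easy to script. A minimal sketch, assuming a hypothetical local OpenAI-compatible endpoint (the URL, model name, and flag_errors helper are placeholders, not any particular tool’s API):

    ```python
    import requests  # third-party: pip install requests

    # Hypothetical local OpenAI-compatible endpoint; URL and model are placeholders.
    ENDPOINT = "http://localhost:8080/v1/chat/completions"

    def flag_errors(text: str) -> str:
        """Ask the model to list problems without rewriting the text."""
        prompt = (
            "List any typos or grammar errors in the text below, one per line, "
            "quoting the offending phrase. Do NOT rewrite or rephrase anything.\n\n"
            + text
        )
        resp = requests.post(ENDPOINT, json={
            "model": "local-model",  # placeholder name
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0,  # we want checking, not creativity
        })
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    print(flag_errors("Their going to the libary tomorow."))
    ```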

    • Beej Jorgensen@lemmy.sdf.org · 3 days ago

      “Average” is the key word here, for sure. Our goal as humans is to be better than the AI. If you’re not such a good writer, average is a step up. But maybe we should all try to level up instead.

      • Ephera@lemmy.ml · 3 days ago

        But the fact that it produces an average of its inputs does not mean that it produces writing of average quality. That would only hold if writing were random, like the proverbial monkey on a typewriter.

        Instead, average-quality writing is already very deliberate. Any human who writes is at least superficially aware of what they want to convey and to what group of people. An LLM can try to emulate that, particularly successfully when it has a text from an actual human writer on the topic in its training data. But it is incapable of the critical thinking needed to decide in what order and at what level of detail to explain something novel.
        Crucially, it does not understand things. It only produces patterns. So it doubly cannot understand what is necessary to actually understand things: what information you need to be given before it clicks, and what information is just noise that distracts from the shortest path to understanding.

    • HelloRoot@lemy.lol · 3 days ago (edited)

      > In my experience you can use an LLM to point out typos or grammar errors, but not to actually edit or rephrase your work.

      > These will still fall prey to the reason that LLM summaries are bad.

      So you didn’t try out this specific LLM-based tool, but you extrapolate your experience with generic LLMs to judge it? To me that sounds like a hasty generalization.

      I just genuinely want to know whether this specific tool might be more useful for specific applications than generic LLMs, yet here on the Lemmyverse a discussion like that is impossible because AI BAD. It’s a sad and frustrating state of affairs.

      • TehPers@beehaw.org · 2 days ago

        The only person who can answer whether a tool will be useful to you is you. I understand that you tried it and couldn’t use it. Was it useful to you then? Seems like no.

        Broad generalizations like “X is good at Y” can rarely be measured accurately with a useful set of metrics, are rarely studied with sufficiently large sample sizes, and often dismiss the edge cases where someone might find a tool useful (or not) despite the opposite being generally true in the study.

        And no, I haven’t tried it. It wouldn’t be good at what I need it to do: think for me.

      • Jade@programming.devOP · 3 days ago

        Fine-tuning, self-hosting, and whatever decentralised network they’ve got going on there aren’t going to change the core of the technology. Oh, and it’s a tiny local model (about 1/100th the size of cloud models), so it’s going to perform poorly compared to SOTA models anyway.

      • Feyd@programming.dev · 2 days ago

        What you see is the natural conclusion when one understands what LLMs can do at a core level, without attributing any magic to them.

        • HelloRoot@lemy.lol · 2 days ago (edited)

          The tool is not just one LLM, though. It uses multiple LLMs plus other non-LLM components (something like the sketch below).

          Your argument is akin to saying: you can’t sit and ride on a wheel, so a wheel can never be used for personal transport; and thus, once you understand what a wheel can do, the natural conclusion is that you can’t sit and ride in a car either, so a car is also useless for personal transport.
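
          To make that concrete: a hypothetical sketch of such a pipeline, where a cheap deterministic (non-LLM) stage runs first and an LLM stage handles only what the rules can’t catch. Everything here is illustrative, not this tool’s actual design:

          ```python
          import re

          def rule_pass(text: str) -> list[str]:
              """Non-LLM stage: cheap deterministic checks, e.g. doubled words."""
              return [f"doubled word: {m.group(0)!r}"
                      for m in re.finditer(r"\b(\w+)\s+\1\b", text, re.IGNORECASE)]

          def llm_pass(text: str) -> list[str]:
              # Placeholder for a model call (e.g. flag_errors() from the sketch above).
              return []

          def review(text: str) -> list[str]:
              # Compose the stages: deterministic rules first, fallible model second.
              return rule_pass(text) + llm_pass(text)

          print(review("This is is a test."))  # -> ["doubled word: 'is is'"]
          ```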