Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their main LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
  • Akuchimoya@startrek.website · 4 hours ago

    I had to tell a bunch of librarians that LLMs are literally language models made to mimic language patterns, not to be factually correct. They understood it when I put it that way, but librarians are supposed to be “information professionals.” If they, as a slightly better-trained subset of the general public, don’t know that, the general public has no hope of knowing it.

    • WagyuSneakers@lemm.ee · 11 hours ago

      It’s so weird watching the masses ignore industry experts and jump on weird media hype trains. This must be how doctors felt during Covid.

      • Llewellyn@lemm.ee · 4 hours ago

        > It’s so weird watching the masses ignore industry experts and jump on weird media hype trains.

        Is it though?