I often see people with an outdated understanding of modern LLMs.

This is probably the best interpretability research to date, by the leading interpretability research team.

It’s worth a read if you want a peek behind the curtain on modern models.

  • Womble@lemmy.world · 7 months ago

    You'd be surprised at the level of unthinking hatred around them, but even setting that aside, I've often seen it said that LLMs have no internal model of what they are talking about because they are just next-word generators. This quite clearly contradicts that interpretation.

    • Spedwell@lemmy.world · 7 months ago (edited)

      "concepts embedded in them"

      "internal model"

      You used both phrases in this thread, but those are two very different things. It’s a stretch to say this research supports the latter.

      Yes, LLMs are still next-token generators. That is a descriptive statement about how they operate. They just have embedded knowledge that lets them sometimes generate meaningful text.
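
      For what it's worth, "next-token generator" describes a loop like the sketch below. This is a minimal illustration assuming the Hugging Face transformers library and the gpt2 checkpoint, chosen only as an example; it shows the sampling loop itself and says nothing about what the model represents internally.

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        # Load a small causal language model (gpt2 is only an example choice).
        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")
        model.eval()

        prompt = "The capital of France is"
        input_ids = tokenizer(prompt, return_tensors="pt").input_ids

        with torch.no_grad():
            for _ in range(5):
                # The model scores every vocabulary token for the next position.
                logits = model(input_ids).logits    # shape: (1, seq_len, vocab_size)
                # Greedy decoding: append the single most likely next token.
                next_id = logits[0, -1].argmax()
                input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

        print(tokenizer.decode(input_ids[0]))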