• Affidavit@lemmy.world · 18 hours ago

    A poorly designed tool will do that, yes. An effective tool would do the same thing a person could do, except much quicker, and with greater success.

    An LLM could be trained on the way a specific person communicates over time, and could be designed to do a forensic breakdown of misspelt words, e.g. checking whether a stray letter sits next to the intended key on the keyboard, or flagging words that are spelled differently but sound alike.
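
    Roughly what I mean, as a toy sketch in Python (the adjacency map and the phonetic key below are made up for illustration, not any real spellchecker's internals):

```python
# Two checks on a misspelt word: was it a fat-finger typo (a key next to
# the intended one), or does it merely sound like the intended word?

# Partial QWERTY adjacency map, just enough for the examples below.
ADJACENT = {
    "a": "qwsz", "s": "awedxz", "e": "wsdr", "i": "ujko",
    "o": "iklp", "t": "rfgy", "n": "bhjm", "m": "njk",
}

def likely_fat_finger(typed: str, intended: str) -> bool:
    """True if the words differ by one letter that sits next to the
    intended key, e.g. 'cst' typed instead of 'cat'."""
    if len(typed) != len(intended):
        return False
    diffs = [(t, c) for t, c in zip(typed, intended) if t != c]
    return len(diffs) == 1 and diffs[0][0] in ADJACENT.get(diffs[0][1], "")

def crude_phonetic_key(word: str) -> str:
    """Very rough phonetic key: keep the first letter, drop later vowels,
    collapse repeats, so 'their' and 'thier' end up with the same key."""
    word = word.lower()
    key = word[0]
    for ch in word[1:]:
        if ch not in "aeiou" and ch != key[-1]:
            key += ch
    return key

print(likely_fat_finger("cst", "cat"))                             # True: s is next to a
print(crude_phonetic_key("their") == crude_phonetic_key("thier"))  # True
```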

    • Disregard3145@lemmy.world · 8 hours ago

      > the same thing a person could do

      Asking for clarification seems like a reasonable thing to do in a conversation.

      A tool is not about to do that because it would feel weird and creepy for it to just take over the conversation.

    • Die4Ever@retrolemmy.com · edited · 5 hours ago

      > An LLM could be trained on the way a specific person communicates over time

      Are there any companies doing anything similar to this? From what I’ve seen, companies avoid this stuff like the plague; their LLMs are always frozen, with no custom training. Training takes a lot of compute, but it also carries a huge risk of the LLM going off the rails and saying bad things that could get the company into trouble or generate bad publicity. There’s also the disk space per customer, and the load time for each individual model.

      The only hope for your use case is that the LLM has a large enough context window to look at previous examples from your chat and use those for each request, but that isn’t the same thing as training.
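
      In code, that just means rebuilding the prompt on every request, something like this sketch (the function names and sample messages are made up, not any vendor's API):

```python
# Sketch of the context-window approach: no per-user training, just show
# the frozen model recent examples of how this user writes on every request.

def build_prompt(recent_messages: list[str], garbled_text: str) -> str:
    # Keep only the last few messages so the prompt fits the context window.
    examples = "\n".join(f"- {m}" for m in recent_messages[-20:])
    return (
        "Recent messages from this user, showing how they usually type:\n"
        f"{examples}\n\n"
        "Based on that style, what did they most likely mean by:\n"
        f'"{garbled_text}"'
    )

recent = ["omw, traffic is bad", "cant talk, in a meetign", "see you at teh gym"]
print(build_prompt(recent, "meet u at teh gym at 6 or sevn?"))
# The prompt, not the model weights, carries the personalisation, so the
# provider can keep serving one frozen model to every customer.
```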