• Sibbo@sopuli.xyz
    4 months ago

    We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as ‘model collapse’ and show that it can occur in LLMs as well as in variational autoencoders (VAEs) and Gaussian mixture models (GMMs).

    So, maybe this is the end of LLM companies simply scraping the web, because they will be unable to distinguish between LLM and non-LLM content. If someone had made a snapshot of the complete web in 2020 and waited until 2030 to sell it, they might just become the richest person on earth.
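    The GMM part of the quoted finding is easy to demonstrate in miniature. The following is a toy sketch (a single Gaussian rather than a full mixture, and not the paper's actual experiment): each generation fits a model to samples drawn from the previous generation's model, and over many generations the fitted spread shrinks, i.e. the tails of the original distribution disappear.

```python
import random
import statistics

def fit_and_resample(samples, n):
    # "Fit" a single Gaussian by maximum likelihood (mean and stdev),
    # then draw a fresh generation of n purely synthetic samples from it.
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
n = 50
data = [random.gauss(0.0, 1.0) for _ in range(n)]  # "real" data, sigma = 1

# Train each generation only on the previous generation's output.
for generation in range(1000):
    data = fit_and_resample(data, n)

# The fitted spread collapses well below the original sigma of 1,
# because the finite-sample variance estimate drifts toward zero.
print(statistics.pstdev(data))
```

    The shrinkage happens even though the variance estimator is unbiased per step: with finite samples the log-variance performs a random walk with negative drift, which is the "tails disappear" effect in its simplest form.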

    • leisesprecher@feddit.org
      4 months ago

      We find that preservation of the original data allows for better model fine-tuning and leads to only minor degradation of performance

      That means, as long as generated content isn’t like 90% of the Internet, they’ll be fine. And even then, there are relatively easy ways to sift the data for generated content; the filtering doesn’t even have to be perfect.

      What really bothers me here is that we might create a world where the typical AI style of writing takes over, because the models keep learning from their own output and the companies simply don’t care. That’s not really a collapse as such, but a narrowing.

      • Match!!@pawb.social
        4 months ago

        That means, as long as generated content isn’t like 90% of the Internet, they’ll be fine

        this sounds so much like the 2° Celsius target for climate change