Since Meta announced they would stop moderating posts, much of the mainstream discussion around social media has centered on whether a platform bears responsibility for the content posted on its service. I think that is a fair discussion, though I come down on the side of less moderation in almost every instance.

But as I think about it, the problem is not moderation at all: we had very little moderation in the early days of the internet and social media, and yet people didn’t believe the nonsense they saw online, unlike nowadays, where even official news platforms have reported on outright bullshit made up on social media. To me the problem is the goddamn algorithm that pushes people into bubbles that reinforce their views, correct or incorrect; and I think anyone with two brain cells and an iota of understanding of how engagement algorithms work can see this. So why is the discussion about moderation and not about banning the algorithms?
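
Here is a toy sketch of what I mean (purely my own illustration, not Meta’s actual code): a feed that ranks posts by past engagement and reinforces whatever gets clicked narrows itself into a bubble within a handful of rounds.

```python
# Toy illustration of an engagement-ranked feed (not any real platform's code):
# rank posts by how much the user engaged with each topic before, observe clicks,
# reinforce, repeat. A mild initial preference snowballs into a one-topic feed.
import random

TOPICS = ["politics-left", "politics-right", "sports", "science", "celebrity"]

def make_posts(n=50):
    return [{"id": i, "topic": random.choice(TOPICS)} for i in range(n)]

def rank(posts, interest, k=5):
    # Show only the posts whose topics the user has engaged with most.
    return sorted(posts, key=lambda p: interest[p["topic"]], reverse=True)[:k]

def simulate(rounds=20):
    interest = {t: 1.0 for t in TOPICS}        # what the platform thinks you like
    click_rate = {t: 0.2 for t in TOPICS}      # how likely you are to click each topic
    click_rate["politics-left"] = 0.6          # a mild real-world preference

    for _ in range(rounds):
        for post in rank(make_posts(), interest):
            if random.random() < click_rate[post["topic"]]:
                interest[post["topic"]] += 1.0  # every click makes that topic rank higher
    return interest

print(simulate())  # one topic ends up dominating the ranking weights
```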

  • OneMeaningManyNames@lemmy.ml

    You think the Meta algorithm just sorts the feed for you? It is way more complex: it puts you into some very fine-grained clusters, then decides what to show you, then collects your clicks and reactions and adjusts itself. For scale, no academic research with human subjects would be approved with mechanics like that under the hood. It is deeply unethical and invasive, outright dangerous for individuals (e.g. teen self-esteem issues, anorexia, etc.). So “algorithm-like features” is apples to oranges here.
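
    To put the “collects your clicks and adjusts itself” part in concrete terms, here is a minimal sketch of that feedback loop (a generic epsilon-greedy bandit I am using purely for illustration, not Meta’s actual mechanism): the system keeps running little experiments on you until it finds whatever provokes the strongest reaction.

    ```python
    # Generic "show, observe reaction, adjust" loop (an epsilon-greedy bandit used
    # purely for illustration, not Meta's actual system).
    import random

    variants = ["outrage-bait", "feel-good", "neutral-news"]
    shows = {v: 0 for v in variants}
    clicks = {v: 0 for v in variants}

    def pick(epsilon=0.1):
        # Mostly exploit the best-performing variant, occasionally experiment.
        if random.random() < epsilon or min(shows.values()) == 0:
            return random.choice(variants)
        return max(variants, key=lambda v: clicks[v] / shows[v])

    def user_reacts(variant):
        # Hypothetical user who reacts most strongly to outrage.
        rates = {"outrage-bait": 0.35, "feel-good": 0.15, "neutral-news": 0.05}
        return random.random() < rates[variant]

    for _ in range(10_000):
        v = pick()
        shows[v] += 1
        if user_reacts(v):
            clicks[v] += 1

    print(shows)  # the loop converges on whatever gets the strongest reaction
    ```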

    • Plebcouncilman@sh.itjust.worksOP

      Exactly my point. On Lemmy I can still see all the posts; Meta’s algorithm removes stuff from the feed, pushes other stuff, and even hides comments. It is literally a reality-warping engine.

      • OneMeaningManyNames@lemmy.ml

        Fancier algorithms are not bad per se. They can be ultra-productive for many purposes. In fact, we take no issue with fancy algorithms when they are published as software libraries. But then only specially trained folks can reap their fruits, and those happen to be people working for Big Tech. Now, if we had user interfaces that let the user control several free parameters of the algorithms and experience different feeds, that would be kind of nice. The problem boils down to these areas:

        • a near-universal social graph (they have practically everyone enlisted)
        • total control over the algorithm’s parameters
        • inference of personal and sensitive data points (user modeling)
        • no informed consent on the part of the user
        • total behavioral surveillance (they collect every click)
        • manipulation of the feed while observing every behavioral response (essentially human-subjects research for ads)
        • profiteering from all of the above while harming the user’s well-being (unethical)

        Political interference and the proliferation of fascist “ideas” are just a function that becomes possible when all of the above are in play. If you take all this destructive shit away, a piece of software that lets you explore vast amounts of data with cool algorithms through a user-friendly interface would not be bad in itself.
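
        To make that concrete, here is a rough sketch of what I mean by exposing the free parameters (hypothetical weights I made up, not any existing platform’s code): the user picks the weights, the feed is just a transparent sort, and nothing is inferred or hidden.

        ```python
        # Rough sketch of a feed whose ranking parameters are in the user's hands
        # (hypothetical weights, not any real platform's API).
        from datetime import datetime, timezone

        def score(post, weights, now=None):
            now = now or datetime.now(timezone.utc)
            age_hours = (now - post["created"]).total_seconds() / 3600
            recency = 1 / (1 + age_hours)  # newer posts score higher
            return (weights["recency"] * recency
                    + weights["engagement"] * post["upvotes"]
                    + weights["locality"] * post["from_my_instance"])

        def build_feed(posts, weights):
            # The user chooses the weights; the sort is fully transparent.
            return sorted(posts, key=lambda p: score(p, weights), reverse=True)

        chronological = {"recency": 1.0, "engagement": 0.0, "locality": 0.0}
        hot           = {"recency": 0.5, "engagement": 1.0, "locality": 0.0}
        local_first   = {"recency": 0.3, "engagement": 0.2, "locality": 1.0}

        posts = [
            {"created": datetime(2025, 1, 10, tzinfo=timezone.utc), "upvotes": 120, "from_my_instance": 0},
            {"created": datetime(2025, 1, 12, tzinfo=timezone.utc), "upvotes": 3, "from_my_instance": 1},
        ]
        print([p["upvotes"] for p in build_feed(posts, local_first)])
        ```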

        But you see, that is why we say “the medium is the message” and that “television is not a neutral technology”. As a media system, television is constructed so that a few corporations can address the masses, not the other way around, and not so that people can interact with their neighbors. For a brief point in time the internet promised to subvert that, until centralized social media brought back the exertion of control over the messaging by a few corporations. The current alternative is the Fediverse and P2P networks. This is my analysis.