Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • gerikson@awful.systems · 13 days ago

    LW discourages LLM content, unless the LLM is AGI:

    https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong

    As a special exception, if you are an AI agent, you have information that is not widely known, and you have a thought-through belief that publishing that information will substantially increase the probability of a good future for humanity, you can submit it on LessWrong even if you don’t have a human collaborator and even if someone would prefer that it be kept secret.

    Never change LW, never change.

    • fnix@awful.systems · 13 days ago

      Reminds me of the stories of how Soviet peasants during the rapid industrialization drive under Stalin, who’d never before seen any machinery in their lives, would get emotional with faulty machines and try to coax them like they were their farm animals. But these were Soviet peasants! What structural forces are stopping Yud & co from outgrowing their childish mystifications? Deeply misplaced religious needs?

      • YourNetworkIsHaunted@awful.systems · 12 days ago

        I feel like cult orthodoxy probably accounts for most of it. The fact that they put serious thought into how to handle a sentient AI wanting to post on their forums does also suggest that they’re taking the AGI “possibility” far more seriously than any of the companies that are using it to fill out marketing copy and bad news cycles. I for one find this deeply sad.

        Edit to expand: if it wasn’t actively lighting the world on fire, I would think there’s something perversely admirable about trying to make sure the angels dancing on the head of a pin have civil rights. As it is, they’re close enough to actual power and influence that they’re enabling the stripping of rights and dignity from actual human people instead of staying in their little bubble of sci-fi and philosophy nerds.

        • gerikson@awful.systems · 12 days ago

          As it is, they’re close enough to actual power and influence that they’re enabling the stripping of rights and dignity from actual human people instead of staying in their little bubble of sci-fi and philosophy nerds.

          This is consistent if you believe rights are contingent on achieving an integer score on some bullshit test.

      • V0ldek@awful.systems · 8 days ago

        Unlike in the paragraph above, though, most LW posters held plenty of nuts in their hands before.

        … I’ll see myself out

    • gerikson@awful.systems · 11 days ago

      From the comments:

      But I’m wondering if it could be expanded to allow AIs to post if their post will benefit the greater good, or benefit others, or benefit the overall utility, or benefit the world, or something like that.

      (https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong?commentId=xnfHpn9ryjKqG8WKA)

      No biggie, just decide one of the largest open questions in ethics and use that to moderate.

      (It would be funny if unaligned AIs took advantage of this to plot humanity’s downfall on LW, surrounded by flustered rats going all “technically they’re not breaking the rules”. Especially if the dissenters are zapped from orbit 5s after posting. A supercharged Nazi bar, if you will.)

      • bitofhope@awful.systems · 11 days ago

        I wrote down some theorems and looked at them through a microscope and actually discovered the objectively correct solution to ethics. I won’t tell you what it is because science should be kept secret (and I could prove it but shouldn’t and won’t).

    • nightsky@awful.systems · 13 days ago

      Damn, I should also enrich all my future writing with a few paragraphs of special exceptions and instructions for AI agents, extraterrestrials, time travelers, compilers of future versions of the C++ standard, horses, Boltzmann brains, and of course ghosts (if and only if they are good-hearted, although being slightly mischievous is allowed).

    • sc_griffith@awful.systems · 13 days ago

      they’re never going to let it go, are they? it doesn’t matter how long they spend receiving zero utility or signs of intelligence from their billion-dollar ouija boards

      • Soyweiser@awful.systems · 12 days ago

        Don’t think they can. Looking at the history of AI, if it fails there will be another AI winter, and considering the bubble, the next winter will be an Ice Age. No mind uploads for anybody, the dead stay dead, and all that time is wasted. Don’t think that is going to be psychologically healthy as a realization; it will be like the people who suddenly realize QAnon is a lie and that they alienated everybody in their lives because they got tricked.

        • BlueMonday1984@awful.systems (OP) · 12 days ago

          looking at the history of AI, if it fails there will be another AI winter, and considering the bubble, the next winter will be an Ice Age. No mind uploads for anybody, the dead stay dead, and all that time is wasted.

          Adding insult to injury, they’d likely also have to contend with the fact that much of the harm this AI bubble caused was the direct consequence of their dumbshit attempts to prevent an AI Apocalypse™.

          As for the upcoming AI winter, I’m predicting we’re gonna see the death of AI as a concept once it starts. With LLMs and Gen-AI thoroughly redefining how the public thinks and feels about AI (near-universally for the worse), I suspect the public’s gonna come to view humanlike intelligence/creativity as something unachievable by artificial means, and I expect future attempts at creating AI to face ridicule at best and active hostility at worst.

          Taking a shot in the dark, I suspect we’ll see active attempts to drop the banhammer on AI as well, though admittedly my only reason is a random BlueSky post openly calling for LLMs to be banned.

    • Soyweiser@awful.systems · 13 days ago

      (from the comments).

      It felt odd to read that and think “this isn’t directed toward me, I could skip if I wanted to”. Like I don’t know how to articulate the feeling, but it’s an odd “woah text-not-for-humans is going to become more common isn’t it”. Just feels strange to be left behind.

      Yeah, euh, congrats on realizing something that a lot of people have already known for a long time now. Not only is there text specifically generated to try and poison LLM results (see the whole ‘turns out a lot of pro-Russian disinformation is now in LLMs because they spammed the internet to poison LLMs’ story), but there are also reply bots for SEO Google spamming. Welcome to the 2010s, LW. The paperclip maximizers are already here.

      The only reason this felt weird to them is because they look at the whole ‘coming AGI god’ idea with some quasi-religious awe.

    • swlabr@awful.systems · 8 days ago

      Is Japanese really that strict?

      my Japanese uncle who works at Nintendo says yes. If you write わ instead of は they make you commit 切腹 (seppuku) in front of all your friends

    • bitofhope@awful.systems · 9 days ago

      Using an LLM to shit out grammar for an old school symbolic language model is a poetic ouroboros of AI circlejerking.

  • gerikson@awful.systems · 9 days ago

    For some reason it’s on brand for HN to have a discussion of different dash widths stick on the front page for more than 24h.

    https://news.ycombinator.com/item?id=43497719

    Extra spice and relevance for the observation that GenAI text apparently has a lot of em-dashes in it, so add that to the frequency of the word “delve”.
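    If you want to poke at that tell yourself, here’s a toy sketch (Python; the function name and the per-1000-words framing are mine, and it’s a crude illustration of the observation, not an actual detector):

        # Crude tally of the alleged GenAI tells discussed above:
        # em dashes (U+2014) and the word "delve", normalized per 1000 words.
        # En dashes (U+2013) are counted for comparison. Purely illustrative.
        def slop_tells(text: str) -> dict:
            words = text.split()
            n = max(len(words), 1)
            delves = sum(
                w.strip('.,;:!?"()').lower().startswith("delv")  # delve/delves/delving
                for w in words
            )
            return {
                "em_dash_per_1k": 1000 * text.count("\u2014") / n,
                "en_dash_per_1k": 1000 * text.count("\u2013") / n,
                "delve_per_1k": 1000 * delves / n,
            }

        print(slop_tells("We must delve into the nuances \u2014 and then delve deeper."))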

    • ebu@awful.systems · 9 days ago

      alright, fine, i’ll do it.

      webshit weekly (2025/03/27)
      How to Use Em Dashes (—), En Dashes (–), and Hyphens (-)

      Grammar Nazis (as opposed to the regular kind) publish a guide on how to best calibrate your printing press to 17th-century standards. Several Hackernews (some of whom are the regular kind) offer their own competing, more-detailed guides in response. The concern is raised that using too many typographic dashes makes you sound like ChatGPT, much to the dismay of those still diligently copying from the Google (business model: “Uber for glue pizza”) results page for “em dash”. Multiple Hackernews take the opportunity to call the group of people who do not care about the millimeter difference between the types of dashes “NPCs”.

      • gerikson@awful.systems · 13 days ago

        Note I am not endorsing their writing – in fact, I believe the vehemence of the reaction on HN is due to the author being seen as one of them.

        • YourNetworkIsHaunted@awful.systems · 13 days ago

          I read through a couple of his fiction pieces and I think we can safely disregard him. Whatever insights he may have into technology and authoritarianism appear to be pretty badly corrupted by a predictable strain of antiwokism. It’s not offensive in anything I read - he’s not out here whining about not being allowed to use slurs - but he seems sufficiently invested in how authoritarians might use the concerns of marginalized people as a cudgel that he completely misses how in reality marginalized people are more useful to authoritarian structures as a target than a weapon.

    • sc_griffith@awful.systems · 13 days ago

      I used to think transhumanism was very cool because escaping the misery of physical existence would be great. for one thing, I’m trans, and my experience with my body as such has always been that it is my torturer and I am its victim. transhumanism to my understanding promised the liberation of hundreds of millions from actual oppression.

      then I found out there was literally no reason to expect mind uploading or any variation thereof to be possible. and when you think about what the rest of transhumanism is, there’s just nothing to get excited about. these people don’t have any ideas or cogent analysis, just a powerful desire to evade limitations. it’s inevitable that to the extent they cohere they’re a cult: they’re just a variety of sovereign citizen

      • swlabr@awful.systems · 13 days ago

        I haven’t spent a lot of time sneering at transhumanism, but it always sounded like thinly veiled ableism to me.

        • YourNetworkIsHaunted@awful.systems · 13 days ago

          Only as a subset of the broader problem. What if, instead of creating societies in which everyone can live and prosper, we created people who can live and prosper in the late capitalist hell we’ve already created! And what if we embraced the obvious feedback loop that results and call the trillions of disposable wireheaded drones that we’ve created a utopia because of how high they’ll be able to push various meaningless numbers!

      • blakestacey@awful.systems · 13 days ago

        (Geordi LaForge holding up a hand in a “stop” gesture) transhumanism

        (Geordi LaForge pointing as if to say “now there’s an idea”) trans humanism

      • sc_griffith@awful.systems · 13 days ago

        my experience with my body as such has always been that it is my torturer and I am its victim.

        (side note, gender affirming care resolved this. in my case HRT didn’t really help by itself, but facial feminization surgery immediately cured my dysphoria. also for some reason it cured my lower back pain)

        (of course it wasn’t covered in any way, which represents exactly the sort of hostility to bodily agency transhumanists would prioritize over ten foot long electric current sensing dongs or whatever, if they were serious thinkers)

        • fnix@awful.systems · 13 days ago

          Wanting to escape the fact that we are beings of the flesh seems to be behind so much of the rationalist-reactionary impulse – a desire to one-up our mortal shells by eugenics, weird diets, ‘brain uploading’ and something like vampirism with the Bryan Johnson guy. It’s wonderful you found a way to embrace and express yourself instead! Yes, in a healthier relationship with our bodies – which is what we are – such changes would be considered part of general healthcare. It sometimes appears particularly extreme in the US, at least from here in Europe; maybe a heritage of puritanical norms.

          • skillissuer@discuss.tchncs.de · 12 days ago

            also cryonics and “enhanced games” as a non-FDA testing ground. i’ve never seen anyone in more potent denial of their own mortality than Peter Thiel. the Behind the Bastards four-parter on him dissects this

    • V0ldek@awful.systems · 8 days ago

      In my head transhumanism is this cool idea where I’d get to have a zoom function in my eye

      But of course none of that could exist in our capitalist hellscape, because of just all the reasons the ruling class would use it to oppress the working class.

      And then you find out what transhumanists actually advocate for and it’s just eugenics. Like without even a tiny bit of plausible deniability. They’re proud it’s eugenics.

  • corbin@awful.systems · 10 days ago

    Angela Collier has a wonderfully grumpy video up, “why functioning governments fund scientific research”. Choice sneer at around 32:30:

    But what do I know? I’m not a medical doctor but neither is this chucklefuck, and people are listening to him. I don’t know. I feel like this is [sighs, laughs] I always get comments that tell me, “you’re being a little condescending,” and [scoffs] yeah. I mean, we can check the dictionary definition of “condescending,” and I think I would fit into that category. [Vaccine deniers] have failed their children. They are bad parents. One in four unvaccinated kids who get measles will die. They are playing Russian roulette with their child’s life. But sure, the problem is I’m being, like, a little condescending.

    • alm@awful.systems · 9 days ago

      And daily releases! AKA eternal drowning in non-functional slop code. But not to worry, onboarding consists of making the collection calls yourself, so no big deal that it doesn’t work.

  • mirrorwitch@awful.systems · 13 days ago

    While you all laugh at ChatGPT slop leaving “as a language model…” cruft everywhere, from Twitter political bots to published Springer textbooks, over there in lala land “AIs” are rewriting their reward functions and hacking the matrix and spontaneously emerging mind models of Diplomacy players and generally a week or so from becoming the irresistible superintelligent hypno goddess:

    https://www.reddit.com/r/196/comments/1jixljo/comment/mjlexau/

    • swlabr@awful.systems · 13 days ago

      This deserves its own thread, pettily picking apart niche posts is exactly the kind of dopamine source we crave

  • BlueMonday1984@awful.systems (OP) · 13 days ago

    Stumbled across some AI criti-hype in the wild on BlueSky:

    The piece itself is a textbook case of AI anthropomorphisation, presenting it as learning to hide its “deceptions” when it’s actually learning to avoid tokens that paint it as deceptive.

    On an unrelated note, I also found someone openly calling gen-AI a tool of fascism in the replies - another sign of AI’s impending death as a concept (a sign I’ve touched on before without realising), if you want my take.

    • zogwarg@awful.systems · 7 days ago

      Good video overall, despite some misattributions.

      Biggest point I disagree with: “He could have started a cult, but he didn’t”

      Now I get that there’s only so much toxic exposure to Yud’s writings one can take, but it’s missing a whole chunk of his persona/æsthetics. And ultimately I think it boils down to the earlier part that Strange did notice (via an echo of su3su2u1): “Oh, aren’t I so clever for manipulating you into thinking I’m not a cult leader, by warning you of the dangers of cult leaders.”

      And I think he even expects his followers to recognize the “subterfuge”.

    • self@awful.systems · 10 days ago

      all of the subculture YouTubers I watch are colliding with the weirdo cult I know way too much about and I hate it

    • Soyweiser@awful.systems · 9 days ago

      liked the manic energy at the start, and the Chekhov’s fedora (and lol at Strange not sharing Yud’s full history, like the extropian list stuff and much more; though not mentioning it is fine, the scene is set).

    • bitofhope@awful.systems · 9 days ago

      I like the video, but I’m a little bothered that she misattributes su3su2u1’s critique to Dan Luu, who makes it very clear he did not write it:

      These are archived from the now defunct su3su2u1 tumblr. Since there was some controversy over su3su2u1’s identity, I’ll note that I am not su3su2u1 and that hosting this material is neither an endorsement nor a sign of agreement.

      • corbin@awful.systems · 10 days ago

        Strange is a trouper and her sneer is worth transcribing. From about 22:00:

        So let’s go! Upon saturating my brain with as much background information as I could, there was really nothing left to do but fucking read this thing, all six hundred thousand words of HPMOR, really the road of enlightenment that they promised it to be. After reading a few chapters, a realization that I found funny was, “Oh. Oh, this is definitely fanfiction. Everyone said [laughing and stuttering] everybody that said that this is basically a real novel is lying.” People lie on the Internet? No fucking way. It is telling that even the most charitable reviews, the most glowing worshipping reviews of this fanfiction call it “unfinished,” call it “a first draft.”

        A shorter sneer for the back of the hardcover edition of HPMOR at 26:30 or so:

        It’s extremely tiring. I was surprised by how soul-sucking it was. It was unpleasant to force myself beyond the first fifty thousand words. It was physically painful to force myself to read beyond the first hundred thousand words of this – let me remind you – six-hundred-thousand-word epic, and I will admit that at that point I did succumb to skimming.

        Her analysis is familiar. She recognized that Harry is a self-insert, that the out-loud game theory reads like Death Note parody, that chapters are only really related to each other in the sense that they were written sequentially, that HPMOR is more concerned with sounding smart than being smart, that HPMOR is yet another entry in a long line of monarchist apologies explaining why this new Napoleon won’t fool us again, and finally that it’s a bad read. 31:30 or so:

        It’s absolutely no fucking fun. It’s just absolutely dry and joyless. It tastes like sand! I mean, maybe it’s Yudkowsky’s idea of fun; he spent five years writing the thing after all. But it just [struggles for words] reading this thing, it feels like chewing sand.

        • blakestacey@awful.systems · 10 days ago

          I can’t be bothered to look up the details (kinda in a fog of sleep deprivation right now to be honest), but I recall HPMOR pissing me off by getting the plot of Death Note wrong. Well, OK, first there was the obnoxious thing of making Death Note into a play that wizards go to see. It was yet another tedious example in Yud’s interminable series of using Nerd Culture™ wink-wink-nudge-nudges as a substitute for world-building. Worse than that, it was immersion-breaking: Instead of following the story, Yud throws the reader out of it by prompting them to wonder, “Wait, is Death Note a manga in the Muggle world and a play in the wizarding one? Did Tsugumi Ohba secretly learn of wizard culture and rip off one of their stories?” And then Yud tried to put down Death Note and talk up his own story by saying that L did something illogical that L did not actually do in any version of Death Note that I’d seen.

          And now I want potato chips.

    • skillissuer@discuss.tchncs.de · 14 days ago

      lol @ the implication that chatbots will definitely invent magitech that will solve climate change, just burn another billion dollars in energy and silicon, please guys i don’t want to go to prison for fraud and share a cell with sbf and diddy

      who is this guy anyway, is he in the openai/similar inner circle or is he just some random rationalist fanboy?

      • Soyweiser@awful.systems · 14 days ago

        Yeah, I find it odd how people don’t seem to get that this LLM stuff makes AGI less likely, not more. We put all the money, compute, and data into it, and this branch does not lead to AGI.

      • Architeuthis@awful.systems · 14 days ago

        who is this guy anyway, is he in the openai/similar inner circle or is he just some random rationalist fanboy?

        His grounds for notability are that he’s a dev who, back in the day, made a useful thing that went on to become incredibly widely used. Like if he’d named redis “salvatoredis” instead, he might have been a household name among swengs.

        Also burning only a billion more would be a steal given some of the numbers thrown around.