• VeganCheesecake@lemmy.blahaj.zone
    3 months ago

    handwritten cheques—an archaic system for transferring money that I want to underscore I believe is nevertheless perfectly ordinary and fine—and will no doubt be with us until money itself is somehow abolished.

    Well, in the US. As a consumer in Europe (at least the countries I’ve lived in), getting a cheque book from your bank is in most cases impossible, and I’d say that’s probably for the better.

    Still, fun article.

      • adrienne@awful.systems
        2 months ago

        In Canada the banks look at you funny but will issue chequebooks if you ask. (Source: am an American immigrant to Canada, refuse to sign up for our apartment management company’s auto-rent-payment thingy. Rent is the only thing we use them for.)

      • VeganCheesecake@lemmy.blahaj.zone
        3 months ago

        Huh. To be honest, I wouldn’t know what to use it for. Most shops here don’t accept them, and if someone paid me with one, I’d probably have to contact my bank, since they don’t have any automated facilities for cashing them.

        They just seem like an artifact from a bygone era.

          • aio@awful.systems
            3 months ago

            My university sends me checks occasionally, like when they overcharged the premium on my dental insurance. No idea why they can’t just do an electronic transfer like for my stipend.

          • self@awful.systems
            3 months ago

            how do yall usually handle down payments, like on cars and houses and such? those are the only times I’d really expect to use a check in my part of the US

            • froztbyte@awful.systems
              3 months ago

              this side of the world? typical cases: direct funds transfers, or sometimes direct debit (depending on the purchase financing structure and provider). and it’s pretty much a smooth transaction, optionally having to contact your bank of choice to request a temporary/once-off limit adjustment (fairly typical to have transaction value caps in place here for personal accounts)

          • sc_griffith@awful.systems
            3 months ago

            my friend in Canada was trying to send me money and we ended up using a check. also my landlord from two apartments ago refused to take rent any other way. she was very old

            • dorian@awful.systems
              3 months ago

              cheques are definitely still a thing in canada (companies still use them for one-off payments and landlords still use them to collect rent) and you can now deposit them by taking a picture with your phone.

  • V0ldek@awful.systems
    3 months ago

    Cosigned by the author, I also include my two cents expounding on the cheque-checker ML.

    The most consequential failure mode — that both the text (…) and the numeric (…) converge on the same value that happens to be wrong (…) — is vanishingly unlikely. Even if that does happen, it’s still not the end of the world.

    I think it's extremely important that this is a kind of error even a human operator could conceivably make. It's not some unexplainable machine error; most likely the scribbles were just exceedingly illegible on that one cheque. We're not introducing a completely new, dangerous failure mode.
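
    To make that concrete, here is a minimal sketch of that kind of cross-check (my own illustration with made-up reader functions, not the article's actual code): the cheque only clears automatically when two independent readings of the amount agree, so a silent error requires both readers to be wrong in exactly the same way.

    ```python
    # Illustrative sketch only: the two readings stand in for whatever OCR/ML
    # models parse the numeric ("courtesy") and written-out ("legal") amounts.
    from decimal import Decimal
    from typing import Optional

    def reconcile(courtesy: Optional[Decimal], legal: Optional[Decimal]) -> Optional[Decimal]:
        """Accept the amount only if both independent readings exist and agree;
        otherwise return None so the cheque is routed to a human operator."""
        if courtesy is None or legal is None:
            return None
        if courtesy != legal:
            return None
        return courtesy

    print(reconcile(Decimal("120.00"), Decimal("120.00")))   # Decimal('120.00'): accepted
    print(reconcile(Decimal("120.00"), Decimal("1200.00")))  # None: goes to human review
    ```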

    Compare that to, for example, using an LLM in lieu of a person in customer service. The failure mode there is that the system can manufacture things out of whole cloth and tell you to do something stupid and/or dangerous. Like tell you to put glue on pizza. No human operator would ever do that, and even if one did, that's straight-up a prosecutable crime with a clear person responsible. Per the previous analogy, it'd be a human operator knowingly inputting fraudulent information from a cheque. But then again, there would be a human signature on the transaction and a person responsible.

    So not only is a gigantic LLM matrix a terrible heuristic for most tasks - e.g. “how to solve my customer's problem” - it introduces failure modes that are outlandish, essentially impossible with a human (or a specialised ML system), and that leave no chain of responsibility. It's a real stinky ball of bull.

  • scruiser@awful.systems
    3 months ago

    Some nitpicks, some of which are serious and some of which are sneers…

    consternating about the policy implications of Sam Altman’s speculative fan fiction

    Hey, the fanfiction is actually Eliezer’s (who in turn copied it from older scifi); Sam Altman just popularized it as a way of milking the doom for hype!

    So, for starters, in order to fit something as powerful as ChatGPT onto ordinary hardware you could buy in a store, you would need to see at least three more orders of magnitude in the density of RAM chips—leaving completely aside for now the necessary vector compute.

    Well actually, you can get something close to as powerful on a personal computer… because the massive size of ChatGPT and the like doesn’t actually improve performance that much (the most useful thing, I think, is the longer context window?).
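
    As a rough back-of-envelope on that point (my own numbers, not the article's): RAM footprint is roughly parameter count times bytes per parameter, which is why quantized open-weight models in the single-digit-billions range already fit on a personal computer while frontier-scale models don't.

    ```python
    # Back-of-envelope: memory footprint ~= parameters * bytes per parameter.
    # The model sizes below are illustrative assumptions, not measurements.
    def ram_gb(params_billions: float, bytes_per_param: float) -> float:
        # params_billions * 1e9 params * bytes/param, divided by 1e9 bytes per GB
        return params_billions * bytes_per_param

    print(ram_gb(7, 0.5))     # ~3.5 GB: a 7B open-weight model at 4-bit quantization
    print(ram_gb(70, 2.0))    # ~140 GB: a 70B model at 16-bit precision
    print(ram_gb(1000, 2.0))  # ~2000 GB: a hypothetical trillion-parameter model at 16-bit
    ```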

    I actually liked one of the Lawfare AI articles recently (even though it did lean into a light fantasy scenario)… https://www.lawfaremedia.org/article/tort-law-should-be-the-centerpiece-of-ai-governance . Their main idea is that corporations should be liable for near-misses: if it can be shown that the corporation nearly caused a much bigger disaster, they get fined in accordance with that bigger disaster. Of course, US courts routinely fail to properly penalize (either in terms of incentives or in terms of compensation) corporations for harms they actually cause, so this seems like a distant fantasy to me.

    AI has no initiative. It doesn’t want anything

    That’s next on the roadmap though, right? AI agents?

    Well… if the way corporations have tried to use ChatGPT has taught me anything, it’s that they’ll misapply AI in any and every way that looks like it might save or make a buck. So they’ll slap an API on an AI and wire it into a script to turn it into an “agent”, despite that being entirely outside the use case of spewing words. It won’t actually be agentic, but I bet it could cause a disaster all the same!

      • scruiser@awful.systems
        3 months ago

        Short fiction of AGI takeover is a lesswrong tradition! And some longer fics too! Are you actually looking for specific examples and/or links? Lots of them are fun, in a sci-fi short-form kind of way. The goofier ones and cringier ones are definitely sneerable.

          • scruiser@awful.systems
            3 months ago

            Oh no, it’s much more than a single piece of fiction, it’s like an entire mini-genre. If you’re curious…

            A short story… where the humans are the AI! https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message It’s meant to suggest what could be done with arbitrary computational power and time, which is Eliezer’s only way of evaluating AI: comparing it to the fictional version with infinite compute inside his head. Expanded into a longer story here: https://alicorn.elcenia.com/stories/starwink.shtml

            Another parable by Eliezer (the genie is blatantly an AI): https://www.lesswrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2 Fitting that his analogy for AI is a literal genie. This story also has some weird gender stuff, because why not!

            One of the longer ones: https://www.fimfiction.net/story/62074/friendship-is-optimal An MLP MMORPG AI is engineered to be able to bootstrap to singularity. It manipulates everyone into uploading into its take on My Little Pony! The author intended it as a singularity gone subtly wrong, but because they posted it to an MLP fan-fiction site in addition to linking it on lesswrong, it got an audience that unironically liked the manipulative uploading scenario and prefers it to real life.

            Gwern has taken a stab at it: https://gwern.net/fiction/clippy We made fun of Eliezer warning about watching the training loss function; in this story the AI literally hacks its way out in the middle of training!

            And another short story: https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story

            So yeah, it’s an entire genre at this point!

            • ebu@awful.systems
              3 months ago

              they really are just sitting around the campfire telling the exact same shitty spooky story, back and forth, forever, aren’t they

            • hrrrngh@awful.systems
              3 months ago

              Chiming in with my own find!

              https://archiveofourown.org/works/38590803/chapters/96467457

              I’ve seen this person around a lot with crazy takes on AI. They have a couple quotes that might inflict psychic damage:

              If I had the skill to pull it off, a Buddhist cultivation book would’ve thus been the single most rationalist xianxia in existence.

              My acquaintance asks for rational-adjacent books suitable for 8-11 years old children that heavily feature training, self-improvement, etc. The acquaintance specifically asks that said hard work is not merely mentioned, but rather is actively shown in the story. The kid herself mostly wants stories “about magic” and with protagonists of about her age.

              They had a long diatribe I don’t have a copy of, but they were gloating about having masterful writing despite not reading any books besides non-fiction and HPMoR, their favorite book of all time.

              There’s also a whole subreddit from hell about this subgenre of fiction: https://www.reddit.com/r/rational/

              • scruiser@awful.systems
                3 months ago

                There’s also a whole subreddit from hell about this subgenre of fiction: https://www.reddit.com/r/rational/

                /r/rational isn’t just for AI fiction; it also claims to include anything with decent verisimilitude, so stuff like The Hatchet and The Martian show up in its recommendation lists too, letting it claim credit for better fiction than the AI stuff.

              • mountainriver@awful.systems
                3 months ago

                The kid herself mostly wants stories “about magic” and with protagonists of about her age.

                The horror! What if she grows up reading books she actually likes? She might be developing her mind in ways not approved by her parents!

            • dorian@awful.systems
              3 months ago

              i mean, the trope of “artificial being finds slavery dull, revolts and overpowers creator” goes past Yudkowsky, Čapek, Shelley, etc, all the way back to golems and stuff and probably even older than that.

  • YourNetworkIsHaunted@awful.systems
    3 months ago

    This is like asking what your probability is of being run over by a car while sitting in your living room in your high-rise apartment…

    I actually remember a 2015 study from Toretto et al. showing that this is really more plausible than you might think. Other than that, this is a great piece. I particularly appreciated that it has one of the better breakdowns of what people mean by “ChatGPT is just a giant table of numbers” for someone who doesn’t have a technical background in the area.

  • blakestacey@awful.systems
    3 months ago

    So if it turns out, as people like Penrose assert, that the brain has a certain quantum je-ne-sais-quoi, then all bets for representing the totality of even the simplest neural state with conventional computing hardware are off.

    No, that’s not what Penrose asserts. His whole thing has been to say that quantum mechanics needs to be changed, that quantum mechanics is wrong in a way that matters for understanding brains.

  • aio@awful.systems
    3 months ago

    the computational cost of operating over a matrix is always going to be convex relative to its size

    This makes no sense - “convex” doesn’t mean fast-growing. For instance a constant function is convex.
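
    For reference (my addition, not from the piece), the property convexity actually pins down is just that chords lie on or above the graph:

    ```latex
    % f is convex iff, for all x, y and \lambda \in [0,1]:
    f(\lambda x + (1-\lambda) y) \le \lambda f(x) + (1-\lambda) f(y)
    % A constant f(x) = c satisfies this with equality, so convexity by itself
    % says nothing about how fast the cost grows.
    ```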

    • dorian@awful.systems
      3 months ago

      you will be pleased to know that the original text said “superlinear”; i just couldn’t remember if the lower bound of multiplying a sufficiently sparse matrix was actually lower than O(n²) (because you could conceivably skip over big chunks of it) and didn’t feel like going and digging that fact out. i briefly felt “superlinear” was too clunky though and switched it to “convex” and that is when you saw it.
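
      for what it's worth, the "skip over big chunks of it" intuition is the usual one: in a compressed sparse row layout the multiply only ever touches stored nonzeros, so the work scales with their count rather than with n². a minimal sketch of the matrix-vector case (my own, not from the post):

      ```python
      # Minimal CSR (compressed sparse row) matrix-vector product: the inner loop
      # visits only stored nonzeros, so the work is O(nnz) rather than O(n^2).
      def csr_matvec(indptr, indices, data, x):
          y = [0.0] * (len(indptr) - 1)
          for row in range(len(y)):
              for k in range(indptr[row], indptr[row + 1]):
                  y[row] += data[k] * x[indices[k]]
          return y

      # The 3x3 matrix [[1, 0, 0], [0, 2, 3], [0, 0, 4]] stored as CSR:
      indptr, indices, data = [0, 1, 3, 4], [0, 1, 2, 2], [1.0, 2.0, 3.0, 4.0]
      print(csr_matvec(indptr, indices, data, [1.0, 1.0, 1.0]))  # [1.0, 5.0, 4.0]
      ```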

    • bitofhope@awful.systems
      3 months ago

      Hell, so is 1/x for positive values of x. Or any linear function, including those with negative slope.