I followed these steps, but just so happened to check on my mason jar 3-4 days in and saw tiny carbonation bubbles rapidly rising throughout.
I thought that might just be part of the process, but double-checked with a Google search on day 7 (when there were no bubbles in the container at all).
Turns out I had just grown a botulism culture, and garlic in olive oil specifically is a fairly common way to grow this biotoxin.
Had I not checked on it 3-4 days in I’d have been none the wiser and would have Darwinned my entire family.
Prompt with care and never trust AI dear people…
It’s slowly refining its approach. No-one went for the pizza glue or eating rocks, so…
Reddit still delivers sometimes.
damn gemini, better luck next time
headline is inaccurate and downplays the incredible potential of ai. Google Gemini tried to kill this person AND their entire family
mods can you please ban “david gerard” or whatever his name really is. ai hate is already out of hand without people coming to push their agenda like this
hear hear
unfortunately I am firmly in the pocket of the concept of fiat money, big small data, and whatever the opposite of a metaverse is
but also,
mods can you please ban “david gerard”
if I ever release an experimental electronic album I’m calling dibs on this track name
whatever the opposite of a metaverse is
Grass. It’s grass.
Are we still on mastodon? In that case, I have severe hayfever, shithead! Content warn your posts! ;)
(im obv joking here, and before somebody tries to use this argument in earnest: I do have hayfever, and I have seen others post about this subject (i.e. whether saying ‘touch grass’ is a ‘slur’ because of people with allergies and/or disabilities), and the consensus was that anybody who tries to make this argument really needs to touch grass.)
Well I’m 100% covered because I have the worst hayfever in existence.
Like no kidding, I am allergic to every. single. thing that they had on what they call the “tree panel” and the “grass panel”. I need to be on antihistamines for 75% of the year or I cannot function.
So I’m allowed to use the slur as I’m from the community. Contact me if you want the “g-word pass” I guess.
if available, I highly recommend a steroid shot from a clinic or allergist for hay fever. the muscle at the injection site will hard lock for a good 5-10 minutes like a Windows PC rolling back an update, but 10 minutes after that your allergies will go away for the rest of the season like actual fucking magic
I’ll see people responding to fucken lemmy comments with “i ran the question through gpt and…” like what the fuck?
It’s literally the same thing as saying “I asked some RANDOM dude and this is what he said. Also I have no reason to believe he’s even the slightest bit educated.”
If you really wanna just throw some fucking spaghetti at the wall, YOU CAN DO THAT WITHOUT AI.
This is coming from someone who hates google, but if this person’s entire family had died, I would put a LOT of that blame on them before google.
I applaud your optimism that most people can do this without AI, but have you gone and met people? Most people are not that capable of producing torrents of shameless bullshit, as conscience or awareness of social and/or professional costs rears its head at some point.
And once they realise it, lives will be saved.
If they can’t do it themselves then they have no idea if the output is good. If they want to run it through the bullshit machine they shouldn’t post the output unless they know it is accurate.
YOU CAN DO THAT WITHOUT AI.
Can they, though? Sure, in theory Google could hire millions of people to write overviews that are equally idiotic, but obviously that is not something they would actually do.
I think there’s an underlying ethical theory at play here, which goes something like: it is fine to fill the internet with half-plagiarized nonsense, as long as nobody dies, or at least, as long as Google can’t be culpable.
Can they, though? Sure, in theory Google could hire millions of people to write overviews that are equally idiotic, but obviously that is not something they would actually do.
The millions of people writing overviews would definitely be more reliable, that’s for sure. For one thing, they understand the concept of facts.
If you really wanna just throw some fucking spaghetti at the wall, YOU CAN DO THAT WITHOUT AI.
i have found I get .000000000006% less hallucination rate by throwing alphabet soup at the wall instead of spaghett, my preprint is on arXiv
If you really wanna just throw some fucking spaghetti at the wall, YOU CAN DO THAT WITHOUT AI.
This is coming from someone who hates google, but if this person’s entire family had died, I would put a LOT of that blame on them before google.
That would really put the “uh oh” in your spaghettios
Someone sell this commercial.
Spaghetti-O’s! Pick up a can and feed your family, because AI might have told you to make botulism.
Huh. I was making my own garlic oil this way (without advice from an LLM, mind you) and I was today years old when I learned this carries a risk of botulism (albeit small), so in a way, an LLM has potentially saved my life by causing the chain of events that taught me something new.
never trust AI
Statements from LLMs are to be seen as hallucinations unless proven otherwise by classic research.
We don’t need a fancy word that makes it sound like AI is actually intelligent when talking about how AI is frequently wrong and unreliable. AI being wrong is like someone who misunderstood something, or took a joke literally, repeating it as fact.
When people are wrong we don’t call it hallucinating unless their senses are altered. AI doesn’t have senses.
Yeah, LLMs are accidentally right sometimes. But all they really do is pull together words and phrases that statistically fit.
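For anyone who wants the “statistically fit together” point made concrete, here’s a minimal toy sketch: a word-frequency chain, not a real LLM, and the “training text” is made up. Real models use neural networks over tokens, but the core move - pick a likely next word given what came before, with no notion of truth anywhere - is the same idea:

```python
import random
from collections import defaultdict

# Hypothetical training text, invented for illustration.
# The majority phrasing will dominate the statistics.
corpus = (
    "garlic in oil is safe " * 3 +
    "garlic in oil is risky"
).split()

# Count which word follows each word in the training text.
next_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def next_word(prev):
    # Sample the next word in proportion to how often it followed
    # `prev` in training. Truth never enters into it.
    words, weights = zip(*next_counts[prev].items())
    return random.choices(words, weights=weights)[0]

sentence = ["garlic"]
for _ in range(4):
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))  # usually "garlic in oil is safe" - fluent, not fact-checked
```

Run it a few times: the exact same sampling loop produces “safe” and “risky”, and nothing in the machinery can tell you which one is true.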
Hallucination though does fit.
In the context of a source, it’s a term that implies untrustworthy, not authoritative, and/or imagined.
Lots of examples from everyday usage come to mind.
“And then I saw the defendant punch the victim and then I was blinded by the sunlight”
Are you sure you didn’t hallucinate the entire episode? It was night after all.
Or
“Somebody please get these ants off of me”
Doctor writes: Hallucinations of ants on skin
Those are examples of actual hallucinations where something did not happen.
Quoting a joke reddit thread as factual is not hallucinating. There was such a thread, but it wasn’t factual, and an LLM is wrong to present it as if it were.
That’s the issue. LLM’s aren’t trustworthy. They hallucinate.
I presume, as the default, that anything an LLM produces is a hallucination right out of the gate.
“Hallucination” implies LLMs can meaningfully perceive. They can’t, they’re not made that way and they have no reason to be.
We’re arguing language now though, and by definition it isn’t “hallucinating”. By saying that’s what’s happening, you’re unintentionally legitimizing the “AI is making decisions” misinformation.
To get really pedantic, “flashback” would be a better label. It’s not making things up out of whole cloth, just repeating stuff way out of context.
It’s not a “fancy word” here, but a technical term. An AI making things up is actually called hallucination.
Lmao
Technical terms can still be, technically speaking, dumb as fuck.
I am saying that coining it as a term was stupid and intended to make it sound intelligent when it isn’t.
Of course the term is stupid. An LLM isn’t an AI, and no AI in its current state is intelligent either. In the end they all boil down to being answer machines. Complex ones, but still far away from anything even remotely being an AI.
oh definitely, it’s fucking terrible question-begging. I’d like to know where it traces back to, and how good-faith it was or wasn’t
It originally comes from false positives in computer vision afaik, where it makes some sense as the model is “seeing” things that aren’t in the image.
the technical term is either “confabulation” or “bullshit”; “hallucination” is a misleading label coined by the ai pushers.
It used to mean things like false positives in computer vision, where it is sort of appropriate: the AI is seeing something that’s not there.
Then the machine translation people started misusing the term when their software mistranslated by adding something that was not present in the original text. They may already have been trying to be misleading with this term, because “hallucination” implies that the error happens when parsing the input text - which distracts from a very real concern: the possibility that what was added was plagiarized from the training dataset (which carries a risk of IP contamination).
Now, what’s happening is that language models are very often a very wrong tool for the job. When you want to cite a court case as a precedent, you want a court case that actually existed - not a sample from the underlying probability distribution of possible court cases! LLM peddlers don’t want to ever admit that an LLM is the wrong tool for that job, so instead they pretend that it is the right tool that, alas, sometimes “hallucinates”.
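To make that “wrong tool” point concrete, here’s a minimal sketch (every name and case in it is invented; this has nothing to do with how any real legal database or LLM is built). Retrieval can honestly answer “not found”; a sampler over citation-shaped text will always hand you something fluent, whether or not it exists:

```python
import random

# Toy "database" of precedents that actually exist (invented for illustration).
real_cases = {"Smith v. Jones (1998)": "negligence standard"}

def lookup(case_name):
    # Retrieval: either the case exists, or you get an honest failure.
    return real_cases.get(case_name)  # None means "no such precedent"

def generate_citation():
    # Generation: sample from a distribution over citation-shaped text.
    # Every output *looks* like a precedent; none is checked against reality.
    plaintiff = random.choice(["Smith", "Doe", "Acme Corp", "Miller"])
    defendant = random.choice(["Jones", "State", "United States"])
    year = random.randint(1950, 2020)
    return f"{plaintiff} v. {defendant} ({year})"

print(lookup("Roe v. Nobody (2001)"))  # None - retrieval can admit ignorance
print(generate_citation())             # always fluent, almost never real
```

The sampler has no failure mode that says “I don’t know”, which is exactly what you need when the job is citing things that must exist.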
oh but you see, it’s “hallucination” when the LLM is wrong and it’s hype cycle fuel when it’s correct. no, LLMs don’t “hallucinate”, that implies that this state is peculiar, isolated, triggered by very specific circumstances. LLMs bullshit all the time; sometimes they are right, sometimes not, but the process that produces both types of response is the same. pushing for “hallucination” tries to obscure that. use of “hallucination” also implies that LLMs know something; they don’t, by design. it just so happens that if they “get” things right, it’s because it appeared in the training material enough times to make an impression on the model.
LLMs bullshit all the time
Bullshitting to me is giving intentionally wrong statements. LLMs do not generate intentionally wrong statements. Saying they do means you’re implying intelligence.
LLMs know nothing nor are they intelligent. They also are not right or wrong, they generate output based on statistics.
“Hallucination” as a term for “AIs” making things up has been used since the early 2000s (even if its meaning has changed since then).
bullshitting as in giving a confident answer without regard for actual reality. as previously discussed there, LLMs do exactly that: they generate confident, authoritative-sounding text without regard for facts, because these things do not know facts or anything else, for that matter.
maybe it’s high time to change terms then
bullshitting as in giving a confident answer without regard for actual reality.
So you say there could be different meanings of the same word? Like “bullshitting” or “hallucination”?
mod post: please desist, it’s just tiresome now
The wikipedia page you linked to actually states that the term is being pushed by industry (Google, Meta, OpenAI) and that its use is criticized by some researchers.
So you say a technical term should not be created by the people who actually develop the technology the term is used for?
You’re confusing “developing” with “marketing”.
LOL, okay.
Does everyone else see this? These are the exact type of out-of-town haters we really want. I also think calling LLMs all but delusional is too generous, and I mean that unironically.