The main use case for LLMs is writing text nobody wanted to read. The other use case is summarizing text nobody wanted to read. Except they don’t do that either. The Australian Securities and…
I had GPT 3.5 break down six 45-minute verbatim interviews into bulleted summaries and it did great. I even asked it to anonymize people’s names and it did that too. I did re-read the summaries to make sure no duplicate info or hallucinations existed, and they only needed a couple of corrections.
good tools are designed well enough that it’s clear how they are used, held, or what-fucking-ever.
fuck, these simpleton takes are a pain in the arse. They’re always pushed by idiots who have based their whole world view on fortune cookie aphorisms
How did you make sure no hallucinations existed without reading the source material? And if you read the source material, what did using an LLM save you?
Beats manually summarizing that info myself.
Maybe their prompt sucks?
“Are you sure you’re holding it correctly?”
christ, every damn time
That is how tools tend to work, yes.
“tools” doesn’t mean “good”
Said like a person who wouldn’t be able to correctly hold a hammer on first try
we find they tend to post here, though not for long
it makes me feel fucking ancient to find that this dipshit didn’t seem to get the remark, and it wasn’t even that long ago
Jobs is Tech Jesus, but Antennagate is only recorded in one of the apocryphal books
@RagnarokOnline @dgerard “They failed to say the magic spells correctly”
I got AcausalRobotGPT to summarise your post and it said “I’m not saying it’s always programming.dev, but”
Did you conduct or read all the interviews in full in order to verify no hallucinations?