The Microsoft-powered bot says bosses can take workers’ tips and that landlords can discriminate based on source of income
Yet another example of people fundamentally misunderstanding the proper use of LLMs and throwing them into production without any kind of sanity checks on the input and output. As someone who used to work for NYS as a software engineer, I find this entirely unsurprising.
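For the curious, here is a minimal sketch of what "sanity checks on the output" could look like. Everything in it is hypothetical: `call_model()` stands in for whatever LLM API the deployment actually uses, and the two blocklist regexes are only illustrative placeholders, not a real policy engine.

```python
import re

# Hypothetical patterns for advice that is flatly wrong/illegal and should never
# reach a user. A real deployment would use vetted policy rules, not two regexes.
PROHIBITED_CLAIMS = [
    re.compile(r"employers?\s+(?:can|may)\s+(?:take|keep)\b.*\btips", re.IGNORECASE),
    re.compile(r"landlords?\s+(?:can|may)\s+(?:refuse|discriminate)\b.*\bsource of income", re.IGNORECASE),
]

def call_model(prompt: str) -> str:
    """Placeholder for the real LLM call; returns a canned answer for the demo."""
    return "Employers can take a cut of workers' tips."

def answer_with_guardrails(question: str) -> str:
    draft = call_model(question)
    # Output check: refuse to surface anything matching a known-bad pattern.
    if any(p.search(draft) for p in PROHIBITED_CLAIMS):
        return ("I can't answer that reliably. Please check the official guidance "
                "or ask a qualified person.")
    # At minimum, label anything that does go out as unverified.
    return draft + "\n\n(Automated answer; verify against official sources.)"

if __name__ == "__main__":
    print(answer_with_guardrails("Can my boss take my tips?"))
```

Even a crude filter like this would have caught the two headline answers; the point is that nothing of the sort was apparently in place.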
Work in HR. Have a very smart boss. Asked me about AI for recruiting, screening, and other purposes. Told my boss: wait 5 years and we'll see the catastrophic lawsuits hit the early adopters; after 5 more there will be some plug-and-play usable solutions.
Anyone eating up the Big 4's and startups' own horseshit deserves what they get. They've fully demonstrated they don't QC, and LLMs are still incredibly immature for anything critical, difficult to parse, contextual, or subject to change.
LLMs are still good for the kind of flowery language you need in HR, but not for any sort of fact-based generation.
Think of them as creative, not logical.