bot@lemmy.smeargle.fans to Hacker News@lemmy.smeargle.fans · 5 months ago
ChatGPT is biased against resumes with credentials that imply a disability (www.washington.edu)
cross-posted to: [email protected]
Lvxferre@mander.xyz · 5 months ago

> studies how generative AI can replicate and amplify real-world biases

Emphasis mine. That's a damn important factor, because deep "learning" models are prone to making human biases worse. I'm not sure, but I think this is caused by two things:

- They'll spam the typical value unless explicitly asked otherwise, even if the typical value isn't actually that common.
- They might treat co-dependent variables as if they were orthogonal for the sake of weighting the output.
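As a toy illustration of the first point, here's a minimal sketch (hypothetical numbers, not taken from the article or the comment) of how always emitting the most common value amplifies a skew in the data rather than merely reproducing it:

```python
import random

random.seed(0)

# Hypothetical training data: 70% of the examples the model saw for this
# slot carry value "A", 30% carry value "B".
training = ["A"] * 70 + ["B"] * 30

def sample_like_data(n):
    """Draw values in proportion to the data: the skew is copied, not amplified."""
    return [random.choice(training) for _ in range(n)]

def pick_typical_value(n):
    """Always emit the single most common value: the skew is amplified to 100%."""
    most_common = max(set(training), key=training.count)
    return [most_common] * n

for name, picks in [("proportional sampling", sample_like_data(10_000)),
                    ("typical value only", pick_typical_value(10_000))]:
    share_a = picks.count("A") / len(picks)
    print(f"{name:22s} -> share of 'A' in output: {share_a:.0%}")
```

Sampling in proportion to the data copies the 70/30 skew, while picking the single most likely value every time turns it into 100/0, which is one way a model's output can end up more biased than the data it learned from.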