• boredtortoise@lemm.ee
    5 months ago

We’ve always lived in a world where resume evaluation is unjust. That’s just it: a resume shouldn’t imply anything that can be used against you.

  • Deceptichum@sh.itjust.works
    5 months ago

    People are biased against resumes that imply a disability. ChatGPT is just picking up on that fact and unknowingly copying it.

  • Lvxferre@mander.xyz
    5 months ago

    studies how generative AI can replicate and amplify real-world biases

    Emphasis mine. That’s a damn important factor, because deep “learning” models are prone to making human biases worse.

    I’m not sure, but I think this is caused by two things:

    1. It’ll spam the most typical value unless explicitly asked otherwise, even if that value isn’t overwhelmingly common.
    2. It might treat co-dependent variables as if they were orthogonal when weighting the output.
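    Point 1 can be sketched in a few lines (my own toy illustration, not from the article — the 70/30 split and the resume framing are made-up numbers). A model that always emits the most likely label turns a statistical human tendency into an absolute rule, which is exactly how a bias gets amplified rather than merely copied:

    ```python
    # Toy sketch of bias amplification via "always output the typical value".
    # Assume (hypothetically) that human screeners rejected 70% of resumes
    # that implied a disability. A majority-label "model" trained on those
    # decisions rejects 100% of them.
    from collections import Counter

    def train_majority(labels):
        """Return the single most common label seen in training."""
        return Counter(labels).most_common(1)[0][0]

    # Human decisions: biased, but not uniform (70% reject, 30% accept).
    human_labels = ["reject"] * 70 + ["accept"] * 30

    model_output = train_majority(human_labels)
    print(model_output)  # prints "reject" -- every single time, for everyone
    ```

    The humans were wrong 70% of the time; the model is now wrong 100% of the time, and looks perfectly consistent while doing it.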
  • SuperCub@sh.itjust.works
    5 months ago

    I’m curious what companies were using to screen applications/resumes before ChatGPT. Seems like they already had shitty software.

  • kata1yst@sh.itjust.works
    5 months ago

    Yet again, sanitization and preparation of training inputs proves to be a much harder problem than techbros think.