We’ve always lived in a world where resume evaluation is unjust. The only defense has been just that: a resume can’t imply anything that can be used against you.
People are biased against resumes that imply a disability. ChatGPT is just picking up on that pattern and unknowingly copying it.
> studies how generative AI can replicate and **amplify** real-world biases
Emphasis mine. That’s a damn important point, because deep “learning” models are prone to making human biases worse, not just reproducing them.
I’m not sure, but I think this is caused by two things:
- It’ll spam the most typical value unless explicitly told otherwise, even when that value isn’t actually all that common.
- It might treat correlated variables as if they were independent when weighting the output.
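The first point is easy to see with a toy sketch. The labels and rates below are purely hypothetical (not from the article): if a model always emits the most frequent outcome it saw in training, a biased majority becomes a unanimous rule.

```python
import random

random.seed(0)

# Hypothetical training data: 60% of human screeners rejected resumes
# that imply a disability -- a biased, but not universal, pattern.
training_labels = ["reject"] * 60 + ["advance"] * 40

def greedy_model(labels):
    # Always emit the most frequent label: 60/40 collapses into 100/0.
    return max(set(labels), key=labels.count)

def sampling_model(labels):
    # Sample proportionally: at least preserves the original 60/40 rate.
    return random.choice(labels)

greedy_outputs = [greedy_model(training_labels) for _ in range(1000)]
sampled_outputs = [sampling_model(training_labels) for _ in range(1000)]

print(greedy_outputs.count("reject") / 1000)   # exactly 1.0: bias amplified
print(sampled_outputs.count("reject") / 1000)  # roughly 0.6: bias copied
```

That’s the “spam the typical value” failure mode in miniature: nothing in the data says rejection is the only answer, but picking the mode every time makes it look that way.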
I’m curious what companies were using to screen applications/resumes before ChatGPT. Seems like they already had shitty software.
Yet again, sanitization and preparation of training inputs proves to be a much harder problem to solve than techbros think.
Let the underwhelming brain in a jar decide if your disability would make you less efficient at your work.