The best use case I can think of for “A.I” is an absolute PRIVACY NIGHTMARE (so set that aside for a moment), but I think it’s the clearest example.
Traffic and traffic lights. Imagine every set of lights had cameras to track licence plates and cross-reference home addresses and travel times for regular trips, for literally every vehicle on the road. Add variable speed limit signs on major roads, and an unbiased “A.I” whose one goal is to make everyone’s regular trips as short as possible by controlling everything.
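A minimal sketch of what one piece of that optimizer could look like, just allocating green time at a single intersection. Everything here is hypothetical (the queue counts, the cycle constants); a real system would learn from the plate/travel-time data rather than use a naive heuristic like this:

```python
# Toy greedy controller: split one signal cycle among approaches,
# weighted by how many cars are queued on each. Purely illustrative.

CYCLE_SECONDS = 90   # total signal cycle length (assumed)
MIN_GREEN = 10       # safety floor per approach (assumed)

def plan_cycle(queue_lengths: dict[str, int]) -> dict[str, float]:
    """Give each approach a minimum green, then divide the rest
    in proportion to its queue length."""
    total_queued = sum(queue_lengths.values()) or 1  # avoid divide-by-zero
    flexible = CYCLE_SECONDS - MIN_GREEN * len(queue_lengths)
    return {
        approach: MIN_GREEN + flexible * queued / total_queued
        for approach, queued in queue_lengths.items()
    }

# Example: a heavy northbound queue gets most of the cycle.
print(plan_cycle({"north": 18, "south": 4, "east": 2, "west": 0}))
```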
If you can make 1,000,000 cars complete their trips 5% more efficiently, that’s like removing 50,000 cars’ worth of emissions. Not to mention real-world time savings for people.
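The back-of-envelope math, for anyone who wants to check it:

```python
cars = 1_000_000
efficiency_gain = 0.05  # trips 5% more efficient

# A 5% fleet-wide saving burns roughly as much less fuel as
# removing this many average cars from the road entirely:
equivalent_cars_removed = cars * efficiency_gain
print(f"{equivalent_cars_removed:,.0f} cars' worth of emissions")  # 50,000
```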
If you want AI agents that benefit humanity, you need biased training data and/or a bias-inducing training process, e.g. an objective like “Improve humanity in an ethical manner” (don’t pin me down on that, it’s just a simple example).
For example, even choosing a real environment over a tailored simulated one is already a bias in the training data, even though you want to deploy the AI agent in a real setting. That’s what you want: bias can be beneficial. The same goes for ethical reasoning: an AI agent won’t know what ethics are, or which are commonly preferred, if you don’t introduce such a bias.
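One way to picture “bias-inducing training”: add a weighted penalty term to whatever the agent is otherwise optimizing. A toy sketch, where the `ethics_violation` score is entirely hypothetical (defining it is the genuinely hard part):

```python
def shaped_reward(task_reward: float, ethics_violation: float,
                  weight: float = 10.0) -> float:
    """Deliberately bias the objective: trade raw task performance
    against a penalty for whatever we've defined as unethical."""
    return task_reward - weight * ethics_violation

# An action that scores well on the task but violates the constraint
# ends up worse than a mediocre-but-clean action.
print(shaped_reward(task_reward=100.0, ethics_violation=5.0))  # 50.0
print(shaped_reward(task_reward=60.0, ethics_violation=0.0))   # 60.0
```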
Show your work. Claim 1 especially seems suspect, since many AIs are not trained on content like you’re imagining, but rather train themselves through experimentation and adversarial networks.
Even how it trains itself can be biased based on what its instructions are.
Yes, and? If you write a bad fitness function, you get an AI that doesn’t do what you want. You’re just saying that human-written software can have bugs.
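A concrete toy version of the bad-fitness-function failure, using a simple hill climber (all of this is made up for illustration): say you *meant* “produce text close to the target” but your fitness function only measures length, so the optimizer happily games exactly what you measured:

```python
import random

TARGET = "open the door"

def bad_fitness(candidate: str) -> int:
    # Mis-specified: we *meant* "match TARGET", but we only reward
    # length, so the optimizer maximizes length and nothing else.
    return len(candidate)

def mutate(s: str) -> str:
    return s + random.choice("abcdefghijklmnopqrstuvwxyz ")

best = ""
for _ in range(1000):
    child = mutate(best)
    if bad_fitness(child) > bad_fitness(best):
        best = child

print(len(best), best[:40])  # long gibberish, nothing like TARGET
```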
That’s pretty much exactly the point they’re making. Humans create the training data. Humans aren’t perfect, and therefore the AI training data cannot be perfect. The AI will always make mistakes and have biases as long as it’s being trained on human data.