If you want AI agents that benefit humanity, you need biased training data and/or a bias-inducing training process, e.g. an objective like “Improve humanity in an ethical manner” (don’t pin me down on that, it’s just a simple example).
For instance, even choosing a real environment over a tailored simulated one is already a bias in the training data, and it’s exactly the bias you want if you intend to deploy the agent in a real setting. Bias can be beneficial. The same goes for ethical reasoning: an AI agent won’t know what ethics are, or which ethical views are commonly preferred, unless you introduce such a bias.
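To make “bias-inducing training process” a bit more concrete, here is a toy sketch of one way such a bias could be injected into an objective, e.g. by adding a term that scores actions against learned human ethical preferences. All the names (`ethics_preference_score`, `shaped_reward`, `lambda_ethics`) are illustrative assumptions, not a reference to any particular system.

```python
def ethics_preference_score(action_description: str) -> float:
    """Toy stand-in for a model of commonly preferred ethical judgments,
    e.g. one trained on human preference data. Returns a score in [0, 1]."""
    preferred = {"share resources": 0.9, "deceive user": 0.1}
    return preferred.get(action_description, 0.5)  # 0.5 = no opinion

def shaped_reward(task_reward: float, action_description: str,
                  lambda_ethics: float = 0.5) -> float:
    """Deliberately biased objective: task success plus a term that steers
    the agent toward actions humans tend to judge as ethical."""
    return task_reward + lambda_ethics * ethics_preference_score(action_description)

print(shaped_reward(1.0, "share resources"))  # 1.45 — biased upward
print(shaped_reward(1.0, "deceive user"))     # 1.05 — biased downward
```

The point is just that the weighting term is a bias by construction, and that’s the intended effect, not a flaw.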