Showing how easy it is to make deepfakes of politicians using artificial intelligence, independent senator David Pocock creates AI videos of Prime Minister Anthony Albanese and Opposition Leader Peter Dutton.
I’m not necessarily advocating it. I put the link up because it’s a useful addition for a post like this.
That said, the idea that bans don’t work takes the ‘war on drugs’/Prohibition examples out of context.
I’m writing this from memory (of reading, not experience :p), because I don’t have time to go and reread it all, so apologies if details are wrong; the essence should be there though.
Prohibition was enforced on the population by the ideological puritans in power at the time. There seems to have been no clear popular support for its rationale, which is a driving reason why it was so hard to maintain and was eventually dropped.
The ‘War on Drugs’ should be dropped because the evidence shows the American public has not benefited from the policy; in fact, it has likely increased the costs and harms associated with the drug trade rather than diminished them. So while the ‘War on Drugs’, in contrast to Prohibition, enjoyed popular support, the health, economic, violence, and consumption trends have all moved against the policy over the period, meaning it has failed in its stated objective and needs changing.
The point of referring to these two examples when considering other bans isn’t to sit on the libertarian ideological plane and shout “All bans are bad, you won’t tread on me”, but to consider the negative implications of a proposed ban, how its reality could differ from the vision, and adjust accordingly.
There are enforced bans throughout society: driving without a seatbelt, driving on the wrong side of the road, electrician sign-offs, working with and manufacturing radioactive materials. Essentially anything enforceable by the police and courts can be argued to be a ‘banned practice’.
A ban targeting political party practices is far more enforceable than a population-wide ban; it’s a smaller ‘market’, with known players, to regulate. I believe lobby groups in Aus also have to identify themselves when they put out attack ads.
All that said, a ban wouldn’t stop all AI use in political advertising, but it would set the tone, and that means a lot. We as a society can’t stop all murders, but we can build up barriers against murder being used as a legitimate tool for pursuing one’s goals.
Okay, so I have a few points in rebuttal, but I think we’re generally on the same page. I would absolutely support a ban on political parties using deepfakes of opponents in attack ads or otherwise broadcasting them. In fact, I would support more than just fines. If it was discovered after the polls, I would support a full recall of the election. I’d suggest deregistration from that election as well, but I’d hope the voting public can show its distaste for that behaviour. I’d also support some level of required due diligence for news media in ensuring what they’re publishing is real, though there has to be consideration of the suppression of important information. Can you imagine if the Watergate tapes were never released because it couldn’t be proved beyond a shadow of a doubt that they were real?
So I guess that brings me to my problem with this petition. It seems to be asking for a general ban on the entire population, and that’s just not something I can support. There are, and should be, higher standards of ethics expected of both of those groups (parties and media), but I don’t think they should be enforced on the average citizen, who just doesn’t have the ability to get that stuff in front of eyeballs without help.
The other side of this (I know we’d basically be pissing into the wind with how small we are) is that regulations targeting the companies themselves need to be a part of this. You should never be able to type the name of a notable figure (or anyone, really) into a generative AI and get it to spit out an image/video of that person. It’s being used to make porn of celebrities, which is incredibly damaging, and there have now been cases of students creating it of other students by feeding it pictures. If AI companies won’t create safeguards, we need to make them. As it stands, it requires an immense amount of compute to train picture/video-generating AIs that can fool people, so targeting larger actors makes the most sense.