Photo by Andrea De Santis on Unsplash.
We are committed to doing our part as technology companies, while acknowledging that the deceptive use of AI is not only a technical challenge, but a political, social, and ethical issue and hope others will similarly commit to action across society…
— from “A Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, February 16, 2024.
Earlier this month, a motley group of tech firms signed a voluntary agreement to try to keep us all safe from deception during the 2024 election. Or, as safe as humanly possible.
“Good luck with that,” he said cynically.
At least 22 tech companies — including OpenAI, Google, Meta, Microsoft, Adobe, IBM, and TikTok — signed off on ‘A Tech Accord to Combat Deceptive Use of AI in 2024 Elections’. Given that these firms themselves created the problem of AI deceptions and the instant global delivery of lies, maybe it’s a baby step in the right direction.
Or maybe it’s just that tech company owners are struggling with guilty feelings, and they hope this ‘Accord’ will help them sleep better.
You can download the ‘Accord’ here.
As the document states, we will have a problem this year with AI-generated “deep fakes” — articles and photographs and videos that make it appear that our political leaders and potential political leaders are doing and saying things they might never have done or said. (Although, maybe they did, in fact, do or say those things? Who really knows?)
The document also mentions “cheap fakes” which have been around since time immemorial. Politicians and their accomplices don’t need AI to fool the public. The public actually wants to be fooled. We want someone to tell us, “Elect me, and everything will be just peachy.”
Whether they use cheap fakes or deep fakes, what’s the real difference?
Since the start of the year, more than a dozen states have introduced bills to combat AI-generated threats. Well, that’s going to be a waste of time, IMHO. They might as well pass laws that prohibit lying, for all the good that would do.
I suspect what’s really going on here is, the tech companies think we hate them for spreading lies more effectively than ever before in human history. They probably got that impression mainly from articles and editorials in the Lamestream Media, the very folks who used to have a corner on the lie-distribution business, and who are now jealous because AI does such a better job of it.
I wish I could grab the tech companies by the shoulders, and shake some sense into them. Guys, we love you! No previous technology was nearly this good at supporting confirmation bias.
If this is really just a guilt issue on the part of tech companies, a good therapist would be more helpful than spending a lot of time trying to determine what is real and what is fake. Sometimes ‘real’ is good and sometimes it’s not. Ditto, ‘fake’.
I found a piece on Psychology Today about how women can effectively stop feeling guilty, and it mentions eight approaches that would probably also work for tech companies.
Allowing for the fact, of course, that it’s sometimes a good thing to feel guilty.
The Lamestream Media has put considerable energy into warning us about AI and its possible dangers. (I admit to being part of that journalistic effort… but only regretfully.) But the real problem isn’t AI… or lies spread via social media algorithms… or stolen elections… or hate-mongering candidates.
The real problem is democracy.
What were our Founding Fathers thinking? Did they really think a vast collection of misinformed, apathetic, superstitious, self-absorbed voters could elect a competent government?
Amazingly enough, they made us believe we could.
So they can’t claim to be surprised if we believe a lot of other silly things.