Swimming upstream on the Anthropic / OpenAI debate
There is a ton of opinion being expressed about Anthropic's recent decisions and about OpenAI's agreement with the U.S. Government. Most opinions I have read take the position that Anthropic is right and OpenAI is wrong: one company claimed the moral high ground while the other made an opportunistic decision that was sloppy, even immoral, and self-destructive with customers. I’d like to swim upstream on this narrative for just a moment or two.
Two points seem to be at the center of the decisions and the public response: mass surveillance and unfettered military use. I find a multitude of questions in the public response. First, are we even capable, as individuals, of recognizing when mass surveillance is in use? As a society, we appear to be quite comfortable with Planet Labs and its ability to photograph the entire earth every day; the analytic abilities of Palantir and others; even the impressive social listening offerings of Onclusive Social, Meltwater and more. All of these embrace AI for the gathering and analysis of massive amounts of human movement, behavior and conversation. And let’s not forget Netflix, Amazon, Siri and Alexa, Ring, software that tracks driving behavior, and employment agreements stating that everything done on company computers is or can be monitored. Roughly 90% of all workers in the U.S. work for a company; they are not self-employed.
As I’ve told my children for many years, there are no secrets on the web, and EVERYTHING is on the web. We may not like it, but technology has enabled so many cameras, devices and systems that it’s hard to know when anything really happens in private.
Second, the unfettered use of AI for military purposes, or, put more precisely, the demand that there be no autonomous weapons without human oversight. When we send robotic military assets into an active combat zone, are we saying we want a human in the middle of those split-second decisions? Or are we okay with an Iron Dome-style response that takes out incoming missiles? Military leaders can retain oversight even while autonomous systems respond defensively. Do we really believe we are not using AI in autonomous weapons today, in systems already deployed for defense like the Patriot missile system, Aegis, THAAD, MANTIS and others? Or are we saying autonomous weapons are fine in a defensive posture but not okay on offense?
Someone, please tell me: where is the software that gives humans oversight, in the moment of decision, over the defensive actions of these military assets? And where is the human who can keep up with these systems to make human-in-the-middle decisions when hundreds of missiles are raining down?
Something about this debate feels insufficient, as if we are being dishonest with ourselves and settling for an incomplete reckoning with our own values. I have to wonder whether our feelings and decisions will look the same when a military enemy uses these same AI capabilities against us.
As I watch public opinion roll on, I believe the jury is still out. No question, Anthropic seems to have gained from its decision in the short term. Over the long term, as wars come and go, I question whether public opinion will shift, and whether it will still favor tying the hands of militaries around the world.
--------------- AI TruthTeller note ---------------
How does any of this relate to AI detection? Take a closer look at Onclusive Social: it pulls data from 850 million sources daily, spanning media outlets, TikTok, LinkedIn and much more. For bad actors, that is an invitation to learn, train and scale fraudulent agents, personas, images and messages. AI is an amazing platform, and deception is big business that shows up in many places; personal online safety means identifying and avoiding digital deception.