Privacy, perceptions and the magic of AI

The recent brouhaha over Sam Altman’s comment about the process OpenAI uses to scan people’s interactions with ChatGPT and forward some of them to human reviewers highlights our limited understanding of AI models.  Do we really expect AI to somehow magically know the answers and to learn on its own, without human oversight and correction?  That strikes me as contrary to the entire learning process.  Is it not normal practice that children, teenagers and adults all need correction in order to reach better understanding?  And should we not expect the same process for AI models as they learn and develop?

When I was in school, my teachers frequently corrected my work.  When I trained under coaches in athletics, I expected them to correct my performance.  I expected them to give instruction and personal feedback.  I even expected stern correction, and what some might call punishment, if I broke the rules.

Why would we expect the process to be any different for artificial intelligence?

Privacy is a funny thing.  I had a conversation with a doctor some time ago in which I defended China’s right to monitor the conversations and public speech of its citizens.  Not that I want to live under such a regime or necessarily agree with how they do it, but I understood their reasoning, and I pointed out that we monitor and censor people in the United States on a regular basis.  He was shocked at my opinion, but let me explain.

Many privately and publicly owned corporations have rules and technologies that govern, monitor and censor employees in the workplace.  They record phone conversations to provide training and to have evidence of wrongdoing if someone needs to be fired.  They set rules around the distribution of pornography, hate speech and bullying.  They limit the types of information that can be shared with the public or with competitors.  And frankly, most people understand and agree with these practices.

In my home, which is private property, my wife and I set the rules of conduct for those who live here or visit.  We censor foul language.  We curtail and block certain types of entertainment.  We even ask people to leave, or not to visit at all, if they are not willing to abide by reasonable standards of conduct.  And by the way, many people in America today have Ring and similar services that use cameras to monitor and track who comes to their front door.

Are all of these things violations of privacy?  And if so, how are they different from the creators of AI models taking similar steps to teach, train, listen and engage with the users of their technology?  Is it really that different?  Or have we, as a society, decided that the terms of engagement and privacy must always adhere to our own values, opinions and perceptions?

Perhaps this is a good time to remember a quip I’ve shared with my children for several decades now: “Remember, there are NO secrets on the Web, and by the way, everything is on the Web.”  We would do well to remember that ChatGPT, Grok and other publicly available AI models are built by private companies.  They usually have profit motives and ambitions to change or influence the world.  They are not holding private conversations with us solely for our personal benefit.  They use everything we give them, in some fashion, for their own benefit and gain.  Let’s not be naïve about the “magic” of AI or the delusion of privacy.
