Private or Public AI Models? Yes, please.

Throughout the years, analysts working for me have frequently asked, do you want to see “X” or “Y”, and the answer has often been, yes!  I want both!  The same has played out as we continue to bring AI Truth Teller™ to market.  Are we building this on public or private AI models?  The answer is yes, we are doing both.

As many commercial companies have discovered, prototyping with generic data on a public model is fast, effective and extremely useful.  However, once the initial questions are answered, the pull and power of private models is compelling.  I mean, who doesn’t want the advantages of using proprietary data for model training, private discovery and IP conversations, and improved data security?  There is also the benefit of knowing your customer interactions are not being used to train a public model that is available to everyone, including competitors who can harvest your learning and data sets in order to maintain parity.  Frankly, we love the proprietary nature of what we are doing.  I believe our customers do as well.

One of the disadvantages of developing and training a private model is time.  It simply takes longer to build a large data set that is unique to your company culture, brand and customer experience.  It’s far simpler to use large data sets that have been scraped, bought and aggregated to train up a generic model.  That’s not the game we are in.  We are vetting and carefully cultivating conversations, users, algorithms, models and processes.  The end result will be a precision-built AI detection experience that provides the best possible decision support for our customers, better safety and broad AI awareness.

Yesterday I had a conversation with an executive who is nationally recognized for her work protecting children, teenagers and college students from online exploitation.  She shared her enthusiasm for our product and gave a few suggestions for features that aligned with our product road map.  She also shared information on a new chatbot and clearinghouse she is helping bring to market.  What was most notable, however, was the similarity in our efforts to take a little more time vetting model training data and outcomes.  Although there is a shared sense of urgency, it was more important to take the time necessary to get it right and to create something of high value, something that can be trusted.

This brings me to one more thought on why a private model can be extremely advantageous: improved reliability and control.

It has been said that trust is the currency of choice in the digital economy.  While I believe this is true for our product, AI Truth Teller, trust may not be the currency of choice for bad actors.  When your business model is built on deception, sleight of hand, fabrications and outright lies, trust has little to do with what you are peddling.  In fact, when the end goal is extortion, kidnapping, theft and the manipulation of people’s thoughts, feelings and actions, then trust doesn’t really enter the conversation.  Trust is absolutely critical to the development and use of AI Truth Teller.  Getting it right is a must-have.  We know we are early stage and our models are still learning.  Tuning and vetting processes are still being defined and implemented.  However, the product is deeply supported by engineers with a wide range of skill sets, and we are seeing impressive gains.  I hope you will join us!

We believe the future of AI is immersive and will impact every facet of society.  We likewise believe that AI detection and awareness are necessary for people to make informed decisions about how and where they interact with AI.  AI is going to be deeply personal, and people need tools to navigate this new layer of pervasive technology.  AI Truth Teller is being built for that life-changing journey, which is why our answer to the question is yes, please, private and public.  We want and need both!

