Constitutional AI, HR and Two-Way Text Messaging

Several months ago my company bid on a project to provide two-way messaging services for a mid-sized city government HR department. Although the city proper has 200,000+ residents, the surrounding metropolitan statistical area (MSA) is home to 1.2M people. It's a nice-sized area.

The HR department wants to use SMS in their recruiting activities. This is a nice idea, but it opens up a bit of a Pandora's box. I mean, how will the department actually know they are conversing with a real person? And on the other side, how will applicants know they are actually talking with someone in the human resources department? AI is everywhere and becoming deeply embedded within phones, social media, marketing automation solutions, call centers and everywhere else.

Related to this, over the last several months I've been playing with the idea of constitutional AI. Constitutional AI is an approach to artificial intelligence that creates "guardrails," or rules of the road, that AI must live by, assuming, of course, that we actually believe AI will be contained within specific rules. Generative AI by definition has the ability to learn and create, so why can't it learn to work its way around those guardrails? But I digress…

The idea is to establish clearly defined rules that govern how AI is used and how it responds. So let's test this a bit with two simple HR rules (a rough sketch of how they might be encoded follows below):

Rule #1: Do not make decisions based on race, gender, age, religion, or other protected characteristics.

Rule #2: Prioritize candidates based on skills, experience, and relevance to the job requirements.
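
To make the idea concrete, here is a minimal sketch of how these two rules might be expressed in code around a screening step. Everything in it (the field names, the weights, the candidate record) is hypothetical, and real constitutional AI systems bake their constitution into model training and self-critique rather than a simple filter. Still, the sketch shows the intent: protected characteristics never reach the scoring logic.

```python
# Hypothetical sketch of Rules #1 and #2 as a screening filter.
# Field names, weights, and the candidate record are illustrative only.

PROTECTED_ATTRIBUTES = {"race", "gender", "age", "religion"}  # Rule #1

def screen_candidate(candidate: dict, required_skills: set) -> float:
    """Score a candidate on skills and experience alone (Rule #2).

    Protected attributes are stripped before scoring so they cannot
    influence the result, per Rule #1.
    """
    visible = {k: v for k, v in candidate.items()
               if k not in PROTECTED_ATTRIBUTES}
    skill_overlap = len(required_skills & set(visible.get("skills", [])))
    years = min(visible.get("years_experience", 0), 10)  # capped: long tenure can proxy for age
    return skill_overlap * 2.0 + years * 0.5

candidate = {
    "name": "A. Applicant",
    "skills": ["payroll", "benefits", "HRIS"],
    "years_experience": 6,
    "age": 52,       # present in the record, but never scored
    "gender": "F",   # likewise ignored by the scoring logic
}
print(screen_candidate(candidate, {"payroll", "HRIS", "onboarding"}))  # 7.0
```

Note what the sketch cannot do: stripping the protected fields does not remove proxies for them. Names, zip codes, schools attended, and even years of experience can correlate with race, gender, or age, which is exactly the tension the scenarios below run into.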

Great, these are pretty straightforward rules and make sense for many organizations, but what about a mid-sized city that was settled predominantly by white, Western European immigrants? The area's population still reflects those historical immigration patterns, and the area is filled with traditional thinking where women stay home to raise children and men go off to work. The city leaders have decided they want a more diverse workforce. How do they accomplish that when Rule #1 requires their AI and HR hiring practices to ignore race and gender?

What about a predominantly black area of a large metropolitan city? A manufacturing company has set up shop in the community and needs new workers. The majority of their workers come from the surrounding community, and very few of them are not black. How do they go about recruiting a more diverse workforce, even while employing the latest benefits of AI, when the AI tool they have selected for HR activities has been built with constitutional rules that say AI should be blind to race?

And what about religious organizations? In certain circumstances it is still legal to hire based on religion (e.g., churches and religious schools). How do they go about embracing AI to recruit people with common values and common faith if the platform they have selected has been coded with constitutional rules that do not allow decisions based on religion?

One of the criticisms I hear about Generative AI is that AI is simply learning and mimicking our human biases.  The story line goes on to say that when AI is given the opportunity to ingest everything available on the Internet, it will simply parrot and regurgitate the many biases we already have.

Well, yes, to some degree that will happen. Our language is filled with descriptive words that convey bias. However, it is very difficult to discern intent and feelings from the black and white words on a screen or in a book. And we need to remember, AI does not have feelings. Having said that, do we have a bit of a dichotomy if humans are trying to make AI "more human" when our ability to identify and understand bias is part of what makes us human?

Returning to the topic at hand - AI, HR departments and two-way text messaging - one of the beauties of two-way SMS is the real conversation that occurs between two people. Yes, there are guidelines and laws to be followed, but humanity needs to keep the ability to be human. If we go down a path where HR software prevents humans from expressing humor, compassion, and other subtle forms of humanity, especially during the recruiting and screening process, then unique personalities, gifted individuals, and good people of all sorts will be screened and filtered out of the process.

Many years ago, I worked for a large company. Our company had hit the radar of our state Department of Labor because our employee pool did not fit the prescribed racial mix. The attorneys came in and trained the executive team. The HR professionals shifted their advertising focus and began recruiting at schools and in communities where the available workers and students fit the desired racial audiences. We beefed up our relocation budget in order to move new hires into the area and improve our ratios. Did we improve our ratios in order to appease and satisfy the Department of Labor? We did. Did we always prioritize candidates based on skills, experience, and relevance to the job description? Not always, but generally speaking, yes. At times we needed to make judgment calls based on desired outcomes. Changing our racial mix meant changing our advertising focus, but sometimes it also required a subtle shift in final hiring decisions. We always hired who we thought were capable and competent people, talented people who could survive and thrive in our environment. However, there were definitely times when we ignored the government quotas in order to hire the right people. The process took time and effort, but the decisions made didn't always fit within the defined guardrails.

As I reflect on AI and the idea that constitutional AI could prevent companies from building a better company or organization, precisely because they hire based on race, religion or other factors, I am persuaded that AI needs to be a tool in the hands of people. It should not be allowed to operate independently in a vacuum. And therein lies the challenge: if your HR screening is built on an AI system that has billions of parameters, how can you tell what type of decisions are being made in the screening process? Perhaps this is what we call blind faith, and we move forward hoping that everything will be okay.
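
One partial answer, sketched below purely as an assumption about how such a tool could be wired (none of these field names come from any real HR product), is to force the AI to leave a trail: log every recommendation along with the model's stated rationale, and hold it as "pending" until a named human signs off.

```python
# Hypothetical sketch of a human-in-the-loop audit trail for AI screening.
# The record layout and workflow are assumptions, not any vendor's API.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ScreeningRecord:
    candidate_id: str
    ai_recommendation: str          # e.g. "advance" or "reject"
    ai_rationale: str               # model-stated reasons, logged verbatim
    human_decision: str = "pending" # nothing happens until a person decides
    reviewed_by: str = ""
    timestamp: str = ""

def log_for_review(rec: ScreeningRecord,
                   path: str = "screening_audit.jsonl") -> None:
    """Append the AI's recommendation to an audit log. A human reviewer
    must later set human_decision before the candidate's status changes."""
    rec.timestamp = datetime.now(timezone.utc).isoformat()
    with open(path, "a") as f:
        f.write(json.dumps(asdict(rec)) + "\n")

log_for_review(ScreeningRecord("c-1042", "advance",
                               "Skills match 3 of 4 requirements."))
```

An audit trail doesn't explain what the billions of parameters did internally, but it does keep the final decision, and the accountability, with a person rather than with the tool.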

On the flip side, for job seekers, if the HR department contacts you via text messaging or email, how will you know if you're talking to AI or a live person? This may seem obvious and easy with small companies, but if you are applying to large companies, like the Fortune 500, it may be much more challenging when you're contacted by automated processes and AI bots. Welcome to the new world of gatekeepers - and if you can't tell, get yourself some new tech that helps you know if you're interacting with AI or with real people.

5/30/25 update:  https://www.linkedin.com/news/story/ai-is-now-conducting-job-interviews-7364026/


Simple, black-and-white decisions are hard to find.