The corporate world entered a new era with the launch of ChatGPT.
With its generative AI technology, the buzzy bot could write code, generate images, and compose emails in a matter of minutes. Suddenly, the days when employees meticulously built presentations and combed through their inboxes by hand felt like the distant past.
Businesses quickly embraced the technology, drawn by the promise of increased revenue and efficiency. According to a May survey of more than 1,300 enterprises by consulting firm McKinsey & Company, the share of businesses that regularly use generative AI has risen to 65%, up from the previous year.
But with great power comes great responsibility: businesses that profit from generative AI also need to make sure it is governed responsibly.
A chief ethical officer can help with that.
A critical role in the age of AI
The specifics of the position differ from company to company. But in general, the role involves assessing the potential effects a company's use of AI may have on society at large, according to Var Shankar, chief AI and privacy officer at Enzai, a software platform for AI governance, risk, and compliance. How does it affect customers, beyond the business and its profit margin? What impact does it have on individuals globally? And, he said, how does it affect the environment? The next step is to build a program that scales and standardizes those questions each time AI is used.
The position gives programming whizzes and policy nerds a foothold in the rapidly evolving tech sector. It also frequently comes with a substantial annual salary in the mid-six figures.
But right now, employers aren't filling these positions quickly enough, according to Steve Mills, chief AI ethics officer at Boston Consulting Group. Risk and principles are discussed a lot, he believes, but little is done to operationalize them inside businesses.
A C-suite level responsibility
According to Mills, a candidate for the position should ideally bring four areas of expertise: technical proficiency with generative AI, experience developing and deploying products, familiarity with key AI rules and regulations, and a substantial track record of hiring and decision-making inside an organization.
He said he sees midlevel managers placed in charge far too often. Despite their knowledge, drive, and enthusiasm, these managers usually lack the authority to make organizational changes or to unite the legal, business, and compliance teams. In his view, every Fortune 500 company that uses AI extensively should assign an executive to manage a responsible AI program.
Shankar, a lawyer by profession, said a particular educational background is not necessary for the position. The most crucial qualification is the ability to understand a company's data. That means knowing the ethical ramifications of the data you gather and use: where it originates, how it was used before entering your business, and what kind of consent was obtained.
He cited healthcare organizations as one example: without a solid understanding of their data, they may inadvertently reinforce biases. According to a study published in Science, hospitals and health insurers that used an algorithm to determine which patients would benefit from "high-risk care management" ended up prioritizing healthier white patients over sicker Black patients. That is the type of error an ethics officer can help businesses avoid.
Collaborating across businesses and sectors
Those in the role also need to communicate confidently with a wide range of stakeholders.
Christina Montgomery, vice president of IBM, leader of the company’s AI Ethics Board, and chief privacy and trust officer, revealed that her days are typically filled with client meetings and events in addition to other duties.
She remarked that she felt they had a great chance to shape the future, which is why she has spent a lot of time, especially recently, speaking at events, engaging with legislators, and serving on external boards.
She is a board member of the International Association of Privacy Professionals, which recently introduced a certification program for AI governance professionals aimed at aspiring leaders in AI ethics. She also talks with other chief ethics officers and government officials.
She believes regular communication and the sharing of best practices are vital, and that this happens frequently among firms. One of the most important aspects of the job, she said, is gaining a deeper understanding of what's going on in society.
She expressed concern that the current regulatory environment lacks worldwide interoperability in the standards, expectations, and norms that businesses must comply with. "That is not how we can function in the world. Thus, discussions between businesses, governments, and boards are crucial at this time."