Bias is present in all aspects of the enterprise, not just the human side. Here’s how tech can have its own biases.
TechRepublic’s Macy Bayern spoke with Aarti Borkar, vice president of IBM Security business and strategy, at Mobile World Congress 2019 Los Angeles about artificial intelligence bias and how to overcome it. The following is an edited transcript of the conversation.
Aarti Borkar: Bias is inherently a part of every one of us, and if we write code, some of that bias can go into it. Historically, if you think about what bias means in the workforce, it's that recruiters or managers had a bias and hired people who "seemed like the right people for the job," because of a fun fact called gut feel. If you convert that into something mathematical and ask, "Why would you hire someone? What's your reason for hiring someone?" and think about it carefully, it's generally skills. And skills are wide. There are hard skills: "Do they know how to program?" "Do they know how to do accounting?" "Do they know how to be a meteorologist?" Whatever it might be. There are soft skills: they have to work in a team, lead a team, have certain mannerisms, certain behaviors. And there are skills derived from experience, from doing something for X number of years.
It is actually mathematically possible to move that into an algorithm, if you have the data: What are all the skills in the world? What new skills are developing? Do you have those skills or not? You can build a skills-based story for a variety of things: how candidates find jobs, how jobs are offered, who you recommend for a job, being the sidekick to a recruiter, doing promotions, doing raises, deciding what you are paying people for. If you focus all of that on what makes sense for you, for the individual across from you in the job, and for the company as a whole, and center it on skills as we define them, it automatically takes away the things I didn't talk about. Skills, interestingly, do not have an age, a race, or a gender. Certain skills are developed in particular ways, just by doing something for a period of time or because you were raised in a particular place, but that is simply part of who you are.
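The skills-based paradigm described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not IBM's system: candidate names, job requirements, and the overlap-based score are all assumptions made for the example. The key property is that the candidate records contain only skills, so age, race, and gender cannot enter the ranking.

```python
# Minimal sketch of skills-based candidate matching (hypothetical example).
# A real system would draw on a maintained skills taxonomy and richer data.

def match_score(candidate_skills, required_skills):
    """Fraction of the required skills that the candidate has."""
    if not required_skills:
        return 0.0
    return len(candidate_skills & required_skills) / len(required_skills)

# A job defined purely as a set of skills.
job = {"python", "accounting", "team leadership"}

# Candidate records contain skills only -- no age, race, or gender fields.
candidates = {
    "candidate_a": {"python", "accounting"},
    "candidate_b": {"meteorology", "team leadership"},
}

# Rank candidates by how well their skills cover the job's requirements.
ranking = sorted(candidates,
                 key=lambda c: match_score(candidates[c], job),
                 reverse=True)
```

The same score could sit behind any of the workflows mentioned: recommending candidates to a recruiter, or flagging skills gaps when deciding promotions and raises.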
If AI is trained in that fashion by the right people, which means not computer scientists alone but computer scientists working with behavioral scientists, neuroscientists, et cetera, to focus on a skills-based paradigm for everything we traditionally call HR, you end up with AI bringing diversity into the workforce, instead of worrying about the bias it could cause.
People ask me all the time, "Hey, is AI bias even a thing in cyber?" The point is, it is. If you think about what bias means, in simple words it is a blind spot: because of my background, my thought process, my education, or my focus, I do not notice something that should be part of the decision-making for a particular problem. So if I'm a white hat in the cyber world trying to program a system, and I have only been exposed to a certain type of cyber crime, or I've only ever lived in a particular geography and seen crime committed by certain types of bad actors, then I naturally do not know about other kinds of tradecraft. That creates bias, because the tool I built will only be looking for one part of the problem, which means if somebody else attacks, have at it. Field day.
Turning that around means you need two things. First, you need data about the diversity of the issues you're trying to prevent. That means you're looking at incidents and telemetry from different parts of the world, from different types of companies that have been attacked in the past, across sizes, public sector versus private sector, et cetera, really wide. Second, the team you have should be a combination of computer scientists and people who have been fighting cyber crime in the SOCs around the world, or converted black hats. These are people who have been aware of and introduced to a wide swath of issues: they may have gone to different schools, learned from different people, and fought different types of scenarios. If they collectively train the AI system to spot incidents, you now have a scenario where the blind spot of one is the focus of another, and the cognitive diversity of the group ensures you don't have blind spots, making that AI system far more powerful than any one of those individuals themselves.