Dr Bethany Waterhouse-Bradley said technological progress was being spearheaded by white, male, middle-class computer engineers. That bias can show itself in sensors which find it harder to detect black hands than white ones, the lecturer in social policy at Ulster University said.
She added: “Ultimately, if you have designers who don’t have life experiences with various ability issues, various race and class issues and gender issues, we see repeatedly in history that society is designed for the people in the industry that it reflects and the tech industry is still very male-dominated.
“It is still very white and middle-class, and even Stanford (a university in California) has introduced a new artificial intelligence humanity institute and there are no black people represented at all, even though the idea of ethical artificial intelligence and some of the most leading research on that has been done by black and minority ethnic people.
“So, it is getting inclusiveness across the board so that people understand the lived experience that they are trying to design for.”
Computer algorithms are susceptible to bias because they are drawn up by humans, a discussion organised as part of the Festival of Politics and Ideas at Ulster University's York Street campus heard.
Experts used the example of an Uber self-driving vehicle which struck a woman pushing a bicycle across a road to illustrate a discussion about the limitations of artificial intelligence. The human “driver” was observed not paying attention because the vehicle was autonomous.
The idea behind artificial intelligence, experts at the university said, is that it works best when humans work alongside computers rather than leaving decisions solely to machines.
Expecting humans to stay attentive when they are not needed 99% of the time, as in a driverless car, is a difficult proposition.
Dr Waterhouse-Bradley said: “These are very human decision-making processes, and we have a right and an ability to engage in them that is really important for making these ethical decisions that are best for most people in society.”
Debate is continuing about how to regulate artificial intelligence.
Dr Waterhouse-Bradley added: “Some people argue that transparency is not the issue and that regulation and having a minimum standard of what is acceptable and how we challenge those things is the biggest issue.
“Transparency helps with those things, but accountability and regulation would be quite consistent in terms of what the best outcomes for the most marginalised people would be.”