Google News’ algorithms associated “he” with “doctor” and “she” with “nurse.” Microsoft’s AI chatbot Tay pledged allegiance to Hitler within hours of being online. COMPAS, a risk-assessment programme, predicted black defendants were more likely to commit further crimes than they actually were.
Artificial intelligence software is typically written by young, white, privileged men, with consequences for how the resulting systems learn and behave. But the bias carried by these algorithms may only be the tip of the iceberg when it comes to tech’s impact on society. “Artificial intelligence can do good,” says Ayesha Khanna, CEO of ADDO AI. “It can reduce disease, it can democratise access to infrastructure for the poor, but tech is a double-edged sword. And unless we manage it carefully, it could also do harm.”
AI algorithms have repeatedly been racist, sexist and, well, biased. The problem is, the industry itself does not even know what lies under the surface. “In the world of AI, it is common knowledge that there are potential issues and pitfalls with the technology,” says Heather Evans, advisor in advanced technologies for the Ministry of Economic Development and Growth of Ontario, Canada. “But there is not yet a good understanding of what these broad issues are.”
However, awareness is growing, and there are ways to fix biased algorithms. “It is never too late! These codes are written by human beings. A lot of these biases come from poor data. You have to add more data, diversify the data, and retrain the model,” says Khanna.
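Khanna’s remedy can be illustrated with a deliberately tiny, invented example: a toy “model” that simply predicts the most frequent label seen for each word. The corpus, the counts, and the pronoun–profession pairing below are all hypothetical, chosen only to show how skewed training data produces a biased prediction and how adding diversified data and retraining changes it.

```python
from collections import Counter, defaultdict

def train(pairs):
    """Toy 'model': for each input token, remember its most frequent label."""
    counts = defaultdict(Counter)
    for token, label in pairs:
        counts[token][label] += 1
    return {token: c.most_common(1)[0][0] for token, c in counts.items()}

# Hypothetical skewed corpus: 9 of 10 "she" examples carry the label "nurse".
skewed = ([("she", "nurse")] * 9 + [("she", "doctor")] * 1
          + [("he", "doctor")] * 9 + [("he", "nurse")] * 1)

model = train(skewed)
print(model["she"])  # -> nurse: the skew in the data becomes the prediction

# The remedy: add more, diversified examples and retrain the same model.
augmented = skewed + [("she", "doctor")] * 10 + [("he", "nurse")] * 10
model = train(augmented)
print(model["she"])  # -> doctor: the stereotyped association no longer wins
```

The point is not the (trivial) model but the workflow: no code changed between the two runs; only the data did, which is exactly why auditing and diversifying training data is a viable fix.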
Khanna can also imagine AI looking after itself eventually. “AI could be programmed to inspect other algorithms as they evolve and get fed more data.”
A diverse company culture not only helps create products that are fit for a broader audience, but also products that last longer. “If the objective of a team is to create a product which delivers a service, then your customers are probably a diverse group of people,” Evans says. “You need to understand them, and it is very hard to understand the perspective of someone whose life experience is so different than your own.”
So why does the tech industry have such a hard time creating truly diverse workplaces? Some insiders blame the pipeline. “In engineering, there are definitely fewer female candidates with minority backgrounds who have a wide range and depth of experience,” explains Pavan Kumar, co-founder and CTO of Cocoon Cam, which develops smart monitors that watch over sleeping babies. Kumar knows from first-hand experience how hard it is to create diversity in a start-up: 90 percent of the applications he gets are from men. What, then, is the best way to hire people of different backgrounds, ages, genders, and perspectives?
According to Zabeen Hirji, advisor on the future of work at Deloitte, the answer lies in changing the human resources department. “When you are going to hire from universities, you should take care to attract a diverse group of students,” she says. “And that means that the people you send on campus recruiting visits should actually be diverse.”
Khanna, meanwhile, argues that the solution lies in broadening the company’s reach. “I look increasingly at hiring digital talent, people who work remotely,” she says. “The moment I expanded my horizons, both in terms of geographical boundaries and whether I was hiring someone full time, part time, or as a consultant, my talent pool got much bigger, and there was a higher chance that I found diverse talent.”
Just hiring a diverse team is not enough: Companies must also include everyone in the debate. “As a leader, you want to empower your team so that they can have an opinion, so that they are heard,” says Evans. “Because it is one thing for someone to have a comment, and it is another thing to be taken seriously.”
Everyone, in other words, is responsible for creating representative AI. “We have to demand transparency and accountability with our algorithms,” says Khanna. “We cannot be passive: We have to force and compel ourselves as human beings, but also the companies and the governments, to provide that sort of accountability.”