What Does It Mean to Build AI Ethically and Responsibly?

Over the past few years, companies large and small — in industries ranging from industrial manufacturing and biotechnology to consumer electronics and health care — have touted the transformative impact AI will have on their businesses and on humanity as a whole.

But as technology companies like OpenAI, Midjourney, Google and Microsoft continue to develop these technologies at a rapid pace, questions have arisen about their ethical implications.

How are these AI systems being trained and developed? What can be done to make sure they are created and implemented fairly and justly? What resources can be provided to end users to help them better understand how these technologies work?

Those kinds of questions have certainly been top of mind for researchers at Northeastern’s Institute for Experiential AI. This month, the institute will host a series of events about “Leading with AI Responsibility,” including a workshop and conference.

One goal of the events is to demystify the technology and help business leaders and the public become better informed about how AI models are actually developed in the real world, says Usama Fayyad, executive director of the Institute for Experiential AI.

“There’s a lot of misunderstanding about AI, especially in academia and the public,” Fayyad says. “The reason this is called the Institute for Experiential AI (is because) Experiential AI is our code word for ‘humans in the loop.’” He notes that companies like Google hire armies of people to review the output these AI models produce.

The series of events kicks off Oct. 17 with an invitation-only workshop titled “Shaping Responsible AI: From Principles to Practice.” The workshop will be led by Cansu Canca, the institute’s director of responsible AI practice, and Ricardo Baeza-Yates, its director of research, who together co-chair the institute’s AI Ethics Advisory Board.

In the workshop, participants will work to “define, discuss and develop the essential elements of robust RAI (Responsible Artificial Intelligence) frameworks, best practices and ‘grand challenges,’” according to the institute’s website. 

But what does it really mean to build AI responsibly? It starts with bringing a diverse set of voices into the conversation, Canca says.

“The core of the question lies in ethics,” she says. “But practicing responsible AI, developing these systems, designing these interfaces, putting them into practice in businesses, all of these require expertise from multiple perspectives. You need computer scientists who are working in this field. You need designers. You need policy people.” 

