From deciding what to watch next on Netflix to ordering lunch from a robot, artificial intelligence (AI) is hard to escape these days.
AI ethics leader Elizabeth M. Adams is an expert on the social and moral implications of artificial intelligence. She recently spoke about addressing those challenges at an Opus College of Business event, Artificial Intelligence & Diversity, Equity and Inclusion.
Here are four key takeaways.
Artificial intelligence is all around us.
From managing traffic lights to unlocking mobile phones, computers are working to aid our every move.
“Artificial intelligence is basically training a computer model to think like a human, but at a much faster pace,” Adams said.
A futurist at heart, Adams embraces AI wherever and whenever she can.
“I'm a huge proponent of a four-hour workweek,” Adams said. “If I could have technology make my coffee, turn on my screens, so I could focus on my other research, I would.”
Despite good intentions, artificial intelligence can perpetuate historical bias.
Artificial intelligence hasn’t always worked in an inclusive or equitable fashion. Adams pointed out that AI has often struggled to accurately identify individuals, objects and trends.
Some of those struggles affect our social identity. For example, software programs continue to misidentify Black women as men. Other programs have difficulty identifying even some of the most well-known faces in the world, such as Oprah Winfrey and former first lady Michelle Obama.
Other inaccuracies may affect a person’s standing in the community or financial well-being. Governments and law enforcement agencies have begun using facial recognition software at various levels, collecting data and information on citizens. Not only does this form of artificial intelligence raise privacy concerns, but it can also perpetuate bias, depending on how the technology and the underlying data are used.
In business, for example, AI bias has been found in hiring software: certain resumes can be overlooked based on the data the software is trained to value or avoid.
“We’re waking up to the challenges of AI, even though there are lots of benefits,” Adams said. “For those in vulnerable populations, now you have one more thing – this new technology – that you have to figure out how to navigate in your life.”
What is responsible AI?
As discrepancies and inequities come to light, more companies have embraced responsible AI. While an exact definition is still evolving, responsible AI aims to reduce harm and promote the equitable use of artificial intelligence for all individuals.
“It’s very important to have the right people, the right voices at the table when you’re designing your technology,” Adams said.
Adams pointed to Microsoft and Salesforce as two giants that have been working to roll out responsible AI technology with the help of their entire workforces.
“It’s not just a technical problem,” Adams said. “It’s important to have diverse voices of all disciplines.”
Meanwhile, global organizations such as the United Nations have published guidelines for companies to follow in developing AI technology.
Everyone must embrace responsible AI.
It’s not just major companies or global organizations that can bring about change. Adams stressed that everyone must embrace the new realities of working in a world with AI.
“There are lots of different opportunities to see yourself and to help fix some of the challenges,” Adams said. “Responsible AI is really starting to cascade out to the workforce, which is really, really important.”
Adams suggested people start learning about AI by hosting educational events, partnering with stakeholders in their communities, and speaking with policymakers.
But most of all, she wants everyone to follow their curiosity.
“If you like art, follow your curiosity around AI in art,” Adams said. “If you like automobiles, follow your curiosity there. Wherever you decide that AI is important, follow your curiosity.”
This event was presented by Opus College of Business – Business in a Digital World (BDW) and Diversity, Equity and Inclusion initiatives.