Tommie Experts taps into the knowledge of St. Thomas faculty and staff to help us better understand topical events, trends and the world in general.

Last month, School of Engineering Dean Don Weinkauf appointed Manjeet Rege, PhD, as the director of the Center for Applied Artificial Intelligence.

Rege is a faculty member, author, mentor, AI expert, thought leader and a frequent public speaker on big data, machine learning and AI technologies. The Newsroom caught up with him to ask about the center’s launch in response to a growing need for ethical education around AI.

First off, congratulations on the announcement of the center. What’s your vision for what this can become?

We’re partnering with industry in a number of ways. One way is through our data science curriculum. There are electives; some students take a regular course, while others opt for a data science capstone project. Through that partnership, companies in the Twin Cities that are interested in embarking on an AI journey can bring us business use cases they want to try AI out on. In an enterprise, you typically have to seek funding and convince a lot of people; in this case, we’ll find a student, or a team, to work on that industry-sponsored project, supervised by faculty. It’s a win-win for all: the company gets access to emerging AI talent and gets to try out its business use case, and the students get the opportunity to work on a real-world project.

Secondly, a number of companies are looking to hire machine learning and AI talent, and this is a good way for them to access it. We can build relationships by sending students for internships, and students who work on these capstone projects often become important hiring prospects themselves.

We’ll also come out with a number of professional development offerings. We offer a mini master’s program in big data and AI. Local companies can come and attend an executive seminar for a week on different aspects of AI. We’ll be offering two- or three-day hands-on AI workshops for someone within a company who would like to become an AI practitioner. If they are interested in getting in-depth knowledge, they can go through our curriculum.

We also have a speaker series in partnership with SAS.

In May we’ll be hosting a data science day with a keynote speaker and a panel of judges from local companies, who will review projects the data science students are working on (six of which are part of the SAS Global Student Symposium). The students will get to showcase the work they’ve done.

Why is it so important that students and the St. Thomas community engage with artificial intelligence and what it means?

Everybody is now becoming aware that AI is ubiquitous; the ship has already left the dock, so to speak. The best way to succeed at the enterprise level is to embrace this and make it a business enabler. It’s important for enterprises to transform themselves into AI-first companies. Think about Google. It first defined itself as a search company. Then a mobile company. Now, it’s an AI-first company. That is what keeps you ahead, always. Many companies in the Twin Cities are thinking about how to transform into AI-first companies, which is not just about taking a particular use case and applying AI to it. An AI-first company has certain ingredients in it. That is what we hope to impart by creating this larger awareness around the Twin Cities.

Ethics is a centering value of everything at St. Thomas. How do the things a St. Thomas student will consider about AI inform how they apply it? 

Being aware of the problems that may arise is so important. To address AI biases, we have to understand how AI works. Through these offerings, we hope to create that knowledge; once we have it, we can address the issue of AI bias.

For example, Microsoft did an experiment where it had AI go out on the web, read the literature and learn a lot of analogies. When you went in and asked that AI analogy questions: man is to woman as father is to what? Mother. Perfect. Man is to computer programmer as woman is to what? Homemaker. That’s unfortunate. The AI is learning the stereotypes that exist in the literature it was trained on.
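The analogy mechanism Rege describes can be sketched in a few lines. This is a toy illustration, not the actual experiment: the word vectors below are hand-made and hypothetical, whereas real systems learn them from huge amounts of web text. The point is only that analogies are answered with vector arithmetic, so the same arithmetic that produces "queen" will just as readily reproduce any stereotype baked into the learned vectors.

```python
import numpy as np

# Hypothetical 2-D embeddings for illustration; real systems learn
# hundreds of dimensions from web-scale text.
vectors = {
    "man":    np.array([1.0, 0.0]),
    "woman":  np.array([0.0, 1.0]),
    "king":   np.array([1.0, 0.5]),
    "queen":  np.array([0.0, 1.5]),
    "prince": np.array([1.0, 0.25]),
}

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' by finding the word whose
    vector is closest (by cosine similarity) to b - a + c."""
    target = vectors[b] - vectors[a] + vectors[c]
    best, best_sim = None, -2.0
    for word, v in vectors.items():
        if word in (a, b, c):  # don't return one of the query words
            continue
        sim = float(np.dot(target, v) /
                    (np.linalg.norm(target) * np.linalg.norm(v)))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(analogy("man", "king", "woman"))  # → queen
```

With these toy vectors the answer comes out right, but nothing in the arithmetic checks whether an association is a fact or a stereotype; the model simply reflects whatever regularities were in its training data.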

There have been hiring tools with gender bias, facial recognition tools that work better for lighter skin tones than darker ones, and bank loan programs biased against certain demographics. There is a lot of effort in the AI community to minimize these. Humans have bias, but when a computer does it you expect perfection. An AI system learning is like a child learning: when that AI system learned about relationships between men and women from the web, the stereotypes already existed in the data, and the computer simply learned from them. Ultimately an AI system serves a human; whenever it gives us certain output, we need to be aware, go back and nudge it in the right direction.
