Manjeet Rege. (Photo: Liam James Doyle/University of St. Thomas)

In the News: Manjeet Rege on the Future of AI Regulation Amid Shifting U.S. Leadership

Manjeet Rege, professor of data science and software engineering at the University of St. Thomas School of Engineering, recently spoke with WCCO Radio about the current state of AI regulations, the role of self-regulation by tech companies, and the potential changes in AI policies with the upcoming shift in U.S. leadership.

From the interview:

Host: Give us a baseline. Are there any regulations currently on AI?

Rege: Not at the federal level. During the Biden era, in late 2023, an executive order was issued that essentially provided guidelines for ethical and responsible AI deployment. But over the past four or five years, and especially in the last two with the rise of generative AI, the U.S. has taken a very market-driven approach, largely leaving self-regulation to the big tech companies. As a result, in the absence of federal-level regulation, states have started stepping in and implementing measures of their own, as we are seeing in California.

Host: Yeah, in California, they have an AI Transparency Act. So what is that, and do we need something like that in Minnesota or nationally?

Rege: I think there needs to be a balance between allowing innovation and providing a framework for responsible AI development and deployment. The California law has received some pushback from people in Silicon Valley because some of the responsibility also falls on developers. According to the initial draft of the law, from what I have read, the same underlying technology could be used both for coding and for spreading disinformation, and critics argue that you can’t hold an AI developer responsible for how the technology is eventually used.

Host: Hmm, so the creator would be responsible if anything nefarious happened?

Rege: Right. And if you look at it from a global perspective, there is the European Union’s AI Act, which really focuses on protecting consumers and their data. There are provisions in the EU law stating that if generative AI has been used to create content, that content must carry a label indicating, “Yes, this was created with AI.” As a result, you can differentiate what is real from what is fake.