The audience at the Chicago Forum on Responsible AI in Industry: FinTech
Artificial intelligence, or AI, is everywhere now. It is already proving to be a transformative tool in culture, business, and more. But how do we make sure it’s being used responsibly?
Academics have the most in-depth knowledge of AI systems and how they work. Industry professionals have — or want — the most experience using them.
“There’s only so much industry can do by itself,” said Brad Spirrison, corporate engagement lead for DPI’s Research and Development team. “There’s only so much higher ed can do by itself.”
With that idea in mind, on April 3 Spirrison and DPI Research Scientist Alvin Chin brought experts from both spheres together for the Chicago Forum on Responsible AI in Industry: FinTech. Although the official purpose was to talk about fintech, the topics of discussion over the course of the afternoon were applicable well beyond any one field.
Here are the 14 most important takeaways from the discussion.
1. There are no universal rules for AI use. “It’s all context,” said Lav Varshney, an associate professor in the Department of Electrical and Computer Engineering at UIUC. Varshney, who worked on the White House executive order on artificial intelligence signed in 2023, said the United States has looser regulations around AI than Europe, which potentially gives the United States more room for innovation.
2. Training is key. Randy Nornes, an executive vice president at Aon, likened AI to a high-performance sportscar that most people don’t know how to drive.
“They’re all in, but they don’t have a strategy for how to integrate it into their workforce,” he said. Nornes spoke in a fireside chat with DPI Faculty Member in Residence Vishal Sachdev, who had spoken on AI at DPI before. Sachdev agreed, pointing out that AI literacy should come before AI governance.
At Aon, no staff can use AI agents without certification in the ethics behind them, a practice Nornes recommends to other organizations.
3. There’s a new market. Another industry challenge, Nornes said, is the way that AI is transforming retail interactions. In the not-too-distant future, he said, if you want to buy an item of clothing, an AI agent will know your size, your budget, your favorite colors and styles, and where you like to shop. The agent will present a selection of pieces based on that profile and whatever else you tell it about why you need the clothes.
“Now the companies that sell clothes have to sell them to bots,” he said.
4. Adjust your expectations for AI — both up and down. In conversation with UIUC Associate Professor Wei Wei, Kevin Kalinich, Intangible Global Assets Collaboration Leader at Aon, cited Bill Gates regarding technological innovations. Gates said the expectations for any new technology were always too high in the first year and too low for a decade out. The same is true for AI, Kalinich said.
5. The three keys to successful AI implementation: good data, a clear objective, and an understanding of the best means to achieve the objective, according to Frank Quan, an assistant professor in the Program for Actuarial and Risk Management Sciences at UIUC and an insurance sector lead at DPI. Quan, also part of the conversation with Wei and Kalinich, discussed his research on federated learning for insurance companies, an approach that lets insurers share insights without sharing the underlying data.
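The core mechanic behind federated learning can be shown in a few lines. The sketch below is a minimal, hypothetical illustration of federated averaging (a standard federated-learning technique), not Quan's actual study: each "insurer" fits a simple linear model on its own private data, and only model parameters are averaged centrally — raw records never leave the participants. The data, participant count, and function names here are all illustrative assumptions.

```python
# Hypothetical sketch of federated averaging: participants share model
# parameters, never their raw data.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One participant refines the shared linear model on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Three insurers, each holding private (simulated) data that is never pooled.
true_w = np.array([2.0, -1.0])
datasets = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    datasets.append((X, y))

# Each round: every participant trains locally, then only the parameter
# vectors are averaged into the new global model.
global_w = np.zeros(2)
for _ in range(10):
    local_ws = [local_update(global_w, X, y) for X, y in datasets]
    global_w = np.mean(local_ws, axis=0)

print(global_w)  # recovers weights close to true_w without pooling data
```

In a real deployment the local models would be far richer and the aggregation step would run on a coordinating server, but the privacy property is the same: only parameters travel.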
6. Responsible AI is possible, but complicated. In conversation with Chin, Xiaochen Zhang, founder and CEO of Fintech4Good and AI 2030, said that safety, security, privacy, and fairness are the fundamental goals of responsible AI. And within each of those goals, there are other factors. For example, fair lending in finance depends on the data being used, the model built on that data, and how the model is implemented. Zhang reiterated the importance of workforce development — some organizations lay off staff once they implement AI agents, but only people truly have institutional memory. He also recommended developing a systematic way to report data and establishing an AI governing body.
7. Bigger is not better. Kris Hammond is the director of Northwestern University’s Master of Science in Artificial Intelligence program and the founder of Narrative Science (now part of Salesforce), a startup that used natural language generation to turn data into stories. In a panel moderated by Spirrison that also included Rajarshi Roy, an anti-money laundering expert from the IEEE Computer Society Chicago Chapter; Brendan McGinty, director of the Industry Program at the National Center for Supercomputing Applications at UIUC; and OJ Laos, director of Armanino’s AI lab, Hammond predicted an agentic model for AI. Just as you see a lawyer for some tasks, a doctor for others, and an accountant for others, the most likely future development of AI will be different agents for different areas rather than a one-size-fits-all personal assistant that will, for example, test your blood, file your taxes, and handle your legal matters.
8. Testing is everything. Roy said models and their variations need rigorous testing, run with realistic amounts of data and encryption/decryption.
9. You don’t have control. Laos said the biggest misconception companies tend to have about AI is how much control they have over what their employees — or customers — do with it. “You can’t just say, ‘these are the only use cases,’” he said, describing AI as “the floor, not the ceiling.”
10. AI is the new electricity. McGinty compared AI to electricity in terms of its potential effect on every area of life — not just business. But he reminded the audience that electricity was also scary before it became widespread.
“We can handle the scary, if we really get our collective act together,” he said.
11. It’s not magic. Building on the electricity analogy, Hammond added that just as electricity is a natural phenomenon that can be explained scientifically, AI is a set of instructions meant to do certain tasks. Transparency around what any AI model is doing and how it works makes a huge difference. “The moment you realize that’s what they do, the sooner you can incorporate them into other things,” he said.
12. “There’s more noise about AI than anything else.” Laos said that while AI-driven companies are less visible in Chicago than in, say, California, the region is not behind the curve. “A lot of it seems to be bluster, no matter where you are.” Eventually, however, that bluster will become real, as AI integrates into “every single industry.”
13. The conversation must continue. Led by Jeremy Riel, director of UIC’s TRAILblazer Lab and visiting assistant professor in its Department of Educational Psychology, the formal program ended with breakout sessions. Around the room, posters asked questions such as: “Who are all of the actors who should be involved in responsible AI strategy and practices in fintech?” and “What practical steps can fintech organizations take to embed ethical AI principles into their systems from the start rather than as a reactive measure?” Participants wrote responses on Post-it notes and attached them to the posters. In response to the question, “What concrete next steps should fintech companies, regulators, and researchers take to collaborate on responsible AI innovation?” one Post-it note read, “More networking events.”
“We’ll definitely do that,” Chin said. “This is not the end.”
14. DPI can help! Both Varshney and DPI R&D Relationship Manager Akshaya Udayashankar pointed out that DPI’s AI practice includes a GenAI for Managers workshop. “The extension mission of the university is to help companies in all sectors,” Varshney said.
Kris Hammond, Rajarshi Roy, OJ Laos, and Brendan McGinty on the panel moderated by Spirrison.
Author: Jeanie Chung