
CBJ symposium highlights challenges, opportunities of AI

Experts, business leaders discuss AI perspectives at Hyatt Regency Coralville event

Jim Chaffee, executive director of learning innovation and technology and adjunct instructor in the University of Iowa’s Tippie College of Business, speaks at the Corridor Business Journal's AI Symposium Oct. 1 at the Hyatt Regency Coralville Hotel & Conference Center. CREDIT RICHARD PRATT

It’s hard to consider any trend in today’s business world without acknowledging the impact – or the potential impact – of artificial intelligence. Whether embraced or feared, AI continues to emerge as a game-changer across a wide swath of society, and business leaders with an eye to the future ignore the potential of AI at their peril.

AI experts gathered with about 200 area business leaders Oct. 1 at the Hyatt Regency Coralville Hotel & Conference Center for the Corridor Business Journal’s AI Symposium, seeking insights on how this explosively expanding technology will revolutionize the way we work – and the practical and ethical challenges it may bring.

Speakers at the symposium included Greg Edwards, founder and chief technology officer of Canauri, a Cedar Rapids-based computer security firm; Jim Chaffee, executive director of learning innovation and technology and adjunct instructor in the University of Iowa’s Tippie College of Business; attorneys Joe Leo, Lee Henderson and Chris Proskey of BrownWinick Law in Coralville; Joseph Engler, chief AI scientist and principal fellow in AI at Collins Aerospace; and Sandeep Guri, a senior AI engineer at Google and a Coe College graduate.

Each offered their own perspective on the ever-evolving nature of AI and its high-profile offshoots, including ChatGPT.

Greg Edwards, Canauri

Mr. Edwards outlined significant developments in the AI landscape over the past year – and compared his assessments to those he made just a year ago.

“When I looked back last year, I thought of AI as certainly revolutionary, on the order of electricity or the Industrial Revolution,” he said. “In reflecting on this year, I think it's more along the lines of the printing press.”

When Johannes Gutenberg developed the printing press in 1440, Mr. Edwards said, the pace of technological innovation accelerated for the next several centuries.

“I think that's a better analogous way of looking at how AI is going to change the world,” he said. “Prior to the printing press, people didn't have access to information the way that they did through the 1800s and 1900s, and that's led to the innovations that we have today. Now AI is going to supercharge the intelligence that we have. Hopefully we keep it under control. And I believe it is going to lead to monumental change in the world.”

Mr. Edwards noted that ChatGPT wasn't the first AI large language model, but since its emergence in November 2022, LLMs have already grown into a $5.1 billion industry that “didn’t exist just two years ago,” with companies utilizing those LLMs to create agents to “actually be able to perform something.” That industry, he said, is expected to reach $47 billion by 2030, “so the pace of change that we're seeing is just going to accelerate.”

Mr. Edwards compared the pace of AI adoption to that of other technological breakthroughs. For example, it took 16 years for cell phones to reach 100 million users. Once internet speeds improved, newer technologies were adopted faster – Instagram took about 30 months to reach 100 million users, and TikTok topped 100 million users in just nine months. By comparison, ChatGPT took just two months to reach 100 million users, “and that's the sort of exponential change that we're going to see over the next 25 years,” he predicted.

Mr. Edwards referenced the classic cartoon “The Jetsons,” saying he had once assumed that robots and flying cars would be commonplace by the year 2000. “We don't have any of that yet, but I do believe that it is coming,” he said, predicting that sentient AI – a hypothetical AI system that can experience and think like a human – will be in place in the next 25 years.

Concerns about the dangers of AI have grown exponentially in recent years, and Mr. Edwards addressed the concept of P(doom), a term referring to the probability of catastrophic outcomes, or doom, as a result of advancing artificial intelligence – in this case, the estimated likelihood that AI will become sentient, or acquire the same intellectual and emotional capabilities as humans. “Some of the leaders in AI safety put their own P(doom) ranking (at) 80-plus percent,” he said, “but I absolutely believe that we will have sentient AI that can help us within 25 years.”

The past year in AI has been marked by the emergence of large language models, Mr. Edwards said, noting that in the last year, there have been 21 major LLMs released. “And it’s important to understand that it's not just OpenAI and ChatGPT, it's lots of other companies,” he said. “Those 21 LLMs came from 17 different companies.” Of those 21 new LLMs, 16 are built on open-source frameworks, Mr. Edwards said, which “levels the playing field and allows almost anyone to be able to create technology using these LLMs.”
Mr. Edwards identified several emerging trends in the AI realm: agents, as exemplified by models such as the VoicePlus messaging platform, which have used LLMs to create useful products; RAG (retrieval-augmented generation) AI, which uses client-supplied data to provide better and more tailored results on other platforms; development, allowing IT professionals to automate and accelerate their work; and cybersecurity, improving outcomes on both the incoming and outgoing levels.

What AI is currently missing, Mr. Edwards said, is a planning and reasoning component. “That's a big leap that we need to get to sentient AI,” he said, “so I see this over the next three years coming to somewhat of a plateau, and then a big spike again.”
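For readers unfamiliar with the retrieval-augmented generation pattern Mr. Edwards mentioned, the sketch below shows the basic idea in Python. It is a minimal illustration, not anything presented at the symposium: the sample documents are invented, the naive word-overlap ranking stands in for the vector-embedding search a production system would use, and the final call to a language model is left out entirely.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Illustrative only: a real system would embed documents in a vector index
# and send the assembled prompt to an LLM; both are stubbed out here.

from collections import Counter

# Hypothetical client-supplied documents the model should ground answers in.
DOCUMENTS = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Support hours are 8 a.m. to 5 p.m. Central, Monday through Friday.",
    "Enterprise plans include a dedicated account manager.",
]

def tokenize(text: str) -> Counter:
    """Split text into a bag of lowercase words, stripping basic punctuation."""
    return Counter(word.strip(".,?!").lower() for word in text.split())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query and return the top k."""
    q = tokenize(query)
    scored = sorted(docs, key=lambda d: sum((tokenize(d) & q).values()), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to an LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    print(build_prompt("When can I get a refund?", DOCUMENTS))
```

The point of the pattern is in the last step: because the prompt carries the client's own documents, the model can answer from that material rather than from its general training data, which is what makes the results "more tailored" in the sense Mr. Edwards described.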

Jim Chaffee, Tippie College of Business

Describing himself as “a strong advocate for extended reality and AI in education,” Mr. Chaffee said Tippie has been “integrating AI in the College of Business for many, many years.”

“It's been mostly focused on neural networks and machine learning and deep learning,” Mr. Chaffee said, “and we've been doing it with different kinds of applications in different classes, like our data optimization courses and our applied optimization courses.”

The school’s embrace of AI technology continued when generative AI was publicly introduced in November 2022, Mr. Chaffee said.

“We started down that path of what it is we're going to use AI for, not only to help our students, but also to educate our students about what's coming next,” he said. “So we jumped in. We said we're all in. We're going gangbusters. Not everybody did. This isn't an ‘everybody is on board and everyone's excited about it.’ This is a collegiate focus with the appropriate people who are excited about utilizing AI in their classes, in their research.”

Mr. Chaffee discussed a few Tippie courses that are integrating AI components: a text analytics course, where the large language model is “not just to assist, but as part of the assignment”; auditing and advanced financial accounting courses, in which students are allowed to use AI and submit both their AI-generated assignment and their final assignment; human resources management, using AI to develop job descriptions and specifications; and microeconomics, using an AI tutor not to provide answers but to prompt students through problems.

He also noted that Carl Follmer, associate director of Tippie’s accounting writing and communications program, used poe.com to develop an AI bot known as Teammate.Impy, which is designed to be an argumentative member of any group. He cited an example using the prompt “are bees good for us”: while Impy acknowledged the sentiment, it responded with concerns about bees posing allergy risks for some people. “When you are in a group, especially in a collegiate experience, you need that dissenting opinion, and Impy provides that,” he said.

AI tools are also being used to help students improve their corporate presentation skills and to help professors more thoroughly evaluate the depth of students’ understanding of their subject matter. It’s not just about developing AI programmers, he said, but about the entire nature of AI literacy and the tools it can provide to improve the workforce.

“We're trying to make sure our students are prepared when they come to you, for not only the things that you need them to know today, but the things you don't even know you need them to know,” Mr. Chaffee said. “AI is a great example. We're preparing them to come into the workforce and be able to say, ‘I know what that is. I might not know exactly the newest thing, but I've worked on something similar, or I've had experience with that.’”

The ethical challenges of AI implementation are also high on the list for Tippie, Mr. Chaffee said.

“We have more time to spend on the why versus the how,” he said. “How do you do something? That's great. We've been teaching that for many, many years, and we'll continue to teach the how. But why are we doing something? Why are we working with AI? Why are we developing data analytics skills? Ethics and responsibility is a key piece of what we're trying to bring. There is a huge ethical component to AI, and a responsibility for all of us: adaptability and continuous learning. We want to make sure people are constantly coming back and saying, ‘What's next?’ so we can be there for them.”

Legal issues – BrownWinick panelists

Attorneys Joe Leo, Lee Henderson and Chris Proskey addressed the numerous legal and ethical issues that have already arisen, and will continue to arise, regarding the implementation of AI tools.

Among the areas already being considered is the role of regulatory agencies and lawmakers in the AI sphere. While much remains to be done in this area, Mr. Henderson said that in many cases, AI is already being regulated under current frameworks, including rules already in place, such as the Federal Trade Commission’s guidelines on unfair and deceptive trade practices.

“So if you have made certain disclosures online, those disclosures need to be true,” Mr. Henderson said.

Since the advent of ChatGPT, new guidelines have already emerged that prohibit using AI to discriminate against certain classes of people, mandate disclosures of AI utilization to customers, and enhance rules on data privacy.

But AI has certainly raised new concerns in the realm of intellectual property rights, Mr. Proskey noted.

“The concept of protecting intellectual property rights has been built into our Constitution, so it's always been there,” he said. “But who creates? Humans have always created new solutions which can be patentable, new names and product identifiers which can be trademarked, and new artistic works which can be protected by way of copyrights. But AI can solve problems. AI can come up with names. AI can create artistic works. So if you use AI, can those things be protected by way of patents or trademarks or copyrights? The answer is, we don't quite know yet. Patents have been filed that list AI as the inventor. And in the United States and in Great Britain, the courts have clearly spoken that AI cannot be an inventor. It requires human creativity to get patented. But now they're starting to parse how much AI was used versus how much was human involvement. And that's kind of unclear at this point in time.”

Specific legal questions have also been raised regarding how AI generates images and other artistic works. In perhaps the most high-profile example, Getty Images filed a lawsuit in the United Kingdom in May 2023 against Stability AI, claiming that Stability used more than 12 million of Getty’s copyrighted images to train its "Stable Diffusion" system, which automatically generates images, and that the images being generated by Stable Diffusion are substantially similar to Getty's copyrighted works. The lawsuit is seeking damages of $150,000 per copyrighted work – across 12 million images, potential damages of up to $1.8 trillion. A trial date in the case is set for summer 2025, but Mr. Proskey said it raises critical issues for both intellectual property and artistic expression.

“Many of the risks are too risky, and many of the unknowns are too unknown for us to really speak,” Mr. Proskey said. “But it's something that we're spending all day, every day, trying to figure out how to protect our clients (and) their competitive advantage from being infringed upon, and how do we enforce our intellectual property rights?”

AI offers business leaders a number of potential advantages in simplifying complex processes and performing certain lower-level tasks, but Mr. Henderson warned that in legal terms, it’s still important for decision-makers to develop policies that include appropriate oversight. “One of the really important things to do when you're building your AI governance model – getting your stakeholders involved, writing a policy, whatever it might be – is to never lose that human element,” he said.
“You can't just push everything to AI. You have to always make sure that it's getting the normal set of human eyes. (AI) augments your process. It shouldn't replace your process. I would think whoever's using it still has that duty.”

Issues with AI have also arisen in the legal field, with attorneys using AI tools to help write legal briefs and find case citations – some of which have ended up being improperly cited or, in some cases, completely fictitious.

“As lawyers, we tend to be very cautious in avoiding even the appearance of impropriety,” Mr. Proskey said. “So we're going to be cautious. But these things will come around the corner every day. I'm being approached by AI-powered tools to help make my job easier and provide better results. We certainly consider them, and they're helpful in ways. But they do pose risk if they're wrong, and ultimately, that falls on all of our shoulders.”

Issues with data privacy and disclosure must also be addressed as business leaders determine the best ways to employ AI tools.

“As with most things in business and law, don't have things happen by accident,” Mr. Leo said. “If you're using these tools, be thoughtful about how you're using them, why you're using them, and do a cost-benefit analysis. That’s the piece that I can't answer for most of my clients, but I can have them start thinking about it – with all the uncertainty that exists, you don't want to end up in a position where you accidentally ended up in an area that you don't want to be. Be thoughtful about how you're using these tools, keeping in mind that at some point in the future, there might be rules that get applied.”

In other presentations at the symposium, Mr. Engler from Collins Aerospace spoke on the challenges of enabling AI for complex systems, and Mr. Guri from Google offered a technical explanation of how chips communicate in a supercomputer as part of the ongoing development and infrastructure behind Google's AI and machine learning technologies.

BrownWinick Law served as the symposium’s gold sponsor, and silver sponsors included CCR, RSM and the University of Iowa Tippie College of Business.
