The Other Side of AI: Exploring the Risks & Limitations
In an age where the boundaries of technology constantly expand, Artificial Intelligence stands at the forefront, promising unparalleled transformation across industries. Yet, with great innovation comes great responsibility. As leaders, the onus is on us to not only harness the power of AI but to also navigate its multifaceted challenges with foresight and acumen.
As with every technological advancement in history, along with benefits come risks, either perceived or real. We want you to be aware of and understand those that may come into play with your organization.
We hear these questions often: What challenges and limitations have early adopters faced, and how can we learn from them? What specific risks may be relevant to my industry? My role? How can I avoid them, or in case they occur, how do I respond?
All great questions and worth exploring. The most common challenges within AI include: inaccuracy, cybersecurity, intellectual property rights, regulatory compliance, explainability, personal privacy, workforce displacement, and equity and fairness. We expand on several of these in our complimentary “5 Big Conversations” guide.
A pretty substantial list, but let’s address five risks and a few limitations.
1. Reliability
Models can produce different answers to the same prompt, raising questions about the accuracy and reliability of outputs. Reliability is a bit different from hallucination: the content may be technically correct, but different perspectives and angles on a topic can introduce confusion. So how do we know what to trust?
2. Ethical Dilemmas
We don’t know the standards or values on which the model was trained. Do they align with our firm’s culture? Integrating moral and ethical values into AI systems presents a huge challenge: how do developers prioritize ethical implications and decision-making to help avoid negative impacts?
3. Organizational Risks
What about potential organizational risks? Could integrating AI into your workforce disproportionately harm specific groups? In a recent study, 62% of respondents predicted AI will increase racial, gender, and economic disparities. For example, could benefits to management also be deterrents to frontline workers? The possibilities are many.
4. Dependence on AI
Some experts have suggested dependence on AI could turn into a risk: they believe over-reliance on AI systems may lead to a loss of creativity, critical thinking skills, and human intuition. So how do we strike a balance between leveraging AI and using employee wisdom for decision-making?
5. Impact on Emotions
Another people-oriented risk is the impact on emotions. We’re definitely on the front end of this one. How will these new technologies affect self-image, for example? Consider how employees may react to decisions and feedback from AI systems. People’s feelings about themselves may differ depending on who or what evaluates them, and that has important consequences for your colleagues.
Did you notice that four of the five issues relate directly to your employees? Yes, we’re talking about technology, but the implications for your firm likely are centered more on the people aspects of AI.
AI certainly has its limitations as well. Two areas stand out: employee adoption and technology.
Regarding employee adoption of AI, there are a couple of important barriers:
- The first is lack of trust. Many workers just aren’t ready to embrace AI in their roles.
- The second limitation is a lack of AI skills, expertise, and knowledge. Workers don’t know what they don’t know.
And regarding technology:
- The biggest is the gap between AI capabilities and human abilities. Take judgment, for example: AI lacks some reasoning skills humans have. It also has a limited understanding of context, and it can’t yet match human creativity and empathy. Can AI help in each of those instances? Absolutely. But it certainly cannot be a replacement. That’s why we strongly recommend human oversight and intervention in any AI use case you pursue at this stage.
- Lastly, I’ve previously touched on inaccuracies, and I’ll go deeper on them in our next segment. A related limitation is the difficulty of cleansing data once it’s in a model. It is very hard to eliminate bias or disinformation, for example.
In summary, there is substantial work to be done on overcoming risks and limitations involved with AI use. Our encouragement to you is two-fold:
- Continue on your AI journey with eyes wide open. Be aware of and understand the potential potholes along the way. Caution is a wise approach.
- But secondly, focus on the amazing benefits of AI integration. In our opinion, the pluses far outweigh the minuses.
If you would like further guidance on any of these topics, or on ways to increase AI adoption within your organization, please do not hesitate to contact us at info@thrivence.com.
Gary McClure is a senior consultant at Thrivence, a consulting firm specializing in strategy, leader development, organizational performance, and technology. For more than 15 years, Gary has led organizational transformation initiatives and taught leaders how to navigate successful change. He can be reached at gary.mcclure@thrivence.com.