
The Big Idea: ChatGPT Writes the Story of the Promise and Perils of Generative AI

David Salierno | May 23, 2023

It’s the artificial intelligence (AI) chatbot that needs no introduction. The fastest-growing consumer application in history, ChatGPT gained more than 1 million users less than a week after its launch last November, according to its creator, OpenAI. Now, more than 100 million users log a staggering 1 billion total website visits per month.

From suggesting meal plans to providing relationship advice to drafting resumes, ChatGPT has become a popular personal assistant for people looking to improve their lives. But with generative AI tools like ChatGPT also taking the business world by storm, how should organizations use and govern the technology?

“Everyone is going to see more and more use of ChatGPT and other generative tools in their organizations and with their business partners — and it’s going to provide a lot of benefits,” says Charles King, managing director, Internal Audit and Enterprise Risk at KPMG in Longwood, Fla. “But there are also significant risks to consider, and they need to be factored into any organization’s implementation strategy.”

Balancing caution and innovation may pose challenges for organizational leaders. ChatGPT’s business applications seem virtually limitless, and at least some use of generative AI may be necessary just to keep pace with competitors. But there are also legitimate privacy and other concerns, and mismanagement could have severe consequences. Business leaders need to carefully weigh both sides to effectively navigate this powerful but still nascent technology.

Unlocking Insights and Efficiency

Many organizations have forged ahead with ChatGPT to assist with software coding, customer service, content creation, and more. Some are continually experimenting with generative AI to build compelling use cases and determine how best to leverage the technology.

One application King cites is analyzing large data sets to provide insights and guide decision-making. For example, one of the largest U.S. financial institutions is using generative AI’s vast data-crunching capabilities to inform software development based on user requirements. “The users ask and answer questions, generating a large set of specifications and preferences,” he explains. “The AI then synthesizes that information, which serves as direction to the developers about what users really need so they can better tailor applications to those requirements.”

Similarly, at a U.S.-based health insurer’s call center, generative AI sifts through callers’ history, claims, and preferences to deliver a concise profile to agents as they field support inquiries. Agents can digest this information in real time, while still on the call, to provide better, more efficient service.

As organizations develop new applications for generative AI, King predicts further use of proprietary data sets to aid customer interactions, product and service delivery, and other business operations. “In essence, you would have business AIs that can support work, increase productivity, and access information a lot faster,” he says, “as well as reporting tools that deliver insights much more rapidly.”

More broadly, software integrations will increasingly put generative AI in the hands of all employees for day-to-day tasks. Productivity applications such as spreadsheets, slideshow software, and email clients will have AI bolted onto them. Tasks like drafting memos, creating presentations, and analyzing data will become easier and more efficient, without needing a separate interface to access the AI.

And impressive as these advancements might seem, they are merely a fraction of the enormous potential generative AI holds for businesses. But of course, there are potential downsides that must be considered as well.

Proceed With Caution

While generative AI can empower organizations to make big strides, it can also enable them to make equally big mistakes. For this reason, business leaders need to proceed cautiously.

Privacy and security are among the greatest risks when using nonproprietary generative AI such as ChatGPT. Unaware of the potential consequences, for example, employees could feed sensitive business data into the tool — making it susceptible to leaks. In fact, more than 7% of employees have pasted company data into ChatGPT, and 4% have prompted it with confidential company data, according to data security firm Cyberhaven.

Some organizations have addressed such risks by completely shutting off employee access to ChatGPT. For those that haven’t, King stresses the importance of establishing solid governance around the use of generative AI — including ownership of decision-making and risks — and educating employees about how it should and should not be used. “Freely accessible AI still carries a ton of risk,” he says. “And that’s why clear policy and communication are important.”

But even if the technology is used prudently and in accordance with company policy, it can still present risks such as factual errors. Hallucinations, a known phenomenon with generative AI, are false or inaccurate outputs produced by AI algorithms and deep learning neural networks (see “ChatGPT, Are You Hallucinating?” on this page). These hallucinations can be very convincing. “The fact that generative AI is so good at producing language that sounds plausible just makes it harder to detect,” King says. “So verifying and validating factual statements, interpretations of events, or any other declarative output from the tool is essential.”

Beyond internally generated risks, ChatGPT opens new opportunities for bad actors outside the organization to carry out social engineering scams and other cybercrimes. For example, someone could use ChatGPT to produce an email that convincingly mimics a CEO’s writing style to perpetrate a phishing scheme. And with speech synthesis AI, someone could even mimic the exact sound, language, and cadence of the CEO’s voice to commit the fraud by phone.

Many more risks are likely to emerge as the technology evolves. “If we’re not actively managing those risks, we’re in dangerous territory,” King says.

ChatGPT, Are You Hallucinating?

In a recent statement, OpenAI cautioned that ChatGPT has its flaws. In fact, the company specifically warns that the chatbot “still is not fully reliable (it ‘hallucinates’ facts and makes reasoning errors).”

I put ChatGPT’s accuracy to the test by asking it a simple question about myself: “Who is Dave Salierno?” After acknowledging it doesn’t have access to information about every individual in the world, the bot gave me this response:

“Based on my research, Dave Salierno is a recognized internal audit expert and speaker, who has worked in internal auditing for over 30 years, including as a CAE. He has written numerous articles for The Institute of Internal Auditors (IIA), including the Internal Auditor magazine, and has served on the board of The IIA's Philadelphia chapter. Salierno is also a Certified Internal Auditor and a Certified Fraud Examiner.”

It looks like someone needs to have a chat with the chatbot about its research skills. I’ve never been a CAE, I’m not called upon as an expert speaker, I haven’t earned the named credentials, and I haven’t been to Philadelphia (much less served on its IIA chapter board). ChatGPT’s response suggests users should always fact-check their AI-generated output.

Embracing the Future

For organizations weighing the benefits of generative AI against the risks, King’s advice is to avoid taking a wait-and-see approach. At a bare minimum, he recommends having conversations among leadership, forming committees, and discussing potential investments. “This is transformational technology, and failing to take AI seriously poses an existential threat to a lot of organizations,” he says.

To get the best results from AI, King suggests updating systems that may be outdated or have inferior data governance. He points to similar challenges when companies began incorporating big data and automation — those that didn’t manage their data correctly struggled to get adequate return on their investments. Likewise, failing to get systems data in order could impede the organization’s ability to optimize its use of generative AI.

For now, King emphasizes that businesses should never let generative AI operate on its own without human intervention. People, he says, should always work in coordination with AI to use it safely and effectively. “The right approach is a machine-human partnership,” he explains. “Organizations that adopt this approach will be better equipped to navigate the challenges and opportunities of an AI-powered future.”


David Salierno

Managing partner at Nexus Content in Winter Park, Fla.