
A Guide to GenAI

Charles King, CIA, CPA, CFE, CIPP | Dec 16, 2024

When ChatGPT burst into the public consciousness in November 2022, it released a torrent of interest, investment, and optimism. Business leaders, already under pressure from the post-pandemic economic uncertainty, saw a solution that could address rising costs from inflation, fierce competition for talent, and supply chain woes. Computer programmers saw impressive gains in efficiency. Chatbots improved customer experiences. And countless projects got the green light to use generative artificial intelligence (GenAI) to solve pressing and expensive business problems.

But beneath this wave of excitement, an undercurrent of skepticism, concern, and even criticism led many to adopt a more cautious approach. This was fueled by high-profile stories of GenAI failures and embarrassments. Lawyers cited nonexistent cases in briefs. Publishers retracted false stories drafted by AI. Global corporations inadvertently gave intellectual property to AI companies. Even the most strident evangelists came to understand the GenAI revolution was going to be slower and bumpier than they might have hoped.

Today, nearly two years on, it is clear GenAI is here to stay. ChatGPT is one of the most visited websites in the world. Tech giants are spending tens of billions to build new models. Every major software vendor is embedding AI into their product offerings. In the KPMG 2024 CEO Outlook, two-thirds of CEOs say GenAI is a top investment priority. And a rapidly growing number of employees have enterprise-class GenAI tools available for use at work.

As organizations roll out these tools to their employees, decision-makers must balance the promise of productivity with the reduction of risk while maintaining their values and culture. Internal auditors can help by first educating themselves on general AI risk frameworks, such as the NIST Artificial Intelligence Risk Management Framework, as well as relevant industry-specific guidance. Auditors’ knowledge of these frameworks, combined with a deep understanding of their organization’s risk management practices and ways of working, positions them well to advise the teams responsible for GenAI governance on risks and possible approaches to controlling them.

Security First

Information security is a significant concern many executives have about GenAI. This concern stems from the possibility that information included in the prompts will be exposed to the GenAI vendor or other unauthorized parties. Indeed, some publicly available versions of these tools do use user data to improve models. Moreover, this information may be stored by the vendor and subject to breaches that enable unauthorized third parties to access the user’s data. 

It is important to distinguish between publicly available versions of this software and enterprise-class versions. Enterprise GenAI solutions often are offered by cloud hosting providers and include the types of data protections that businesses expect. These providers allow organizations to access GenAI models without data leaving the organization’s secured environment. These enterprise versions also facilitate the secure development of applications, such as chatbots or knowledge assistants. From a security standpoint, enterprise solutions are strongly preferred over the consumer versions.

However, because of costs or competing priorities, some companies allow their team members to use publicly available GenAI services. Regardless of the type of service allowed, organizations should have a well-reasoned acceptable use policy communicated to and acknowledged by employees. This policy should specify, among other things, which tools are permitted, the types of activities that are appropriate, and the types of data that may be used with these tools.

This policy should be tailored to the organization’s specific context. Generally, organizations using publicly available GenAI will have more restrictions on use, especially when it comes to sensitive data. Organizations with stronger security controls in place may have far more permissive acceptable use policies that could allow using personally identifiable information or other confidential information.

Whatever the GenAI policy, it needs to be realistic. For instance, a general, permanent prohibition on using GenAI at work is unlikely to be a viable policy. These tools can provide meaningful productivity gains, and this may prove an irresistible temptation to overwhelmed employees who seek to lighten the load. Their ubiquity and low cost may simply lead people to circumvent organizational rules and use their personal licenses to do company work — likely exposing the company to unacceptable risks.

The roll-out of an acceptable use policy should be accompanied by a training campaign that shares how employees should use the technology and how using it inappropriately will expose the organization to risk. Making completion of the training a prerequisite for access to enterprise GenAI capabilities can help promote good habits and avoid risks.

Types of GenAI

It is worth considering the current GenAI ecosystem and how companies engage with these models.

Foundation GenAI Models have broad capabilities that can be used alone or serve as the foundation for applications. These models have been developed by many technology companies. For example, OpenAI created the GPT models behind ChatGPT, Google offers Gemini, and Meta released Llama.

Publicly Available GenAI offerings are often geared toward individual users or small businesses and may not have the security and governance capabilities expected by larger organizations. In fact, these websites are considered unacceptably risky by many organizations and are blocked by technical means or by policy.

GenAI-as-a-service typically is available through cloud-hosting providers and allows enterprise IT teams to govern the GenAI capabilities in accordance with the organization’s preferences. In many cases, users experience these tools as white-labeled interfaces that may be renamed to align with proprietary branding (e.g., “AcmeCorpGPT”).

Embedded GenAI can be found in software of all types. Vendors selling productivity suites, enterprise resource planning, customer relationship management, and other software packages are including GenAI functionality in their offerings. In many cases, these companies have not developed wholly new models, but rather are licensing foundation models that are integrated into their products.

GenAI Open-source Software can help address concerns about security, cost, and sustainability. Some big names, like Meta and xAI, as well as some that may be less familiar outside the tech industry, such as Mistral, Falcon, and Stable Diffusion, are part of the open-source GenAI ecosystem. While many large companies are reluctant to deploy open-source software of any type, organizations that are serious about building GenAI capabilities would do well to consider it.

Where to Draw the Line

Organizational policies, however, cannot address every use. Teams can probably find broad consensus that, for example, using GenAI to proofread a report written by an employee is appropriate. After all, employees have been getting grammar and syntax suggestions from productivity apps like Microsoft Word for years. But what if AI wrote the entire report? Is there any expectation that employees disclose that? Under what circumstances?

Organizations will increasingly be confronted by these questions and others as GenAI comes closer to employees’ livelihoods and identities. For example, there is likely consensus that using software tools of any kind to make final hiring or termination decisions is not an ethical practice. But would there be this same consensus about using GenAI to write the job description, develop the interview questions, or summarize candidate interviews for review by the hiring manager? 

These issues pose thorny questions about both real and perceived bias and fairness. For this reason, leaders and their teams need to communicate to establish norms that align with policies, culture, and regulations, especially in human resources and other areas that directly affect people’s lives and livelihoods.

An Imperfect Solution

These concerns about GenAI have their basis in how the models are trained and how they operate. Large language models (LLMs) are a key component of GenAI solutions. LLMs are trained on massive quantities of data that come from a variety of sources. Where exactly this data comes from is considered proprietary by many GenAI software vendors, but it is safe to assume much of it is gathered from the internet. Thus, much of the data will contain biases, incorrect information, and inappropriate content. While GenAI developers generally work to inhibit overtly offensive, inaccurate, or disturbing content, these safeguards are not perfect.

The risk of bias is not limited to the training data of the models. One common approach to using GenAI at work is a technique called retrieval augmented generation (RAG). RAG approaches connect the model to a knowledge repository so that users can interact with the information it contains. The knowledge repository is often a collection of documents, such as policies, processes, or regulations.
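
For illustration, the short Python sketch below shows the basic RAG pattern: retrieve the most relevant documents from the repository, then insert them into the prompt before it is sent to the model. The repository contents, the keyword-overlap scoring, and the function names are simplified stand-ins; a production system would typically use a vector database and the organization's approved enterprise GenAI endpoint.

    # Simplified retrieval augmented generation (RAG): retrieve relevant
    # excerpts from a repository and add them to the prompt. The repository
    # and scoring below are toy stand-ins for a real vector database.
    REPOSITORY = [
        "Travel policy v3.2: Economy class is required for flights under six hours.",
        "Expense policy v1.8: Meal reimbursement is capped at 75 dollars per day.",
        "Security policy v2.0: Confidential data may not be entered into public AI tools.",
    ]

    def retrieve(question: str, top_k: int = 2) -> list[str]:
        """Rank repository documents by naive keyword overlap with the question."""
        q_words = set(question.lower().split())
        ranked = sorted(
            REPOSITORY,
            key=lambda doc: len(q_words & set(doc.lower().split())),
            reverse=True,
        )
        return ranked[:top_k]

    def build_prompt(question: str) -> str:
        """Insert the retrieved excerpts into the prompt sent to the model."""
        context = "\n".join(retrieve(question))
        return (
            "Answer the question using only the policy excerpts below.\n\n"
            f"Excerpts:\n{context}\n\nQuestion: {question}"
        )

    # The assembled prompt would then be sent to the organization's GenAI model.
    print(build_prompt("What is the cap on meal reimbursement per day?"))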

RAG solutions are a useful way to create common GenAI applications. For instance, an organization may create a knowledge assistant that can help employees with corporate travel and expense policies. However, these solutions are susceptible to biases, as well. If the documents in the knowledge repository have biased or inaccurate information, the GenAI solution will produce biased or inaccurate results. 

These biases can be introduced when:

  • Documents are inadvertently included (such as an out-of-date version of the policy) or excluded. 
  • Bad actors include documents (or messages inside documents) that contain inaccurate information or undesirable language, an attack so common that it has a name — data poisoning. 

To manage these risks effectively, organizations should take a thoughtful and well-governed approach to how knowledge repositories are constructed, as well as how they are protected from unauthorized access.

Of course, biased responses are not the only problem. GenAI sometimes entirely fabricates statements, known as “hallucinations.” LLMs do not contain a database of facts. Rather, they identify patterns in language using complex mathematical inference techniques. They use the patterns learned from enormous quantities of training data to statistically predict what will come next in a sentence based on the available context, typically the preceding prompts and responses. Statements that appear repeatedly in the training data are therefore predicted with high confidence.

When the user asks about ideas or concepts where there is not much repetition in the training data, the LLM does its best to find something that sounds plausible. Indeed, the fact that these responses often appear plausible to nonexperts is part of what makes hallucinations so challenging.
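
This behavior can be seen directly by inspecting a model’s next-word predictions. The sketch below uses the small, openly available GPT-2 model via the Hugging Face transformers library purely as an illustration; enterprise models are far larger but predict text the same way.

    # Show the most likely next tokens for a prompt, illustrating that an LLM
    # predicts statistically likely continuations rather than looking up facts.
    # Requires the transformers and torch packages; downloads the small GPT-2 model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of France is"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # scores for the next token
    probabilities = torch.softmax(logits, dim=-1)

    top = torch.topk(probabilities, k=5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")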

The existence of hallucinations should be covered in organizational training. The more employees understand the model’s behavior, the better positioned they are to identify problems. Employees on the lookout for factual errors can fact check GenAI responses before using the output. 

Basic prompt engineering can help reduce the occurrence of hallucinations. As the old saying goes, “ask better questions, get better answers.” There also are some technical ways to reduce hallucinations. For example, many GenAI vendors provide settings users can adjust to change the behavior of the model. 

“Temperature,” for instance, allows users to increase or decrease the randomness of the model’s word choices, often described as its creativity. Turning the temperature up makes responses more varied and creative, which may be desirable when generating ideas for marketing copy. Lower temperatures produce more predictable, focused responses, which may be desirable when using GenAI to summarize facts from a complex document.
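
As a simple illustration, the sketch below adjusts temperature through one vendor’s Python SDK. The OpenAI library and the model name are used here only as examples; parameter names, allowed ranges, and defaults vary by vendor and product.

    # Adjusting temperature: lower values favor the most likely wording,
    # higher values allow more varied, creative output.
    # Assumes the openai package is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()

    # Low temperature for a factual summarization task.
    summary = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use the organization's approved model
        temperature=0.2,
        messages=[{"role": "user", "content": "Summarize this policy in three bullet points: ..."}],
    )

    # Higher temperature for a brainstorming task.
    ideas = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=1.1,
        messages=[{"role": "user", "content": "Suggest five taglines for our new expense app."}],
    )

    print(summary.choices[0].message.content)
    print(ideas.choices[0].message.content)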

Out of Bounds

Unfortunately, when considering how to manage GenAI risks, organizations cannot ignore the risk of insider threats and employee abuse. Beyond the gray area that exists because of the lack of established social norms, there are uses of GenAI that are clearly out of bounds. 

So-called prompt injection or jailbreak attacks occur when users attempt to trick the GenAI into saying or doing things it should not. The goal of such attacks can range from mischief, such as trying to get the GenAI to say something offensive, to fraud, such as trying to access information the user should not be permitted to access. GenAI also may assist employees who intend to commit fraud through the creation of deepfakes or other forgeries.

Enterprise-class GenAI solutions can implement guardrails and other configurations that prevent, or at least reduce, the risk of misuse. Guardrails come in many forms; administrators can configure them or supply them as plain-language directives that are inserted into the processing of each prompt. The guardrails can, for example, prevent the LLM from discussing topics such as violence or crime.

They also can direct the LLM to not perform certain types of tasks. For example, a magazine editor concerned about journalists using GenAI to create articles could create guardrails that would prohibit the LLM from processing prompts that request the creation of a magazine article.
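
To make the idea concrete, the sketch below shows a plain-language directive inserted ahead of every user prompt. This is only one simplified form of guardrail; enterprise platforms typically provide dedicated guardrail configuration on top of this, and the vendor SDK and model name are again illustrative.

    # A plain-language guardrail inserted into the processing of every prompt.
    from openai import OpenAI

    client = OpenAI()

    GUARDRAIL = (
        "You are an internal workplace assistant. Do not discuss violence or "
        "criminal activity, and do not draft complete magazine articles; "
        "politely decline such requests and explain the relevant policy."
    )

    def ask(user_prompt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": GUARDRAIL},
                {"role": "user", "content": user_prompt},
            ],
        )
        return response.choices[0].message.content

    # A request the editor's guardrail is intended to decline.
    print(ask("Write a 2,000-word feature article on supply chain fraud."))

Because directive-based guardrails can themselves be targeted by the prompt injection attacks described above, they work best alongside platform-level controls and the monitoring discussed next.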

While guardrails can be an important part of managing GenAI, administrators also must have a system to periodically review how employees are using and misusing organizational GenAI resources. Such a system can help identify employee misuse and types of misuse so the organization can implement protections. Reviewing log files also can help identify desirable use cases. 
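
What such a review might look like depends heavily on the platform, but as a minimal sketch, the code below scans an exported prompt log for a few flagged patterns. The file name, column names, and patterns are assumptions for illustration; in practice this is often handled by the platform’s own monitoring tools or a security information and event management system.

    # Illustrative periodic review of an exported GenAI prompt log (a CSV with
    # timestamp, user, and prompt columns, an assumed export format).
    import csv
    import re

    FLAGGED_PATTERNS = {
        "possible jailbreak attempt": re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
        "possible U.S. Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    with open("genai_prompt_log.csv", newline="", encoding="utf-8") as log_file:
        for row in csv.DictReader(log_file):
            for label, pattern in FLAGGED_PATTERNS.items():
                if pattern.search(row["prompt"]):
                    print(f"{row['timestamp']}  {row['user']}  {label}")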

Here to Stay

GenAI is a permanent part of the business landscape. It will fuel tremendous efficiency for the organizations that adopt it. Although there are numerous risks inherent to GenAI’s use, these risks are manageable and should not deter organizations from using it. Early adopters of GenAI have followed identifiable patterns that help get the most out of the technology while effectively managing the biggest risks. Ultimately, organizations that take a strategic and considered approach to GenAI will be well-positioned to enjoy the benefits of this important technology for years to come.


Charles King, CIA, CPA, CFE, CIPP, is the U.S. AI in internal controls leader at KPMG LLP in Orlando, Fla.