
Voice of the CEO: Lessons From NYT’s AI Lawsuit

Anthony Pugliese, CIA, CPA, CGMA, CITP | Jan 16, 2024

In a matter of seconds, a chatbot can answer questions, draft a letter, develop a business plan, compose a sonnet, or produce a stunning work of visual art. But to create those works, the chatbot must first learn by analyzing and consuming existing content. Thus, if a chatbot learns how to produce content by first using someone else's protected work, what rights and compensation, if any, are owed to the original content creator?

That question will now be taken up by a federal district court in Manhattan, and the ruling could affect organizations of all types and sizes. Internal auditors should pay close attention to the outcome, which is likely to impact their organization's content, proprietary data, thought leadership, and more.

The New York Times recently became the first major American media organization to test copyright protections in the age of artificial intelligence (AI). The Times has sued OpenAI and Microsoft for copyright infringement over claims that its protected content has been fed to chatbots to help them learn and, ultimately, mimic and compete with the news organization. The Times joins Getty Images, Universal Music, and a growing list of content creators who have filed more than 100 AI-related lawsuits.

Although the Times did not seek a specific dollar amount in compensation, the suit says OpenAI and Microsoft should be held responsible for “billions of dollars in statutory and actual damages.” That sum is staggering, but it’s only the tip of the iceberg.

If this case ultimately goes to trial, the final ruling could dramatically shape the nascent artificial intelligence space, impact trillions of dollars in original content, and either reaffirm or transform existing intellectual property protections.

A ruling of that scope would affect any organization that produces thought leadership or white papers, any executive or subject matter expert who gives speeches or writes articles (even this blog!), and any educator who develops curriculum.

While the conclusion of the case is likely months away, there are several takeaways and tips that you can use to protect yourself or your organization.

Know and document your organization's risk profile. Anything entered into a generative AI platform can later be accessed or repurposed without your organization's knowledge or permission. Think about the content your organization produces for internal or external use: not just what it publishes, but everything it produces. What would happen if someone uploaded your emails, board minutes, memos, thought leadership, or research into ChatGPT and they suddenly became available to the general public? What if that content was modified and passed off as original? Identify and analyze your organization's specific risks, both in terms of how you use these tools yourself and how someone else might use them.

Make sure your organization has appropriate AI controls and policies. Once you have assessed those risks, develop a plan for how your organization will mitigate, accept, or avoid them. Protect your (and your organization's) intellectual property.

Last year, Samsung employees inadvertently leaked corporate data by entering it into ChatGPT, making it part of the platform's knowledge base. That means other users of the platform may receive responses containing Samsung's proprietary data. Employees may know that information cannot be shared with third parties, but many don't make the connection that entering data into a generative AI platform makes that information accessible to others later. According to the Cisco 2023 Consumer Privacy Survey, only half of regular generative AI users say they refrain from entering personal or confidential information into these AI platforms.

On the workforce side, your organization may want to offer annual “digital literacy” training to employees. On the back end, evaluate the organization’s internal systems and consider IT monitoring controls that can identify and stop your company’s intellectual property from being uploaded to AI tools.  
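To make the idea concrete, here is a minimal Python sketch of the kind of check such a monitoring control might run before text leaves the network for an AI tool. The markers, patterns, and function names are all hypothetical; a real deployment would live inside a data loss prevention (DLP) platform rather than a standalone script.

```python
import re

# Hypothetical classification markers and patterns; a real control would
# pull these from the organization's data classification policy.
CONFIDENTIAL_MARKERS = ["CONFIDENTIAL", "INTERNAL USE ONLY", "BOARD MINUTES"]
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # U.S. SSN-shaped numbers
    re.compile(r"\b[A-Z]{2,5}-\d{4,}\b"),  # internal document IDs (illustrative)
]

def upload_block_reasons(text: str) -> list[str]:
    """Return the reasons, if any, that this text should be stopped
    before it is pasted or uploaded into an external AI tool."""
    reasons = [m for m in CONFIDENTIAL_MARKERS if m.lower() in text.lower()]
    reasons += [p.pattern for p in SENSITIVE_PATTERNS if p.search(text)]
    return reasons

if __name__ == "__main__":
    draft = "BOARD MINUTES - Q3 strategy discussion with outside counsel"
    reasons = upload_block_reasons(draft)
    if reasons:
        print("Blocked from upload:", reasons)  # e.g., route to a review queue
```

A rule-based filter like this is deliberately blunt; the point is that the organization, not the individual employee, decides what may leave the building.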

Protect your property. Just because you don't put your information into a public AI tool doesn't mean someone else won't. Work with a legal expert to protect your intellectual property: copyrights, trademarks, trade secrets, or patents.

Beyond legal protections, consider what content you make publicly available and the format in which you make it available (e.g., don't post an unlocked Word document).
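As one illustration of "locking" a document, the sketch below uses the open-source pypdf library to publish a PDF with copy, extract, and edit permissions restricted. The file names and owner password are placeholders, and the exact permission constants can vary across pypdf versions.

```python
from pypdf import PdfReader, PdfWriter
from pypdf.constants import UserAccessPermissions

reader = PdfReader("whitepaper.pdf")  # placeholder input file
writer = PdfWriter()
for page in reader.pages:
    writer.add_page(page)

# An empty user password lets anyone open the file, while the owner
# password enforces the permission flags (here: printing only).
writer.encrypt(
    user_password="",
    owner_password="change-me",  # placeholder secret
    permissions_flag=UserAccessPermissions.PRINT,
)

with open("whitepaper_locked.pdf", "wb") as f:
    writer.write(f)
```

Keep in mind that PDF permission bits are advisory; determined users and many tools can bypass them, so treat this as friction rather than a guarantee.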

Over the last year, The IIA has taken steps to enhance our copyright protections, policies, resources, and training related to artificial intelligence. Some examples include:

  • Updating disclaimer language to explicitly prohibit entering protected content into large language models.
  • Building out our AI governance and AI task force.
  • Putting copyright language on every page of our publications, including the new Global Internal Audit Standards.
  • Exploring a pop-up notification when someone tries to copy and paste protected content, reminding them of our terms of use.
  • Creating internal and external educational materials on copyright and data literacy.
  • Enhancing our web crawling initiative to identify instances where someone may have used or changed our content without permission (a simplified sketch of this kind of check appears below).

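For illustration, a much-simplified version of that crawl-and-compare check might look like the following Python sketch, which measures how much of a protected excerpt appears verbatim on a fetched page. The URL, excerpt, and threshold are placeholders; a production crawler would strip HTML, respect robots.txt, and use more robust matching.

```python
import difflib
import urllib.request

# Placeholder excerpt of your own protected content.
PROTECTED_EXCERPT = "Replace this with a distinctive passage from your publication."

def fetch_page(url: str) -> str:
    """Download a page as raw text (crude: leaves HTML markup in place)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="ignore")

def verbatim_overlap(excerpt: str, page_text: str) -> float:
    """Fraction of the excerpt found verbatim on the page, based on the
    single longest shared substring."""
    matcher = difflib.SequenceMatcher(None, excerpt, page_text)
    match = matcher.find_longest_match(0, len(excerpt), 0, len(page_text))
    return match.size / max(len(excerpt), 1)

if __name__ == "__main__":
    page = fetch_page("https://example.com/suspect-page")  # placeholder URL
    score = verbatim_overlap(PROTECTED_EXCERPT, page)
    if score > 0.5:  # arbitrary threshold; tune against known matches
        print(f"Possible unauthorized reuse (overlap={score:.0%}); flag for review")
```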
Remember that no strategy or safeguard is foolproof. Anticipate that your copyrighted data and content will eventually leak, and have a plan to respond when that happens.

Keep track of new laws and regulations. The AI legislative and regulatory landscape is rapidly evolving. Be sure your organization keeps up with the latest developments to stay aware of new risks and requirements. In the last year alone, the European Union reached political agreement on the world's first comprehensive AI law; at least 25 U.S. states, Puerto Rico, and the District of Columbia introduced AI bills; and the U.S. Congress introduced more than 30 AI-related bills.

The IIA is being proactive and helping shape the discussion around artificial intelligence. Internal auditors bring a valuable perspective to rulemaking and in helping organizations safely and responsibly navigate AI. Over the last year, The IIA and our Advocacy Team have:

  • Submitted four letters to various regulators discussing how internal audit can be a resource as government and industry begin to develop policy regulating AI use.
  • Engaged congressional stakeholders to explore inserting an internal audit qualification for service on the proposed National AI Commission.
  • Hosted U.S. Rep. Don Beyer (Va.), vice chairman of the congressional AI Caucus, as a keynote speaker at The IIA’s Financial Services Exchange in September. The discussion focused exclusively on the congressional approach to regulating AI.

Embrace AI, but Do So Cautiously

To be clear, I'm optimistic that AI presents tremendous opportunities for our profession and for business in general. As an internal auditor, I'm also realistic about the need for practical controls. Don't miss out on AI, but be sure you have a plan for using it responsibly and protecting yourself from those who may not do the same.

For more tools and resources to help you understand, use, and audit artificial intelligence, visit The IIA’s new Artificial Intelligence Knowledge Center.

Anthony Pugliese, CIA, CPA, CGMA, CITP

Anthony Pugliese is president and CEO of The IIA.