Quantifying the Risks of Generative AI

Generative AI can revolutionize business processes and boost worker efficiency—but organizations must first mitigate potential risks.

By Rod Fontecilla

Since Google researchers introduced the transformer architecture in a 2017 paper, the field of generative artificial intelligence (AI) has exploded. From the launch of OpenAI’s GPT-1 model in 2018 to the much-hyped launch of GPT-4, the technology has quickly evolved from an innovation into a paradigm-shifting disruption. It seems like every week there’s an exciting new AI announcement from a large player like OpenAI, Stability AI, Google, or Meta, or from a smaller AI startup like Anthropic, Hugging Face, or ElevenLabs.

But much of generative AI’s development has occurred quietly, away from public view. Before ChatGPT put generative AI into the hands of consumers in late 2022, Big Tech approached the technology with caution. In 2016, when Microsoft’s newly released generative AI chatbot began spewing racist messages,1 companies decided to take a more careful approach. Big Tech’s generative AI technology was thus primarily developed in private while companies worked on ways to mitigate ethical and safety issues. But the launch of ChatGPT put pressure on technology companies to introduce their own generative AI products.

Some fear that significant problems could arise as powerful but imperfect AI systems are made widely available despite lingering safety risks. In May, a group of 350 researchers and technology leaders expressed a need for more caution around generative AI2 in an open letter. Not long after, one of generative AI’s pioneers, Dr. Geoffrey Hinton, quit his role at Google to focus on raising awareness of the dangers of the technology. In this paper, we’ll explore the core risks that could affect broad adoption of generative AI, focusing on safety, privacy, and regulatory concerns.



Safety Risks

Chief among most experts’ concerns around generative AI are its safety implications. AI safety encompasses a broad range of issues, including brand safety, bias, unethical or malicious use cases, and misinformation.

Brand Safety
Brands have a number of reputational concerns when it comes to generative AI. AI models can generate false information, which could damage a brand’s standing as a trusted source. A generative AI customer service agent speaking with customers in real time might pass along inaccurate information or even show bias. Some consumer-focused brands have already faced pushback for using AI. For example, customers have called out brands for featuring AI art in advertisements, citing ethical concerns related to pending lawsuits by artists3 against companies that used their copyrighted art as training data without permission or licensing.

Bias

AI systems reproduce historical biases present in their training data. If an AI model ingests texts using hateful language about a racial group, the system’s output might reproduce those biases. In healthcare, the use of anonymized historical patient data or research in model training might reproduce that data’s existing biases, leading to substandard patient care. For instance, an AI system designed to help with diagnostics might not flag common pregnancy risks in parts of the population that have been historically underdiagnosed due to racial bias.

Unethical or Malicious Use Cases
Some fear that generative AI could be used for scams, deepfake videos, cyberattacks, and mass political misinformation via social media bots. Unethical businesses could use the technology to spam their competitors’ online reviews; scammers could use it to impersonate a company or a government institution; and foreign governments could use it in cyber warfare. Organizations will have to be vigilant to protect themselves, their employees, and their customers from these types of attacks.

Misinformation

One of the big risks of generative AI is the spreading of misinformation. While providing incorrect information during a business inquiry might inconvenience a customer, other types of misinformation could be far more costly. For example, some companies are considering using generative AI to help with compliance and reporting documents. Mistakes in those documents could incur large fines in some sectors.

While these are serious risks, they are far from insurmountable. Organizations that decide to adopt generative AI should create risk management plans to address the technology’s safety concerns. What those plans look like will depend on the type of AI an organization uses, how it uses it, and the relevant risks. Mitigation plans could include fact checking by external systems, human oversight, extensive fine-tuning to control biases, or custom guardrails.
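As an illustration, a custom guardrail can be as simple as a post-processing check that holds risky model output for human review before it reaches a customer. The sketch below is a hypothetical example, not any vendor’s actual implementation; the blocked patterns and the allow/review routing scheme are assumptions chosen purely for illustration.

```python
import re

# Hypothetical guardrail: screen model output before it reaches a customer.
# The patterns below are illustrative; a real deployment would maintain a
# curated, sector-specific list and pair it with human oversight.
BLOCKED_PATTERNS = [
    r"\bguaranteed returns?\b",   # unverifiable financial claims
    r"\bmedical advice\b",        # out-of-scope topics for this bot
]

def guardrail(model_output: str) -> dict:
    """Return the output plus a routing decision.

    'allow'  -> safe to send automatically
    'review' -> hold for human review before sending
    """
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, model_output, flags=re.IGNORECASE):
            return {"text": model_output, "action": "review",
                    "reason": f"matched {pattern!r}"}
    return {"text": model_output, "action": "allow", "reason": None}
```

A pattern filter like this catches only known failure modes; in practice it would sit alongside the other mitigations mentioned above, such as external fact checking and fine-tuning.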



Privacy Risks

Earlier this year, OpenAI faced privacy challenges from multiple governments. This started in March, when Italy’s data protection watchdog temporarily banned4 ChatGPT in the country due to potential European Union General Data Protection Regulation (GDPR) violations and the technology’s lack of age-gating. Since then, Canada’s privacy commissioner has expressed concern over the company’s privacy standards5 and the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law held hearings on OpenAI and privacy.6

The US Federal Trade Commission (FTC) also opened an investigation7 in July into whether the company violated consumer protection laws through illegal data collection, privacy violations, and the dissemination of false information about individuals. The FTC’s investigation will probe OpenAI’s data leaks8 to better understand the technology’s impact on consumer privacy.

Organizations implementing generative AI should pay careful attention to privacy risks regarding training data, the use and storage of user-submitted prompt data, and their technology providers’ history of data breaches and privacy protection. Ensuring that AI systems don’t store or use user-provided data as training data is critical. Some sectors might also want to get permission, via waivers, from customers or patients when collecting personal data for use in AI systems. Organizations should adapt existing privacy policies to cover the additional privacy risks and emerging regulations pertaining to generative AI.
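One practical way to limit how much personal data reaches an external provider is to redact obvious identifiers from prompts before they leave the organization. The following is a minimal, hypothetical sketch; the regular expressions and placeholder labels are assumptions for illustration, and a production system would need far more robust detection (for example, named-entity recognition and sector-specific identifiers).

```python
import re

# Hypothetical pre-processing step: strip common personal identifiers
# from a prompt before sending it to an external generative AI service.
REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace each detected identifier with a placeholder tag like [EMAIL]."""
    for label, pattern in REDACTIONS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

Redaction of this kind complements, but does not replace, contractual guarantees from the provider about how prompt data is stored and used.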



Regulatory Risks

Governments and regulatory bodies around the world are rushing to regulate generative AI. The EU’s AI Act9 was one of the first comprehensive AI regulatory frameworks to be released. In the US, the AI Bill of Rights10, while focused on the federal government’s use of AI, hints at an approach the US government could enforce more broadly in the future. Multiple groups and agencies are working on AI regulations in the US, but no federal legislation is imminent. That creates more risk for organizations looking to deploy the technology, as they will be doing so without a full understanding of how emerging regulations may affect their implementations.


Regulatory Moves

In June, a bipartisan group of legislators introduced a bill11 to create a commission that would focus on regulating AI. The commission would be tasked with determining how to mitigate risks and harms while protecting innovation. It would review the ways that agencies across the government currently regulate or oversee AI to determine whether a new standalone AI regulatory agency is needed. The downside is that this process is expected to take at least two years, leaving a regulatory vacuum in the interim. Some state and city governments have created AI regulations, such as New York City’s regulations12 on the use of AI in hiring and promotion decisions. Meanwhile, sector-specific regulatory agencies may create regulations of their own.


Voluntary Measures

In addition to potential regulations from governments or agencies, in July a number of tech companies, including OpenAI and Google, agreed to implement voluntary measures13 for greater AI transparency and safety. These measures include watermarking AI-generated content to guard against deepfakes and pledging to protect users’ privacy. Additional voluntary guidelines may emerge in the future.



Copyright Questions

The US Copyright Office14 is also paying close attention to AI. In March, it launched an initiative to examine issues around generative AI and copyright. Currently, it does not allow content produced by AI models alone to be copyrighted. However, works that combine AI-generated materials with sufficient human authorship can potentially be copyrighted, though the copyright only covers the human-generated portions. A company that generates a logo via AI would therefore not be able to copyright the logo, but a company that creates an eBook with some AI-generated images can copyright the non-AI-generated portions.

Organizations looking to integrate AI into their operations face the challenge of doing so before the regulatory environment is fully defined. However, since governments often look to one another for inspiration on regulatory standards, US organizations might benefit from studying foreign regulations. The EU’s AI Act is a useful template for the kind of regulatory action likely to follow elsewhere. Organizations should make a risk management plan that anticipates some of the regulations likely to emerge in the coming years to guide their implementations.

While organizations should take the risks of generative AI seriously, there are also significant benefits to its adoption. An effective generative AI plan should make risk management a central focus. An integrated generative AI risk management plan will ensure that an organization holistically assesses the potential impacts of the technology before adopting it—and has a clear plan for mitigating the risks generative AI presents.


How Guidehouse Can Help


As an organization with significant proficiency in risk management and industry-specific compliance requirements, Guidehouse is uniquely placed to help create a comprehensive generative AI strategy. With deep experience in both corporations and governmental organizations, Guidehouse’s cross-functional experts help organizations manage risks while maximizing the benefits of new technologies like generative AI.



1 “Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day”, March 24, 2016.
2 “AI Threat Warning”, May 30, 2023.
3 “Artists Are Suing Artificial Intelligence Companies and the Lawsuit Could Upend Legal Precedents Around Art”, May 5, 2023.
4 “Italy reverses ban on ChatGPT after OpenAI agrees to watchdog’s demands”, May 3, 2023.
5 “Canada to launch probe into OpenAI over privacy concerns”, May 26, 2023.
6 “OpenAI CEO embraces government regulation in Senate hearing”, May 16, 2023.
7 “F.T.C. Opens Investigation Into ChatGPT Maker Over Technology’s Potential Harms”, July 13, 2023.
8 “March 20 ChatGPT outage: Here’s what happened”, March 24, 2023.
9 “The EU Artificial Intelligence Act”, June 14, 2023.
10 “Blueprint for an AI Bill of Rights”, n.d.
11 “AI Regulation Is Coming To The U.S., Albeit Slowly”, June 27, 2023.
12 “How New York is Regulating AI”, June 22, 2023.
13 “OpenAI, Google, others pledge to watermark AI content for safety, White House says”, July 21, 2023.
14 “U.S. Copyright Office Provides Guidance on Registrations involving AI-Generated Works”, March 22, 2023.

Let Us Help Guide You

Complexity demands a trusted guide with the unique expertise and cross-sector versatility to deliver unwavering success. We work with organizations across regulated commercial and public sectors to catalyze transformation and pioneer new directions for the future.
