
Generative Artificial Intelligence: Updates and Considerations

By Alma Angotti and Tim Mueller

Recent months have seen explosive development of, interest in, and publicity around generative artificial intelligence (AI) tools such as OpenAI's ChatGPT and Google's Bard. While these tools have wide-ranging applicability and show promising early results for increasing efficiency and performance across a variety of industries, companies and individuals alike should be aware that such publicly available tools capture and store any information submitted to them. In fact, interest in and use of these tools has grown so dramatically that the Congressional Research Service recently published a primer on generative AI1, which includes not only an overview of the tools and the current landscape but also specific discussion of data privacy concerns. Similarly, a joint research paper published earlier this year2 by the Stanford Internet Observatory, Georgetown University’s Center for Security and Emerging Technology, and OpenAI explored potential uses of generative AI by malicious actors.

While OpenAI, Microsoft, Google, and others are gradually making fully private deployments of these tools available to customers, use of the public tools (even those with subscription models) should, for now, be approached with caution. Several instances of sensitive data leakage have already been reported in recent months, most notably at Samsung3, and OpenAI itself confirmed a data leak of its own4. Though these concerns are being or have been addressed, users are strongly advised not to submit any personal, proprietary, confidential, or client-sensitive information to such public tools. Users should also be aware that these tools have very few baseline controls to prevent them from returning incorrect or fabricated information, such as citing fictitious cases in a court filing5.
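
To make this concrete, the minimal sketch below shows one kind of baseline guardrail an organization might place in front of a public generative AI tool: screening outbound prompts for a few obvious categories of sensitive data before anything leaves the company. The pattern set, the screen_prompt and submit_if_clean helpers, and the commented-out downstream call are all hypothetical illustrations, not a substitute for a full data loss prevention program.

```python
# Hypothetical pre-submission guardrail: block prompts that appear to
# contain sensitive data before they are sent to a public AI service.
# The patterns below are illustrative only and deliberately simplistic.
import re

SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit_if_clean(prompt: str) -> None:
    """Refuse to send a prompt off-site if the screen finds anything."""
    findings = screen_prompt(prompt)
    if findings:
        raise ValueError(f"Prompt blocked; possible sensitive data: {findings}")
    # send_to_public_ai_tool(prompt)  # hypothetical downstream API call

try:
    submit_if_clean("Draft a memo for jane.doe@example.com about the merger")
except ValueError as exc:
    print(exc)  # Prompt blocked; possible sensitive data: ['email_address']
```

In practice, screening of this kind would sit alongside contractual protections and enterprise-grade data loss prevention tooling rather than replace them.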
 
Additionally, generative AI and other advanced AI tools are already a significant area of focus for bad actors, and new and evolving fraud schemes are appearing in the market. There is already evidence that “traditional” fraud approaches, such as email and text phishing and spoofing, can be enhanced with generative AI6 to appear more legitimate. Similarly, evolving fraud and scam techniques are emerging, such as AI-enhanced voice cloning to imitate friends or family, and deepfake videos to facilitate identity theft (a recent FTC business guidance article7 discusses similar issues). Though such techniques are not novel, as a cybercrime case from 20198 demonstrates, these types of fraud will likely become more widespread as generative AI tools are democratized globally. Companies across all industries should stay abreast of these developments, identify key areas of exposure to such threats, and consider investing in additional or improved fraud prevention measures.
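
Because generative AI can make a phishing message read as fluently as legitimate correspondence, the quality of the text itself becomes a weaker fraud signal, and provenance checks matter more. The sketch below, a hypothetical illustration rather than a complete control, uses Python’s standard email library to flag inbound messages whose recorded sender-authentication results (SPF, DKIM, DMARC) did not pass; the file name and the pass/fail policy are assumptions for the example.

```python
# Hypothetical provenance check: inspect the Authentication-Results header
# that a receiving mail server records, and flag messages where SPF, DKIM,
# or DMARC did not record a pass. The policy logic here is illustrative only.
from email import policy
from email.parser import BytesParser

def authentication_failures(raw_message: bytes) -> list[str]:
    """Return the authentication mechanisms that did not record a pass."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    results = " ".join(msg.get_all("Authentication-Results") or []).lower()
    return [mech for mech in ("spf", "dkim", "dmarc")
            if f"{mech}=pass" not in results]

with open("suspect_message.eml", "rb") as fh:  # hypothetical input file
    failures = authentication_failures(fh.read())

if failures:
    print(f"Treat with suspicion; failed or missing checks: {failures}")
```

Such header checks complement, rather than replace, user training and out-of-band verification, for example, calling back on a known number before acting on a voice request.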


How Guidehouse Can Help

For companies looking to identify current or potential risks arising from generative AI or other AI/machine learning tools, and for those seeking independent, expert guidance on implementing, tuning, and/or validating AI/machine learning solutions, Guidehouse provides premier advisory, risk assessment, and solution development services to commercial and government clients in the financial services, healthcare, energy, and defense sectors.

1 Busch, Kristen. 2023. Generative Artificial Intelligence and Data Privacy: A Primer. Congressional Research Service, May 23, 2023. https://crsreports.congress.gov/product/pdf/R/R47569.
2 Goldstein, Josh A., Renee DiResta, Girish Sastry, Micah Musser, Matthew Gentzel, and Katerina Sedova. 2023. “Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations.” Stanford Internet Observatory, January 2023. https://cyber.fsi.stanford.edu/io/publication/generative-language-models-and-automated-influence-operations-emerging-threats-and.
3 “Samsung Bans Generative AI Use by Staff after ChatGPT Data Leak.” Bloomberg, May 2, 2023. https://www.bloomberg.com/news/articles/2023-05-02/samsung-bans-chatgpt-and-other-generative-ai-use-by-staff-after-leak.
4 OpenAI. 2023. “March 20 ChatGPT Outage: Here’s What Happened.” OpenAI, March 24, 2023. https://openai.com/blog/march-20-chatgpt-outage.
5 “A Lawyer Used ChatGPT to Prepare a Court Filing. It Went Horribly Awry.” CBS News, May 29, 2023. https://www.cbsnews.com/news/lawyer-chatgpt-court-filing-avianca/.
6 Roy, Sayak Saha, Krishna Vamsi Naragam, and Shirin Nilizadeh. 2023. “Generating Phishing Attacks Using ChatGPT.” arXiv preprint. Accessed July 10, 2023. https://arxiv.org/pdf/2305.05133.pdf.
7 “Chatbots, Deepfakes, and Voice Clones: AI Deception for Sale.” Federal Trade Commission, March 20, 2023. https://www.ftc.gov/business-guidance/blog/2023/03/chatbots-deepfakes-voice-clones-ai-deception-sale.
8 Stupp, Catherine. 2019. “Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case.” Wall Street Journal, August 30, 2019. https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402.

Alma Angotti, Partner

Tim Mueller, Partner and Segment Leader

