
    Reining in the Cyber Risks of Workplace AI Adoption

    AI capabilities have the potential to transform the workplace, but their unmitigated use by employees can open up organizations to a world of risk

    by Stephanie Overby

    Key Points

    • A majority of employees have used generative AI tools at work, but only a quarter of organizations have an AI policy.
    • Unsanctioned or unsupervised use of new AI platforms can lead to unintentional data exposure and breaches.
    • There are steps companies can take to put guardrails around AI usage.

    Interest in artificial intelligence (AI) has skyrocketed over the last year as generative AI tools like ChatGPT leapt onto the scene with potential applications for a growing array of business use cases. But there’s more than just idle fascination with these capabilities: Workplace adoption of generative AI is growing rapidly. More than half (56%) of U.S. employees say they use generative AI tools on the job at least occasionally and nearly one-third (31%) use them on a regular basis, according to a Conference Board survey1.

    AI capabilities in general — and generative AI tools specifically — have the potential to transform the workplace. Productivity improvements resulting from generative AI could add the equivalent of $6.1 trillion to $7.9 trillion annually to the global economy, according to a report from McKinsey2. At the same time, though, McKinsey and many others have documented the big potential risks that generative AI brings into enterprises along with that anticipated productivity boost.

    And that duality makes it a big problem — particularly for those concerned with cybersecurity — that so much of today’s workplace AI adoption is undocumented, ungoverned, or even unknown. Only around one-quarter (26%) of respondents to the Conference Board survey said their organizations have an AI policy. Shadow AI — AI tools or systems being used or developed without organizational approval or oversight — puts organizations at risk.

    How Employee AI Usage Expands the Attack Surface

    There’s no denying the opportunities that AI capabilities will create in terms of productivity, innovation, and growth. The most common uses for generative AI today, according to the Conference Board survey, are drafting written content (68%), brainstorming ideas (60%), and conducting background research (50%). The McKinsey report noted that the majority of generative AI’s value will come from use cases in the areas of customer operations, marketing and sales, software engineering, and research and development.

    But what happens to the information shared in queries submitted to a public generative AI tool, for example? One major electronics manufacturer earlier this year discovered three separate instances of employees inadvertently leaking a variety of sensitive company information — a confidential business process, internal meeting notes, and source code — through the use of generative AI tools on the job3. Data shared with the major public generative AI tools is used for the ongoing training of these large language model (LLM) platforms so that they perform better over time. But do you trust your trade secrets or pricing models to a third party with no contractual obligation to protect such important data?

    That’s just one of the data protection issues that emerges when employees experiment with AI. Unbridled use of AI tools can introduce a range of potential cyber risks for companies, including:

    • Data exposure: AI systems depend on enormous volumes of data, some of it highly sensitive or personal. Without proper protection in place — data encryption, access controls, and secure data storage — a company risks exposing that data to unauthorized parties. As Neil Thacker, CISO for EMEA and Latin America at Mimecast partner Netskope, explained to ComputerWeekly, the increased use of generative AI tools in the workplace makes businesses vulnerable to serious data leaks4. (A minimal prompt-redaction sketch follows this list.)
    • Data breaches: Cybercriminals follow the money. And money comes from high-value data. As more organizations use generative AI tools, bad actors will seek out ways to intercept any high-value data being shared via these interfaces. “Analogous to account takeover (ATO), where a hacker gains access to an online account with malicious intent, hackers seek to gain access to trained AI models to manipulate the system and access unauthorized transactions or PII [personally identifiable information],” Jackie Shoback, co-founder of a venture capital firm that invests in digital identity startups, recently wrote in Forbes. “As the complexity of AI solutions increases, more vulnerability points across models, training environments, production environments and data sets will undoubtedly proliferate5.”
    • Adversarial attacks: Another known threat is the intentional manipulation of AI models with bogus input. Experts have long warned of adversarial attacks designed to corrupt the models and outputs of AI systems and platforms. Even so, researchers recently demonstrated an exploit capable of causing all the major generative AI platforms to go off the rails6.
    • Insider threats: As AI becomes more integrated into business operations, there are more opportunities for those inside an organization to use their access to tinker with algorithms or models for malicious purposes or monetary gain. 
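
    To make the data exposure risk more concrete, the sketch below shows one way an organization might screen text before it leaves for a public generative AI service. It is a minimal illustration, not any vendor's product: the regular expressions, the redact function, and the send_to_llm placeholder are all assumptions, and a production deployment would lean on a mature data loss prevention (DLP) engine rather than a handful of patterns.

        import re

        # Illustrative patterns only; a real deployment would use a proper DLP
        # engine. These regexes, and the hypothetical gateway call below, are
        # assumptions rather than part of any specific product.
        SENSITIVE_PATTERNS = {
            "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
            "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
            "api_key": re.compile(r"\b(?:sk|key|token)[-_][\w-]{16,}\b"),
        }

        def redact(prompt: str) -> tuple[str, list[str]]:
            """Replace likely sensitive values before the prompt leaves the company."""
            findings = []
            for label, pattern in SENSITIVE_PATTERNS.items():
                if pattern.search(prompt):
                    findings.append(label)
                    prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
            return prompt, findings

        def submit_to_public_llm(prompt: str) -> None:
            cleaned, findings = redact(prompt)
            if findings:
                print("Redacted before sending:", ", ".join(findings))
            # send_to_llm(cleaned)  # hypothetical call to a sanctioned AI gateway

        submit_to_public_llm(
            "Summarize: customer jane.doe@example.com, SSN 123-45-6789, key sk_live_0123456789abcdef"
        )

    A filter this crude will miss plenty — it knows nothing about source code or meeting notes, for instance — which is why the policy, training, and vendor-vetting steps described in the next section still matter.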

    New Controls for New Threats

    The only surefire way to eliminate the cyber risks associated with AI is to ban its use outright. Some companies have put a moratorium on specific types of AI, such as generative AI, for now, but that is probably not a sustainable solution for most. Instead, business leaders can craft an approach to business AI adoption and usage that aligns with their own risk profiles and appetites by taking the following steps:

    • Create an AI steering committee: Establishing a group with representation from IT, cybersecurity, data and analytics, and key business stakeholders is an essential first step. This committee can review the organization’s AI practices and policies, including tool usage, data sharing, and data storage and deletion parameters, and align them with the organization’s enterprise risk profile and tolerances.
    • Conduct a baseline AI risk assessment: It’s important to find out what types of tools and systems have already been adopted in the organization and the specific vulnerabilities this usage could create. Company leaders can prioritize the mitigation or elimination of these risks based on a risk-reward calculation. The AI Risk Management Framework developed by the National Institute of Standards and Technology (NIST) can help company leaders think through the cybersecurity and privacy risks associated with the use of AI systems7.
    • Develop a company-wide policy for AI usage: While 46% of respondents who said they use generative AI in the Conference Board survey also said their management was fully aware of their AI use, 34% said their organization had no AI policy (and another 17% didn’t know if there was one). It’s important that companies create — and communicate — enterprise-level rules on the use of AI technologies by the workforce (including what tools are sanctioned, what data can be shared when using public tools, and what disclosures employees must make about any materials produced with the help of AI). For example, a company may prohibit the entry of PII, intellectual property, and systems code into generative AI tools and prevent employees from using any such tools until they have been trained on their use and the risks involved.
    • Set cyber standards for AI tools: When considering the adoption of new AI tools or platforms, companies should fully vet the vendor’s cybersecurity controls and practices. Because AI tools ingest so much data, they are high-value targets for cybercriminals seeking to exploit their vulnerabilities. So, it’s important to know, for example, whether an AI platform is secure by design and what vulnerabilities it might have.
    • Communicate and educate: Companies should be explicit in sharing their policies regarding AI use in the workplace with all employees (and contractors) and educate them about the associated cyber risks. Integrating the subject into regular cybersecurity awareness training modules ensures that everyone remains up to speed on emerging threats and best practices. 
    • Monitor access and user behavior: As ever, CISOs should enforce access controls and watch for anomalies in user behavior to minimize the risk of insider threats; a minimal monitoring sketch follows this list.
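
    On that last point, even a very simple behavioral baseline can surface suspicious AI usage. The sketch below is a hypothetical illustration, not a reference implementation: the usage_log events, the flag_anomalies function, and the threshold are all assumptions, and real telemetry would typically come from a CASB, SSE, or gateway log feeding a SIEM rather than an in-memory list.

        from collections import defaultdict
        from statistics import mean, pstdev

        # Hypothetical events: (user, characters submitted to an AI tool per day).
        usage_log = [
            ("alice", 1_200), ("alice", 900), ("alice", 1_100),
            ("bob", 800), ("bob", 1_100), ("bob", 95_000),  # sudden large paste
        ]

        def flag_anomalies(events, threshold=3.0):
            """Flag users whose latest volume far exceeds their own baseline."""
            per_user = defaultdict(list)
            for user, volume in events:
                per_user[user].append(volume)

            alerts = []
            for user, volumes in per_user.items():
                *history, latest = volumes
                if len(history) < 2:
                    continue  # not enough baseline to judge
                baseline = mean(history)
                spread = pstdev(history) or 1.0  # avoid dividing by zero
                if (latest - baseline) / spread > threshold:
                    alerts.append((user, latest))
            return alerts

        print(flag_anomalies(usage_log))  # [('bob', 95000)]

    The point is not the statistics; it is that AI usage telemetry should flow into the same monitoring and alerting pipelines the security team already runs, so a sudden spike in data pushed to an AI tool gets the same scrutiny as any other anomalous transfer.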

    The Bottom Line

    There is extraordinary potential for companies to harness generative AI to boost productivity, but that potential is not risk-free. Businesses must proactively assess and address the risks that come along with advanced AI. Armed with greater clarity, an organization and its workforce can more confidently adopt these new capabilities while maintaining the security of company and customer data. Read more about how Mimecast is using different types of artificial intelligence in its own cybersecurity solutions.



    1 “Majority of US Workers Are Already Using Generative AI Tools — But Company Policies Trail Behind,” The Conference Board

    2 “The economic potential of generative AI: The next productivity frontier,” McKinsey Digital

    3 “Samsung bans use of generative AI tools like ChatGPT after April internal data leak,” TechCrunch

    4 “ChatGPT is creating a legal and compliance headache for business,” ComputerWeekly

    5 “Managing Privacy And Cybersecurity Risks In An AI Business World,” Forbes

    6 “A New Attack Impacts Major AI Chatbots—and No One Knows How to Stop It,” Wired

    7 “AI Risk Management Framework,” National Institute of Standards and Technology
