Beware the Emergence of Shadow AI

Abhishek Gupta / Aug 16, 2023

Abhishek Gupta is a Fellow, Augmented Collective Intelligence at the BCG Henderson Institute, Senior Responsible AI Leader & Expert at BCG, and Founder & Principal Researcher at the Montreal AI Ethics Institute.

Image: Alexa Steinbrück / Better Images of AI / Explainable AI / CC-BY 4.0

The enthusiasm for generative AI systems has taken the world by storm. Organizations of all sorts, including businesses, governments, and nonprofits, are excited about their applications, while regulators and policymakers show varying levels of desire to regulate and govern them.

Old hands in cybersecurity and in governance, risk & compliance (GRC) functions see a much more practical challenge as organizations move to deploy ChatGPT, DALL-E 2, Midjourney, Stable Diffusion, and dozens of other products and services to accelerate their workflows and gain productivity. An upsurge of unreported and unsanctioned generative AI use has brought forth the next iteration of the classic "Shadow IT" problem: Shadow AI.

What is Shadow IT?

For those unfamiliar with the term, Shadow IT refers to the code snippets, libraries, solutions, products, services, and apps on managed devices that lurk outside the oversight of corporate, nonprofit, and government IT departments. Shadow IT can threaten an organization's cybersecurity, privacy, and data confidentiality. For example, it increases the likelihood of data breaches and of ransomware infiltrating the corporate network, incidents that often cost the organization more than $1 million each, according to the Verizon 2023 Data Breach Investigations Report.

Well-defined and enforced policies, including strict network monitoring, device usage rules, and other oversight mechanisms, (mostly) work well to keep Shadow IT at bay. But these are proving insufficient with Shadow AI.

What is Shadow AI?

Shadow AI refers to the AI systems, solutions, and services used or developed within an organization without explicit organizational approval or oversight. It can include anything from using unsanctioned software and apps to developing AI-based solutions in a skunkworks-like fashion. Wharton School professor Ethan Mollick has called such users hidden AI cyborgs.

How is Shadow AI different from Shadow IT?

A few things distinguish this modern incarnation from its ancestor: Shadow AI is potentially a lot more pernicious and pervasive.

The higher perniciousness has its roots in how easily AI use can circumvent GRC controls. For example, employees who input confidential information into ChatGPT without realizing what the terms of service allow can violate the commitments the organization has made to its clients about how their data and discussions will be used and protected. This can have further downstream impacts if the leaked data becomes part of the training set for the next version of the system, opening up the possibility of other users inadvertently receiving that information if the system is prompted the right way.

The higher pervasiveness comes from the democratizing aspect of generative AI. In particular, the new crop of tools requires very little technical or specialized knowledge outside of the dark arts of prompt engineering. This leads to thousands of flowers blooming across the organization, where hitherto tech-shy departments might become hotbeds of AI use. Given how few financial and technical resources are necessary to use generative AI, these uses may evade detection by management and traditional controls.

Take, for example, staff in a marketing department using Midjourney to generate images for a new ad campaign. Previously, they might have had to seek a budget (requiring manager signoff) and someone to set up the tools (requiring IT staff support), which would alert GRC functions and trigger appropriate workflows. Now, all they have to do is sign up on a website, pay a few dollars, and they're off to the races. While the democratization of these products is empowering, it poses a nightmare for those who seek to protect the assets of an organization.

Why is there an urgency to address this?

Generative AI systems are here to stay; in many cases, banning them will only make the shadows darker. The promise of productivity gains and better utilization of staff capabilities is too rich an opportunity to pass up, especially for business leaders and senior management who are focused on eking out the most gains possible. At the same time, the tone in the wider ecosystem has shifted towards incorporating responsibility as a core operating measure in how a business is run. In technology circles, “Responsible AI” is becoming the talk of the town, while policymakers and legislators are racing to establish the definitive AI governance approach. Similar enthusiasm is popping up within organizations around ensuring that companies live their values through a Responsible AI approach. Perhaps, then, it is not unfair to argue that Responsible AI should sit on the CEO agenda as a core priority alongside other business and functional considerations.

Taking advantage of this momentum to implement concrete measures within an organization should become the top priority for GRC executives. While legislation and policy will take time to develop, generative AI adoption within organizations is already happening, and so are its impacts. Not protecting against these risks will leave a vast open attack surface ripe for the taking by malicious actors.

What are some of the concerns?

Shadow AI raises concerns for operational teams, and should worry senior management and leadership as well.

Cybersecurity risks such as the ones highlighted above are among the prime concerns raised by Shadow AI. There is a lack of visibility and control, including a limited ability to control spending on these systems; staff purchasing individual subscriptions often violate financial governance and policies. And the terms of use under those subscriptions can vary dramatically from enterprise agreements, which often offer data isolation, privacy, security, and compliance with internal policies, and which can be incorporated into a cloud provider's instance of the model.

This risk is also linked to infrastructure integration challenges. When third-party generative AI systems are queried as a service, continuous monitoring and checks may not be applied uniformly by policy. Products and services might go out the door with unknown vulnerabilities and failure modes that can manifest unexpectedly as customers interact with the system, a nightmare for risk and compliance officers who prefer not to be surprised.

Without oversight, Shadow AI could perpetuate biases or be deployed in an unsafe manner. This can lead to ethical issues or legal liability, creating more strife between departments and staff who want to use this technology and those who seek to protect the organization from risk. For example, using a generative AI system in recruiting to analyze resumes, summarize work histories, and so on might lead to biases against certain demographic groups and put the organization out of compliance with the New York City law governing automated employment decision tools (AEDTs). Similarly, using generative AI systems to write performance reviews creates legal liability for violating internal company policies and laws that prohibit significant decisions about someone’s employment from being automated without knowledge, consent, and recourse to human decision-making.

What are some of the opportunities?

These concerns and challenges come with some opportunities, perhaps even a way to escape the pilot purgatory organizations might face when adopting generative AI systems. For example, figuring out why staff are experimenting with certain tools can surface unarticulated technology needs. This can become a great opportunity for IT staff to vet and then adopt new tools and services that help staff complete their work.

Being proactive about adoption also allows GRC staff to evaluate and enhance existing governance approaches. For example, finding places where policies failed to capture misuse or unsanctioned use presents points for improvement and makes policies and governance approaches more robust. It is also an opportunity to review industry standards and best practices to better manage the impacts of generative AI systems.

Tackling Shadow AI is also an opportunity to educate staff on how best to use these tools and where not to use them, building a more nuanced understanding of the capabilities and limitations of these systems. If the organization doesn't have internal support teams to help with the adoption of new tools, instituting the practice of Responsible AI champions might be worth considering; such trained staff can provide practical, hands-on advice to those seeking to use these tools in compliance with organizational policies.

The goal is not to hinder experimentation but to encourage safe experimentation, so that those who are closest to the daily problems and organizational needs, i.e., rank-and-file employees, can confidently surface their desire for new tools while doing so in a policy-compliant fashion.

What are some technical and policy solutions?

If you have a leadership role in an organization, there are a few steps that you can take immediately to level up your game and fight back against the (hidden) scourge of Shadow AI.

If you don't already have a Responsible AI strategy, a short policy on where generative AI use is permitted and where it is strictly off-limits is a helpful start. It will provide staff with clarity and confidence in their experimentation. A bonus is to tie it to existing acceptable use policies within the organization to reduce barriers to adoption.

Consider setting up an ad-hoc committee that reviews requests for tools and services and can provide a fast turnaround on permitted use, triaging based on the severity and likelihood of risks. The committee can also act as a great source of insight into which parts of the organization are most interested in using these tools and where most use cases emerge. This information can then be used to tweak the policy mentioned above.

Enhancing monitoring and testing, such as the use of gated API access to third-party AI systems, allows tracking inputs and outputs to check for violations of data confidentiality and privacy requirements (a minimal sketch of such a gate appears at the end of this section). It can also help ensure that outputs from third-party systems are not biased or unsafe in a manner that compromises the purpose for which they are used within the organization.

Finally, deeper and more pervasive use of AI isn't just a technological change; it is also a cultural shift. Making Responsible AI the default mindset in your organization will go a long way in altering the everyday behaviors and workflows that will make this approach the norm rather than the exception.
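To make the idea of gated API access concrete, here is a minimal sketch of what a single chokepoint for generative AI requests might look like. It is illustrative only: the screening patterns, the placeholder forward_to_model function, and the audit log file name are assumptions rather than any vendor's real API, and a production deployment would rely on proper data loss prevention and logging infrastructure instead of a few regexes.

```python
# Minimal sketch of a "gated access" layer for a third-party generative AI API.
# All names here (forward_to_model, genai_audit.log, the regex patterns) are
# illustrative placeholders, not part of any real product's interface.

import logging
import re
from datetime import datetime, timezone

# Audit log that GRC staff could review for policy violations.
logging.basicConfig(
    filename="genai_audit.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

# Naive patterns for data that must never leave the organization.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US SSN-like strings
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),  # internal classification marker
]


def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt is allowed to leave the network."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)


def forward_to_model(prompt: str) -> str:
    """Placeholder for the actual call to a sanctioned model endpoint."""
    return f"[model response to {len(prompt)} characters of input]"


def gated_completion(user: str, prompt: str) -> str:
    """Single chokepoint through which all generative AI requests pass."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if not screen_prompt(prompt):
        logging.warning("BLOCKED user=%s at=%s", user, timestamp)
        return "Request blocked: prompt appears to contain restricted data."
    response = forward_to_model(prompt)
    # Log both sides of the exchange so outputs can also be audited later.
    logging.info(
        "ALLOWED user=%s prompt_chars=%d response_chars=%d",
        user, len(prompt), len(response),
    )
    return response


if __name__ == "__main__":
    print(gated_completion("marketing_01", "Draft a tagline for our new campaign"))
    print(gated_completion("marketing_01", "Summarize this CONFIDENTIAL client memo"))
```

The design point is the chokepoint itself: routing every request through one audited function is what turns invisible, individual use into something GRC teams can observe, budget for, and improve policy against.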
