Data Powering AI Strengthens Business While Creating New Dangers, Bloomberg Law

May 2, 2025

Artificial intelligence is transforming the workplace at a rapid pace, but its growing prominence raises complex data privacy and security issues that companies will need to navigate as they scale AI deployment across their organizations.

Data is the fuel that powers AI technology, yet this core characteristic of AI raises a host of privacy and security concerns. AI’s functions and objectives include processing vast datasets for model training, correlating data points from numerous sources, building detailed profiles of individuals, and executing impactful decisions.

These capabilities are often in direct tension with privacy requirements and can trigger several data management challenges. AI also can be leveraged by bad actors to carry out a range of malicious activities, increasing cybersecurity risks for organizations.

Companies should consider the following key privacy and data security risks as they expand their deployment of AI in the workplace.

Privacy Compliance Risks

Inputting personal information into an AI tool may trigger various US and global privacy and data protection laws. In the US, 20 states have passed comprehensive privacy laws. The data protection principles that undergird most modern privacy laws, such as data minimization and purpose limitation, can be antithetical to AI’s reliance on processing vast quantities of data for myriad purposes.

In addition, many AI systems are black boxes, making it difficult to address notice and transparency requirements. AI systems also present challenges for complying with privacy rights requests. For example, it may be impossible to provide access to or delete personal information that was inputted into an AI system, as raw data is often discarded once processed and the information becomes an inherent part of the system’s model.

Specific AI systems and use cases also may trigger heightened requirements under relevant privacy laws, such as those related to profiling and automated decision-making. As companies integrate AI into their business, it’s imperative to consult with in-house and outside counsel to ensure privacy compliance risks are addressed proactively.

Data Management

Because AI tools can enable more efficient analysis of data across systems and improve over time by ingesting more data, there is an incentive to grant them access to vast quantities of company data. Even without systematic access to company data, employees may input company information into AI tools to accomplish business tasks. Companies can quickly find themselves losing control over their confidential data. For example, many vendor AI tools have broad terms seeking to use customer data for model training and development.

There also is a risk that data inputted into generative AI tools may be reproduced in responses to other users’ prompts, particularly if the tool is publicly available. To the extent that trade secrets or other confidential information are entered into AI tools, that information could be at risk of losing its protections.

These risks underscore the need for internal guardrails with restrictions on the types of information that may be inputted into AI tools, proper configuration and access controls for such tools, ongoing monitoring mechanisms, and rigorous review of AI vendor terms.

Workplace Monitoring

The use of AI technology for workplace monitoring is on the rise as companies seek to better assess employee behavior and performance, track and manage workflow, and increase workplace security. This type of monitoring can raise privacy concerns. For instance, employers that use AI tools to record and analyze employee communications need to think about federal and state surveillance and eavesdropping laws.

In addition, AI tools that incorporate biometric features may implicate stringent biometric privacy laws and heightened requirements under state privacy laws. Monitoring also may increase the risk of privacy tort claims depending on the level of invasiveness and the type of notice provided to employees.

Cybersecurity Threats

AI tools are increasing the volume and velocity of cyberattacks. For example, threat actors leverage AI to quickly and efficiently conduct social engineering, write malware, and perform reconnaissance. AI tools also are lowering the barrier to entry for cybercrime.

Almost anyone can ask a generative AI tool to write a malware script or create convincing content for phishing scams. As the threat landscape evolves, a robust cybersecurity and incident response program that includes training, tabletop exercises and other preparedness activities is more essential than ever.

Accountability

Companies are moving quickly to integrate AI tools, often without fully considering the array of privacy, cybersecurity, and other legal risks. The use of unsanctioned AI tools by employees also has increased, giving rise to “shadow AI.”

Speed to market, a lack of visibility into AI tools, and the absence of established industry baselines for AI governance can expose companies to significant risk. Companies should consider implementing an AI governance program that delegates roles and responsibilities for managing AI risks, establishes an approval process for new AI use cases, imposes conditions and requirements on use of AI tools, and restricts what data may be inputted into AI tools.

It’s unquestionable that AI will be an impactful force in the workplace. The companies that successfully integrate AI tools into their business will be those that work to address these privacy and cybersecurity risks.

While AI technology potentially enables companies to do more, produce more, and achieve more, it’s the data powering this technology that remains a company’s strongest asset, and it must be protected at all costs.


Copyright 2025 Bloomberg Industry Group, Inc. (800-372-1033) www.bloombergindustry.com. Reproduced with permission.
