By Mark Adams - Beacon - March 26, 2026
Artificial intelligence (AI) is rapidly evolving. As employees have begun to experiment with and adopt this technology, typically through software on their computers, companies have responded by developing AI policies to govern its proper use.
However, AI is now moving beyond software and into the physical workplace. One of the newest examples is the emergence of AI‑enabled smart glasses—wearable devices capable of recording audio and video, accessing cloud‑based AI assistants, translating conversations in real time, identifying objects, and retrieving information through voice commands. While these tools can offer productivity gains in areas such as field service, manufacturing, logistics, and training, they also introduce significant compliance, privacy, and security considerations for employers.
For HR leaders and business owners, the challenge is balancing the operational benefits of these technologies with the legal obligations that arise when employees use devices capable of constant recording, data capture, and AI‑enabled analysis.
From a federal compliance standpoint, two areas deserve particular attention. First, the National Labor Relations Act (NLRA) protects employees’ rights to engage in concerted activity, including discussions about wages, working conditions, and union organizing. Workplace policies that broadly prohibit employees from recording conversations or workplace conditions can run afoul of the NLRA if they are interpreted as discouraging employees from documenting workplace concerns or engaging in protected activity. As a result, employers implementing policies governing AI‑enabled glasses should ensure that restrictions are narrowly tailored and include language clarifying that the policy is not intended to interfere with employees’ rights under federal labor law.
Second, the Occupational Safety and Health Act (OSH Act) may also be implicated. Wearable devices that include displays, cameras, or voice prompts could present workplace safety hazards if they distract employees from operating machinery, driving vehicles, or working in hazardous environments. Employers have a general duty to provide a workplace free from recognized hazards, which means organizations should evaluate whether smart glasses could create distraction risks or interfere with required personal protective equipment. In some environments, limiting their use in production areas or during safety‑sensitive tasks may be necessary. Additionally, employees may wish to record unsafe working conditions or hazards as part of a complaint or concern they intend to raise with OSHA; such recording would likewise be protected activity.
Privacy laws present another layer of complexity, particularly because AI glasses often include continuous audio or video recording features. At the federal level, the Electronic Communications Privacy Act (ECPA) and related wiretap provisions prohibit the interception of certain communications without consent. More importantly for many employers, several states in the Northeast impose strict requirements regarding audio recording.
States such as Massachusetts and Connecticut generally require two‑party consent before recording a conversation, meaning all participants must agree to the recording. Unauthorized recording in these states can potentially expose individuals—and in some cases employers—to civil or criminal liability. New York and Rhode Island operate under one‑party consent rules, but privacy expectations may still arise depending on the setting, particularly in areas such as restrooms, locker rooms, or medical facilities.
From a business perspective, AI glasses may present a significant data security risk. These devices can capture proprietary processes, confidential documents, customer interactions, or sensitive operational details. If recordings are uploaded automatically to cloud‑based AI systems, organizations may unintentionally expose trade secrets or regulated data. For employers that handle and maintain protected health information (PHI), a failure to protect such information from unauthorized access, disclosure, or dissemination could conceivably raise data breach concerns that implicate the Health Insurance Portability and Accountability Act (HIPAA).
In short, this technology brings numerous competing interests into play, and employers must account for all of them.
As wearable AI technologies become more common, organizations should consider developing clear policies addressing their use. Employers may wish to define where smart glasses may be used, prohibit recording conversations without proper consent, restrict the capture of confidential or proprietary information, and evaluate safety risks in operational environments. Policies should also clarify that nothing in the policy is intended to interfere with employees’ rights under applicable labor laws.
AI glasses and other wearable technologies will likely become more common in the coming years as organizations look to integrate artificial intelligence into everyday workflows. By proactively addressing compliance obligations, privacy concerns, and data security risks, employers can take advantage of the productivity benefits of AI wearables while minimizing legal and operational exposure.