In late 2025, a class action lawsuit was filed against Sharp HealthCare, alleging that the San Diego-based provider used an artificial intelligence dictation and recording tool to capture patient-clinician conversations without adequate notice or consent. The complaint asserts that the ambient AI engine recorded sensitive clinical dialogue in exam rooms and over the telephone, generating automated clinical notes while failing to secure the legally required consent documentation from patients.¹
The lawsuit claims that staff have used the technology since April 2025 and that, although Sharp purportedly documented patient consent, in many cases consent was never actually obtained and was instead retroactively inserted into records by the AI tool or other mechanisms.¹ Plaintiffs estimate that hundreds of thousands of encounters may have been recorded under this process without proper transparency.¹
This legal action highlights the complex intersection of telecommunications law, call and voice recording consent requirements, AI-powered clinical tools, and federal privacy regimes such as HIPAA.¹ While the litigation primarily alleges violations of privacy and wiretapping statutes, its operational dimensions implicate healthcare contact systems that automate or monitor voice interactions, bringing them within the scope of technology governance frameworks.¹