Artificial intelligence (AI) is in vogue. As it rapidly reshapes industries, companies are racing to integrate and market AI-driven solutions and products. But how much is too much? Some companies are finding out the hard way.
The legal risks associated with AI, particularly those facing corporate leadership, are growing as quickly as the technology itself. As we explained in a recent post, directors and officers risk personal liability both for disclosing and for failing to disclose how their businesses are using AI. Two recent securities class action lawsuits illustrate the risks associated with AI-related misrepresentations, underscoring the need for management to have a clear and accurate understanding of how the business is using AI and the importance of securing adequate insurance coverage for AI-related liabilities.
AI Washing: A Growing Legal Risk
Built on the same premise as "greenwashing," AI washing is on the rise. In its simplest terms, AI washing refers to the practice of exaggerating or misrepresenting the role AI plays in a company's products or services. Just last week, two more securities lawsuits were filed against corporate executives based on alleged misstatements about how their companies were using AI technologies. These latest lawsuits, much like the Innodata and Telus lawsuits we previously wrote about, serve as early warnings for companies navigating AI-related disclosure issues.
Cesar Nunez v. Skyworks Solutions, Inc.
On March 4, 2025, a plaintiff shareholder filed a putative securities class action lawsuit against semiconductor products manufacturer Skyworks Solutions and certain of its directors and officers in the US District Court for the Central District of California. See Cesar Nunez v. Skyworks Solutions, Inc. et al., Docket No. 8:25-cv-00411 (C.D. Cal. Mar. 4, 2025).
Among other things, the lawsuit alleges that Skyworks misrepresented its position and ability to capitalize on AI in the smartphone upgrade cycle, leading investors to purchase the company's securities at "artificially inflated prices."
Quiero v. AppLovin Corp.
A similar lawsuit was filed the following day against mobile technology company AppLovin and certain of its executives. See Quiero v. AppLovin Corp. et al., Docket No. 4:25-cv-02294 (N.D. Cal. Mar. 5, 2025).
The AppLovin complaint alleges, among other things, that AppLovin misled investors by deceptively touting its use of "cutting-edge AI technologies" "to more efficiently match advertisements to mobile games, in addition to expanding into web-based marketing and e-commerce." According to the complaint, these misleading statements coincided with the reporting of "impressive financial results, outlooks, and guidance to investors, all while using dishonest advertising practices."
Risk Mitigation and the Role of D&O Insurance
Our recent posts have shown how AI can implicate coverage under all lines of commercial insurance. The Skyworks and AppLovin lawsuits underscore the particular importance of comprehensive D&O liability insurance as part of any corporate risk management solution.
As we discussed in a previous post, companies may want to assess their D&O programs from several angles to maximize protection against AI-washing lawsuits. Key considerations include:
- Policy Review: Ensuring that AI-related losses are covered and not barred by cyber, technology, or similar exclusions.
- Regulatory Coverage: Confirming that policies provide coverage not only for shareholder claims but also for regulatory claims and government investigations.
- Coordinating Coverages: Evaluating liability coverages, particularly D&O and cyber insurance, holistically to avoid or eliminate gaps in coverage.
- AI-Specific Policies: Considering the purchase of AI-focused endorsements or standalone policies for additional protection.
- Executive Protection: Verifying adequate coverage and limits, including "Side A" only or difference-in-conditions coverage, to protect individual officers and directors, particularly if corporate indemnification is unavailable.
- New "Chief AI Officer" Positions: Chief information security officers (CISOs) remain critical in monitoring cyber-related risks, but they are not the only emerging positions that must fit into existing insurance programs. Although not yet a traditional C-suite role, more and more companies are creating "chief AI officer" positions to manage the multifaceted and evolving use of AI technologies. Ensuring that these positions fall within the scope of D&O and management liability coverage is essential to affording protection against AI-related risks.
In sum, a proactive approach, especially when placing or renewing policies, can help mitigate the risk of coverage denials and enhance protection against AI-related legal challenges. Engaging experienced insurance brokers and coverage counsel can further strengthen policy terms, close potential gaps, and facilitate comprehensive risk coverage in the evolving AI landscape.