Stay ADvised: 2026, Issue 2
In This Issue:
- New York AG Scrutinizes Instacart's Algorithmic Pricing Practices
- New York Adopts Two Bills Regulating Commercial Use of AI-Generated Content
- Supergoop! Sunscreen Hit With False Ad Class Action Claiming "Mineral" Products Contain "Unnatural" Ingredients
- FTC Sets Aside Rytr Order, Finding Prior Order Was Not Supported by the Complaint
New York AG Scrutinizes Instacart's Algorithmic Pricing Practices
On January 8, 2026, the New York Attorney General's Office sent a formal information request to Instacart seeking details about its pricing practices and compliance with New York's recently enacted Algorithmic Pricing Disclosure Act. The AG's letter follows a December 2025 report by Consumer Reports and Groundwork Collaborative alleging that Instacart users were shown materially different prices for identical products at the same stores and at the same time.
The AG's letter raises concerns that Instacart's pricing experiments—and the tools it offers to retailers and CPG brands—may constitute "personalized algorithmic pricing" under New York law, which requires a clear and conspicuous disclosure whenever an algorithm sets prices using personal data. The Attorney General specifically questions whether Instacart's current disclosures, which appear only through fine-print links on certain pages, satisfy the statute's disclosure requirements.
Although Instacart has publicly stated that it ended item-level price testing in December 2025 and that its testing did not rely on personal or behavioral data, the AG notes that Instacart has also acknowledged that retail and brand partners may continue to test promotions and discounts on the platform. The letter seeks extensive information about Instacart's pricing tools (including Eversight and smart cart technology), agreements with retailers and brands, the scope of any price experiments, and how consumer data may be used in pricing or promotions.
Why it matters: This inquiry signals heightened enforcement attention on algorithmic pricing and transparency—particularly where platforms, retailers, or brands rely on automated tools to test prices, discounts, or promotions. Companies operating in New York should carefully evaluate whether any pricing variability could be viewed as "personalized algorithmic pricing" and whether disclosures are sufficiently clear, prominent, and consistently presented wherever prices are displayed.
New York Adopts Two Bills Regulating Commercial Use of AI-Generated Content
New York Governor Kathy Hochul signed two bills on December 11, 2025, that together expand New York's approach to AI transparency and post-mortem rights of publicity in entertainment-adjacent contexts.
Synthetic performers in advertising (S.8420-A / A.8887-B). The first law amends New York's General Business Law to require a clear and conspicuous disclosure when an advertisement includes an AI-generated "synthetic performer." The law includes carveouts for certain audio advertisements and for situations where AI is used solely to translate the language of a human performer, and it generally does not apply to ads or promotional materials for certain expressive works (e.g., films/TV/streaming/video games) where the use of the synthetic performer is consistent with its use in the expressive work. A first violation may result in a $1,000 civil penalty, with $5,000 for subsequent violations. The disclosure requirement becomes effective 180 days after enactment (approximately June 2026, based on the December 11, 2025, signing).
Post-mortem name/image/likeness and digital replicas (S.8391 / A.8882). The second law strengthens New York's right of publicity framework by requiring consent from heirs or executors for the commercial use of a deceased individual's name, image, or likeness after death. In addition, it updates the definitions and liability standards applicable to digital replicas—including tightening the rules for using a deceased performer's digital replica in certain expressive works by moving from a disclaimer-oriented approach toward a prior-consent model. The law takes effect immediately.
Key Takeaways
These bills illustrate an accelerating state-level push to address AI-enabled impersonation and consumer confusion in advertising while simultaneously expanding protections against unauthorized post-mortem commercialization. For advertisers and studios, the practical compliance focus is (i) auditing whether campaigns use synthetic performers (and whether an exception applies), (ii) building disclosure into production workflows ahead of the June 2026 effective date, and (iii) obtaining appropriate rights and permissions—particularly where a campaign uses a deceased individual's identity or a digital replica in a way that could trigger New York's consent requirements.
Supergoop! Sunscreen Hit With False Ad Class Action Claiming "Mineral" Products Contain "Unnatural" Ingredients
Supergoop! was the target of a proposed California class action alleging that its marketing of certain sunscreen products as "100% Mineral" and "Mineral" is false and misleading under state consumer protection laws.
According to the complaint, reasonable consumers interpret "100% Mineral" claims to mean that the entire product formula, not just the active sunscreen ingredients, consists solely of mineral or natural ingredients and contains no synthetic, chemically altered, or otherwise non-mineral components. The plaintiff further alleges that Supergoop's use of "100% Mineral" language creates a net impression that the products are free from non-mineral or highly processed ingredients.
The complaint challenges labeling and marketing for more than a dozen Supergoop sunscreen products sold nationwide, including products marketed for adults and children. The plaintiff alleges that, contrary to the challenged representations, the products contain numerous non-mineral ingredients. In addition, the complaint asserts that certain ingredients derived from minerals or natural sources have undergone significant chemical processing or modification, resulting in ingredients that are materially different from their original mineral or natural form.
The lawsuit asserts claims under California's Unfair Competition Law and False Advertising Law and seeks restitution, injunctive relief, punitive damages, and attorneys' fees on behalf of a proposed statewide class of California purchasers. Supergoop denies the allegations, and the case is in its early stages.
Key Takeaways
The case reflects continued litigation risk around "100%," "free-of," "natural," and "mineral" claims—particularly where plaintiffs argue that consumers reasonably interpret such claims to apply to an entire formulation rather than to discrete components. While some courts have been willing to credit reasonable consumers with the ability to read ingredient declarations, that willingness is not universal and frequently turns on the specifics of the label. What is certain is that ingredient-based representations remain among the most common risk points for CPG advertising, and brands can manage that risk through thoughtful use of qualifiers and marketing copy.
FTC Sets Aside Rytr Order, Finding Prior Order Was Not Supported by the Complaint
The Federal Trade Commission has reopened and set aside its December 2024 consent order against Rytr LLC, concluding that the complaint underlying the order failed to plead facts sufficient to support a violation of Section 5 of the FTC Act.
Rytr offers an AI-enabled writing assistance service, including functionality that could be used to generate draft consumer reviews. In 2024, the FTC alleged that this service provided the "means and instrumentalities" for deception and constituted an unfair practice because it could generate large volumes of reviews without regard to accuracy. Rytr agreed to settle those allegations through a consent order that broadly barred it from offering any AI-enabled service designed to generate consumer reviews or testimonials.
In its December 2025 Order Reopening and Setting Aside the 2024 Order, the Commission determined that the complaint did not adequately plead either theory of liability. With respect to the means-and-instrumentalities claim, the FTC concluded that the complaint failed to allege that Rytr itself created deceptive marketing content, that its service was inherently deceptive, or that Rytr knew or had reason to know its customers would use the tool to violate the law. The Commission emphasized that providing a general-purpose tool with legitimate, pro-consumer uses—such as assisting users in drafting or editing review content—does not, without more, establish liability under Section 5.
The FTC also concluded that the complaint failed to plead an unfairness violation. The allegations did not establish that Rytr's service caused or was likely to cause substantial consumer injury, nor did they plausibly allege that any such injury outweighed countervailing benefits to consumers or competition. Because the complaint failed to plead a cognizable Section 5 violation, the Commission found that the consent order provided no consumer benefit and was therefore not in the public interest.
In reaching this conclusion, the FTC also cited recent federal policy directives calling on agencies to review and, where appropriate, set aside final orders that unduly burden artificial intelligence innovation. The Commission stressed, however, that its decision rested on the legal insufficiency of the complaint itself. The FTC reiterated that it would continue to pursue enforcement where AI tools are used to deceive consumers or where companies misrepresent the capabilities or outputs of AI-enabled products.
Key Takeaways
- The FTC's decision reflects both legal pleading concerns and a broader policy shift favoring AI innovation. While the Commission grounded its analysis in Section 5 doctrine, the Order repeatedly cites Executive Order 14179 and the Administration's AI Action Plan, suggesting that current federal AI policy materially informed the public-interest analysis.
- The outcome should not be read as a blanket rejection of means-and-instrumentalities theories. The FTC's criticism focused on the specific allegations pleaded against Rytr and the nature of its AI tool; it remains unclear whether the Commission would apply the same analysis to non-AI companies or to tools with fewer legitimate consumer uses.
- General-purpose AI tools appear to receive particular solicitude under the current framework. The Commission emphasized that Rytr's product had pro-consumer applications and rejected theories premised on hypothetical misuse, but it is uncertain whether products outside the AI context—or AI tools marketed more aggressively for consumer-facing claims—would be treated similarly.
- The Order narrows—but does not eliminate—unfairness and scienter theories in the AI context. The FTC signaled skepticism toward speculative harm and attenuated knowledge allegations, yet stopped short of announcing new standards, leaving open how future complaints may be pleaded to address these deficiencies.
- The decision is best viewed as fact- and moment-specific rather than a durable safe harbor. Companies should be cautious about extrapolating broadly from Rytr, particularly given the role of current executive policy and the possibility that a future Commission could revisit similar conduct under a different enforcement posture.