Adjacent case · Amira Learning

The marketing of alignment.

States are spending tens of millions of dollars on an AI reading tutor whose headline evidence claim, whose vendor-funded studies, and whose privacy representations do not survive a two-hour audit.

The claim vs. the record
The claim

"Strong (Tier 1) ESSA evidence"

The Amira marketing site presents the product as holding ESSA’s top evidence tier, the designation states rely on for purchasing decisions under federal Title I funding. "Strong" is the label that unlocks procurement without further review.

The record

Moderate (Tier 2)

Evidence for ESSA, the independent review maintained by the Johns Hopkins University Center for Research and Reform in Education, lists Amira at Moderate (Tier 2). The discrepancy is verifiable in sixty seconds at evidenceforessa.org; it is not a matter of interpretation.

The studies Amira cites

An RCT from 2001.

The single randomized controlled trial in Amira’s evidence base tested software built in 2001 at Carnegie Mellon. Amira Learning was founded in 2018; seventeen years separate the study from the company, and the product has been fundamentally redesigned since. The comparison condition in the RCT was Sustained Silent Reading: children sitting quietly with books, with no feedback, no tutor, no adult interaction at all. Nearly any interactive tool beats that baseline.

Texas · 15,424 students

Effect size +0.26 / +0.06

Kindergartners showed a modest effect of +0.26. First graders showed +0.06, functionally zero: the equivalent of moving a student from the 50th percentile to roughly the 52nd.

Louisiana · 79,084 students

Effect size +0.03 / +0.05

Fourth graders: +0.03. Third graders: +0.05. Only 5 to 19 percent of students reached the recommended usage levels. Amira’s site labels the study "Independent Third-Party"; the study’s own cover page states that Amira contracted with Instructure to conduct it.
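The percentile translations used above follow from the standard normal CDF: a standardized effect size d places the average treated student at the Φ(d) percentile of the control-group distribution. A minimal sketch of that arithmetic (the study labels are shorthand, not the reports' own names):

```python
from statistics import NormalDist

def percentile_after_effect(d: float) -> float:
    """Percentile of the average treated student within the control
    distribution, given a standardized effect size d (Cohen's d)."""
    return NormalDist().cdf(d) * 100

# Effect sizes reported across the Texas and Louisiana studies
effects = {
    "Texas kindergarten": 0.26,
    "Texas grade 1": 0.06,
    "Louisiana grade 4": 0.03,
    "Louisiana grade 3": 0.05,
}
for label, d in effects.items():
    print(f"{label}: d = {d:+.2f} -> {percentile_after_effect(d):.0f}th percentile")
```

For d = +0.06 this lands at about the 52nd percentile, matching the figure in the text; even the largest reported effect, +0.26, moves the average student only to about the 60th.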

The privacy contradiction

Two public pages. Opposite claims.

Amira’s privacy page states: “Amira Learning never shares student data with public AI platforms like ChatGPT, Claude, Gemini, or any other third-party foundation models.” Its distribution partner Pearson Canada goes further: the models are “entirely internally developed, private models.”

Anthropic’s own customer story page, at claude.com/customers/amira, describes in detail how Amira uses Claude to generate comprehension dialogues, questions, hints, response pathways, word definitions, and custom rubrics. Amira’s Chief AI Scientist, Ran Liu, is quoted on that page explaining the benchmarking process that selected Claude.

These two claims cannot both be true. The architectural distinction Amira could reasonably make, that Claude is used offline to pre-generate content reviewed by humans before it is served to students, is a legitimate design choice. But "entirely internally developed" is directly contradicted by the company’s own published partnership, and school districts making procurement decisions deserve the real answer.

Read the long-form
Investigation · April 3, 2026

The evidence problem

What the studies actually say about Amira AI Tutor, and where the marketing diverges from the data.

Read →
Investigation · April 5, 2026

When the privacy page and the partnership page disagree

Amira says it does not share student data with Claude. Anthropic’s own customer story says otherwise.

Read →