The privacy contradiction
Two public pages. Opposite claims.
Amira’s privacy page states: “Amira Learning never shares student data with public AI platforms like ChatGPT, Claude, Gemini, or any other third-party foundation models.” Its distribution partner, Pearson Canada, goes further: the models are “entirely internally developed, private models.”
Anthropic’s own customer story page, at claude.com/customers/amira, describes in detail how Amira uses Claude to generate comprehension dialogues, questions, hints, response pathways, word definitions, and custom rubrics. Amira’s Chief AI Scientist, Ran Liu, is quoted on that page explaining the benchmarking process that selected Claude.
These claims cannot both be true. The architectural distinction Amira could reasonably draw — that Claude is used offline to pre-generate content, which humans review before it is served to students — is a legitimate design choice. But “entirely internally developed” is directly contradicted by the company’s own published partnership, and school districts making procurement decisions deserve the real answer.