Still Human · Analysis

The reading lesson inside the two-hour day

What the Alpha School coverage keeps missing, and why it matters more than the test scores.


I taught a kid once who could read anything you put in front of him in second grade. Word-perfect. Fluent on the page. His comprehension scores were in the high nineties on every benchmark. His parents were thrilled. I was thrilled.

In fourth grade, he fell off a cliff.

Not a metaphorical cliff. A real one. He couldn’t decode “photosynthesis.” He couldn’t break “parliament” into syllables. He had spent two years memorizing the shapes of words and guessing from pictures, and the test we were using couldn’t see it. The test measured comprehension. He had vocabulary. He had context. He had parents who read to him every night. So the numbers looked like a prodigy, and the decoding skills underneath looked like nothing, and nobody noticed until the pictures disappeared from the books.

That kid is in my head every time I read about Alpha School.


If you haven’t been following the story, here’s the version most people know. Alpha is a private school in Austin charging forty to seventy-five thousand dollars a year. Kids do academics for two hours on an iPad, then spend the afternoon on life skills. The school markets itself as the top one percent nationally. Parents are buying it.

  • Top annual tuition, per student: $75K
  • Daily iPad academic block: 2 hours
  • Learning velocity claim (investor pitch): 10×
  • Peer-reviewed studies: 0

The Secretary of Education visited the Austin campus in September and gave it the strongest endorsement of her fifty-state tour. A ten-year-old Alpha student sat with the First Lady at the State of the Union. The founders are now in the White House presenting at AI summits alongside Microsoft and Google.

That’s the story the press is telling. And it’s a real story, as far as it goes.

But I’ve been reading the audits. The charter applications. The state denial letters. The investigative work from Wired and 404 Media. And the detail that keeps me up isn’t the ten-times-learning-speed claim, which is mathematically impossible on its face. It isn’t the AI-hallucinated percentiles, though we should talk about those in a minute. It’s what’s happening inside the two-hour reading block for a five-year-old.

Nothing. That’s not hyperbole. Alpha School has no certified reading teachers and no coherent structured literacy program. All foundational reading instruction is outsourced to three off-the-shelf apps: Mntava, Lolo, and Clear Fluency.

I looked at each of these the way a literacy coach would look at a curriculum adoption. Mntava skips explicit phonemic awareness, the auditory foundation of every evidence-based reading program on earth. Lolo uses analogy phonics and word families, and its early texts include letter-sound patterns the kids haven’t been taught yet, which forces children to guess from pictures when they hit an unfamiliar word. Clear Fluency uses speech recognition to assess oral reading, but it assumes the child already knows how to decode. It’s not a teaching tool. It’s a measurement tool masquerading as one.

Stack those three together and you have a system where one app skips the sound work, one app trains kids to guess from pictures, and one app grades them on skills nobody actually taught. No adult in the room is permitted to intervene. The staff at Alpha aren’t called teachers. They’re called guides, and Alpha’s own documentation says guides don’t teach academic content. A comprehensive review of the founders’ public appearances turns up not a single use of the words phonics, phonemic awareness, or science of reading.

This is the foundation you’re paying seventy-five thousand dollars for.


The MAP Test Illusion

Here’s the part I wish more reporters would press on. Alpha’s parents aren’t stupid. They’re looking at a dashboard that tells them their kindergartener is reading in the ninety-ninth percentile. That dashboard is generated by the NWEA MAP Growth test, which is a legitimate assessment used in thousands of real schools. The percentile looks real because it is real, in the narrow sense that the test was scored correctly.

But MAP Growth’s reading assessment primarily measures comprehension. It leans on vocabulary, background knowledge, and the ability to infer meaning from context. It doesn’t measure phonemic awareness. It doesn’t measure decoding. It doesn’t measure oral reading fluency.

A five-year-old can score in the ninety-ninth percentile on MAP reading without being able to sound out a single unfamiliar word.

In the early grades, kids are brilliant compensators. They memorize word shapes. They use pictures. They leverage the family dinner conversations most of their peers never had. And then fourth grade arrives, and the pictures disappear, and the vocabulary gets technical, and the whole fragile scaffolding collapses. Educators call this the fourth grade wall. It’s well documented. It is exactly the failure mode the Science of Reading exists to prevent, and Alpha’s assessment system is structurally blind to it.

Remember the kid from the beginning? That’s him. That’s every kid this model will produce, just on a larger scale, and with a more expensive dashboard lying to their parents for longer.


What I’m Not Saying

I want to be clear about what I’m not saying. I’m not saying AI has no place in a classroom. I’ve built an entire curriculum pipeline using AI tools. I run a company whose thesis is that AI, used correctly, can help teachers do the work of the Science of Reading faster and at scale. I’m not a Luddite. I’m not protecting a guild.

What I’m saying is narrower and more specific.

The Alpha model is using AI to deliver pedagogy that independent experts haven’t validated, measured by a test that can’t see the skill gap, staffed by bachelor’s-degree contractors who aren’t permitted to teach, and sold to families at luxury prices with marketing percentiles that, in at least one documented case, were generated by pasting student data into a consumer chatbot and asking it to do statistics. I’m not exaggerating that last part. Alpha’s principal reportedly used Gemini and Claude to calculate the school’s “top 0.1% globally” claim. Those are text prediction models. They hallucinate arithmetic. Anyone with a basic understanding of how large language models work should refuse to stake an educational reputation on their numerical output. And anyone staking a child’s reading development on an assessment that can’t see decoding deficits is making a bet they can’t evaluate until fourth grade, by which point the damage takes years to undo.

The worst part is that the technology isn’t what’s driving the results. A parent with access to the same 2 Hour Learning software ran it as a homeschool pilot, without the luxury campus, without the intensive adult supervision, and without the cash rewards Alpha hands to kids for clicking through modules fast enough. The result was 1× growth, the standard national average. The “miracle” appears to be wealth, supervision, and financial incentives. Strip any of those away and the software performs like software.
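The velocity arithmetic is worth making concrete. Below is a back-of-the-envelope sketch; the RIT-point figures are my own illustrative assumptions, not NWEA’s published norms, and the velocity formula (observed growth divided by norm-expected growth) is the most generous reading of the investor pitch.

```python
# Back-of-the-envelope check on the "10x learning velocity" claim.
# The RIT figures below are illustrative assumptions, not NWEA's
# published norms; only the order of magnitude matters.

# Assumption: a typical early-elementary student gains about
# 15 RIT points on MAP Growth reading in one school year.
TYPICAL_ANNUAL_GAIN = 15

# Assumption: on the order of 90 RIT points separate a typical
# kindergartner from a typical high-school reader.
K12_READING_SPAN = 90

def velocity(observed_gain: float,
             expected_gain: float = TYPICAL_ANNUAL_GAIN) -> float:
    """Learning velocity as pitched: observed growth over expected growth."""
    return observed_gain / expected_gain

# The homeschool pilot: ordinary growth, ordinary velocity.
print(velocity(15))                      # 1.0, the national average

# What a sustained 10x velocity would require in raw score terms:
required_gain = 10 * TYPICAL_ANNUAL_GAIN
print(required_gain)                     # 150 RIT points in one year

# That single-year gain exceeds the entire assumed K-12 reading span,
# so a second year at 10x has nowhere on the scale left to go.
print(required_gain > K12_READING_SPAN)  # True
```

Even under these generous assumptions, a sustained 10× pace would exhaust the measurable scale in about a year, which is why the claim fails on arithmetic before it ever gets to pedagogy.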


The Room the Algorithm Built

I keep coming back to the story of the nine-year-old at the Brownsville campus. The one who got stuck in a multiplication loop on an app called IXL, who broke down sobbing that she’d rather die than keep going, who lost so much weight her pediatrician had to write a formal note instructing the school to feed her during the day. According to the mother’s account in the Wired reporting, the staff told the child she hadn’t earned her snacks yet.

In this model, the algorithm is never wrong. If the child is suffering, the child is the problem.

The guides in that room weren’t trained teachers. They weren’t empowered to bypass the software. They were following a protocol that, by design, puts the algorithm above human judgment.

I spent twenty years in classrooms and in a vice principal’s office. I watched systems fail kids in a dozen different ways. I left education because I couldn’t keep enforcing standards I no longer believed in. But I have never, in any of those buildings, seen an adult look at a child in that kind of distress and withhold food until a metric moved. Not in the worst school I ever worked in. Not once.

That’s not what happens when AI replaces teachers. That’s what happens when AI replaces the permission to be human in the room.


Follow the Money

The model I just described, the one with no certified teachers and no structured literacy program and a nine-year-old earning her snacks, has been rejected by seven of eight state charter boards, including Texas, Pennsylvania, North Carolina, Arkansas, Utah, and South Carolina. All of them looked at the evidence and said no. Only Arizona approved it, on a four-to-three vote. If a model keeps failing at the state level, the rational business response is to fix the model. Alpha’s response has been to bypass the states.

The state-level resistance is exactly why the network has been pouring money into federal and gubernatorial politics instead. The single clearest data point in the entire paper trail is a Virginia donation whose timeline reads like a forensic exhibit. A shell company with no website, no employees, and no business operations was incorporated in Delaware on May 25, 2023. The next day, it donated one million dollars to a governor’s political action committee. By January 2026, that state became the first to opt into the federal education tax credit program.

This is not an innovation story. It is a regulatory arbitrage story that uses innovation as its marketing surface. The evidence that protects kids and taxpayers isn’t being beaten on the merits. It’s being bypassed by checkbook.


What Educators and Parents Should Take From This

When a school tells you its kids are in the ninety-ninth percentile, ask which test. Ask what that test measures. Ask what it doesn’t measure. Ask whether anyone in the building has heard of the fourth grade wall. Ask what structured literacy program the school uses, by name, and then look up whether that program has been validated by EdReports or the What Works Clearinghouse. If the answer is three iPad apps and a set of bachelor’s-degree contractors, you have your answer.

When a company tells you its AI is producing miracles, ask for independent peer-reviewed evidence. Not internal dashboards. Not blog posts. Not founders on investor podcasts. Real studies, run by people who don’t work for the company, published in places where other researchers can check the work. The evidentiary standard that has protected children and taxpayers for a century isn’t innovation-hostile. It’s the reason we know what works.

And when a model that fails every one of those standards gets invited to the White House and held up as the future of American public education, pay attention. Because the play isn’t to win state charter boards on the evidence. Seven out of eight have already said no. The play is to bypass the evidence entirely through federal endorsement and state voucher programs, and to use public tax dollars to scale a model that can’t survive independent review.

I’m still building with AI every day. I believe in the tools. I think they can make the Science of Reading accessible in classrooms that have never had a trained literacy coach. But the version of the future where a nine-year-old is told to earn her snacks by clicking faster is not the future I’m building. And I think most teachers and most parents, if they knew what was actually inside the two-hour day, wouldn’t be building it either.

The question isn’t whether AI belongs in schools. It’s already there. The question is whether we let the people running the biggest experiment in American K–12 education define what “working” means before anyone independent has checked the pedagogy inside the box.

Check the box. Look at the reading lesson. Then decide.


Sources & Further Reading

  • Wired — investigative reporting on the Brownsville campus and documented student distress cases.
  • 404 Media — leaked Alpha School internal documents, including the “no humans in the loop” goal statement.
  • Pennsylvania Department of Education — Unbound Academy charter application denial letter.
  • Astral Codex Ten — independent homeschool pilot of the 2 Hour Learning software showing 1× growth.
  • NWEA MAP Growth — technical documentation on what the reading assessment measures and what it does not.
  • State charter applications (Arizona, Pennsylvania, North Carolina, Texas, Utah, Arkansas, South Carolina) — public record.
  • Campaign finance filings for Future of Education LLC and JWL34 LLC — Virginia State Board of Elections.