Hacker News | new | past | comments | ask | show | jobs | submit | nvanlandschoot's comments

Yeah, the progress is still incredibly impressive even if 15× is overstated. Curious to see how far it goes in the future.


“Haiku is faster than Opus” is fine as a simple statement. But if you’re going to say “15× faster” in a model card, it should be measured at similar accuracy. Otherwise you’re mostly comparing different settings rather than demonstrating a technological leap in model performance. It’s not technically wrong, it’s just not very useful and a bit misleading as a headline.


I still disagree. They clearly stated it was a smaller model and that it scored lower on benchmarks. It was clear that this model is for people who want to trade quality for speed.


I think they modified the page. If you search for GPT-5.3-Codex-Spark, Google still has it indexed with 15x. Searching: GPT-5.3-Codex-Spark + "15x" will show all the downstream sites that picked up the claim.


The Google snippet isn't outdated. It's from the <meta> tag. It's still there, and it still says "Introducing GPT-5.3-Codex-Spark—our first real-time coding model. 15x faster generation, 128k context, now in research preview for ChatGPT Pro users."

I don't think the visible page text ever said 15x faster. It's possible they modified it before I saw it, but it's not in the oldest Internet Archive version either.

The other news sites that mention 15x faster are probably either getting it from the same <meta> tag that shows up in search snippets, or from the RSS feed. Both would be generated from the same source text in whatever platform they use to write their posts.
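You can check where a snippet like that comes from yourself. Here's a minimal sketch that extracts the `<meta name="description">` content from a page, which is what search engines typically surface as the snippet; the HTML below is a made-up stand-in, not OpenAI's actual page:

```python
from html.parser import HTMLParser

class MetaDescription(HTMLParser):
    """Collect the content of the <meta name="description"> tag."""
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if d.get("name") == "description":
                self.description = d.get("content")

# Hypothetical page: the claim lives only in the meta tag,
# not in the visible body text.
html = """<html><head>
<meta name="description" content="15x faster generation">
</head><body><p>Visible page text without the claim.</p></body></html>"""

parser = MetaDescription()
parser.feed(html)
print(parser.description)  # the snippet text, absent from the body
```

Point the same parser at a fetched copy of the real page and you can confirm whether the snippet text exists anywhere in the visible body or only in the tag.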


Method: I used OpenAI’s published SWE-Bench Pro chart points and matched GPT-5.3-Codex-Spark against the baseline model at comparable accuracy levels across reasoning-effort settings. At similar accuracy, the effective speedup is closer to ~1.37× than 15×.
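The matching step can be sketched as follows. Each model gets a list of (accuracy, speed) points, one per reasoning-effort setting, and the baseline's speed is interpolated at an accuracy both models reach; every number below is a hypothetical placeholder, not a value from OpenAI's chart:

```python
def speed_at_accuracy(points, target_acc):
    """Linearly interpolate a model's generation speed at a given accuracy,
    from (accuracy, tokens_per_second) chart points."""
    pts = sorted(points)
    for (a0, s0), (a1, s1) in zip(pts, pts[1:]):
        if a0 <= target_acc <= a1:
            t = (target_acc - a0) / (a1 - a0)
            return s0 + t * (s1 - s0)
    raise ValueError("target accuracy outside measured range")

# Hypothetical chart points: higher reasoning effort -> higher accuracy, lower speed.
baseline = [(0.40, 20.0), (0.50, 12.0), (0.60, 6.0)]
spark    = [(0.35, 90.0), (0.45, 60.0), (0.55, 30.0)]

target = 0.50  # an accuracy level both models actually reach
effective = speed_at_accuracy(spark, target) / speed_at_accuracy(baseline, target)
print(f"effective speedup at {target:.0%} accuracy: {effective:.2f}x")
```

Comparing each model at its own default settings instead (fast/low-accuracy vs. slow/high-accuracy) is what inflates the headline number.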

