OpenAI Drops GPT-5.3-Codex-Spark: 15x Faster Code Generation

OpenAI's new slimmed-down coding model promises massive speed gains for Pro subscribers who hate waiting.

OpenAI just dropped a research preview of GPT-5.3-Codex-Spark, and the pitch is simple: code generation that doesn't make you wait around.

The new model is a lighter version of GPT-5.3-Codex, built specifically for real-time coding conversations rather than slow, batch-style AI agents. The speed numbers are legitimately impressive: 15x faster code generation overall, 80% faster round-trip latency, and 50% faster time-to-first-token.

Translation: you ask for code, you get code. Fast.

The catch? It's Pro users only for now. OpenAI is clearly betting that developers want snappy back-and-forth interactions, not AI that thinks for ages before spitting out a solution.

Whether the speed trade-offs affect output quality remains to be seen. But for coders tired of watching spinners, Spark might be exactly what the doctor ordered.