OpenAI has just rolled out something pretty exciting: the o1-pro, a beefed-up version of its o1 “reasoning” AI model. This new model is now available through OpenAI’s developer API, and it’s designed to think harder and deliver better answers to tough problems. But there’s a catch—it’s only available to developers who’ve already spent at least $5 on OpenAI API services.
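For developers who clear that $5 spend threshold, calling the new model looks much like any other OpenAI API request. Here's a minimal sketch using the official OpenAI Python SDK; the "o1-pro" model id and the Responses endpoint are assumptions drawn from OpenAI's developer documentation rather than details confirmed in this article.

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment and the account
# has already spent the required $5 on API usage.
client = OpenAI()

# Assumption: o1-pro is exposed through the Responses endpoint
# under the model id "o1-pro".
response = client.responses.create(
    model="o1-pro",
    input="Plan a test strategy for a flaky distributed job scheduler.",
)

print(response.output_text)

# Token counts reported back by the API; useful for the cost math below.
print(response.usage.input_tokens, response.usage.output_tokens)
```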
Now, let’s talk pricing. The o1-pro doesn’t come cheap. It costs $150 per million input tokens and a hefty $600 per million output tokens. To put that in perspective, that’s double the input price of OpenAI’s GPT-4.5 and ten times the price of the standard o1. It’s a big ask, and OpenAI is betting that the model’s enhanced capabilities will be worth the premium for developers.
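To make those rates concrete, here's a quick back-of-the-envelope calculation at the published prices. The token counts are purely illustrative, and keep in mind that reasoning models typically bill their hidden "thinking" tokens as output, so real requests tend to cost more than the visible answer alone would suggest.

```python
# Rough cost of a single o1-pro request at the published per-token rates.
INPUT_USD_PER_MILLION = 150.0   # $150 per 1M input tokens
OUTPUT_USD_PER_MILLION = 600.0  # $600 per 1M generated (output) tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Approximate USD cost of one request."""
    return (input_tokens * INPUT_USD_PER_MILLION
            + output_tokens * OUTPUT_USD_PER_MILLION) / 1_000_000

# Illustrative numbers: a 2,000-token prompt with a 10,000-token response
# (reasoning tokens included) works out to roughly $6.30.
print(f"${estimate_cost(2_000, 10_000):.2f}")
```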
An OpenAI representative put it this way: “O1-pro in the API is a version of o1 that uses more computing to think harder and provide even better answers to the hardest problems.” The release comes largely in response to developers asking for more reliable responses from the model.
But let’s keep it real: early feedback from users who have had access to o1-pro in ChatGPT since December has been mixed. Some users found the model struggled with tasks like Sudoku puzzles and simple optical-illusion jokes. And while OpenAI’s internal tests from last year showed only a slight improvement over the standard o1 on coding and math problems, o1-pro did perform more consistently on those tasks.
If you’re a developer weighing a switch to o1-pro, those mixed reviews might give you pause. The real question is whether its more consistent, harder-thinking reasoning is worth the steep per-token price for your particular workload.