OpenAI has launched the o1-Pro API, giving developers access to one of its most advanced AI models. The new release produces better responses than previous models by applying more computational power to each request. That extra compute, however, comes at a higher cost, making o1-Pro OpenAI's most expensive API to date.
The o1-Pro model was introduced after repeated developer requests for API access to its capabilities. It supports advanced features such as function calling, structured outputs, and vision. The model accepts both text and image inputs but produces text-only output; audio is not supported. It offers a 200,000-token context window and has a knowledge cutoff of October 2023.
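For developers, calling the model looks much like any other OpenAI API request. The snippet below is a minimal sketch using the official `openai` Python SDK; the use of the Responses interface, the prompt text, and the assumption that an `OPENAI_API_KEY` environment variable is set are illustrative details not covered in the article.

```python
# Minimal sketch of calling o1-pro with the official openai Python SDK.
# Assumes the SDK's Responses interface and an OPENAI_API_KEY environment
# variable; both are assumptions for illustration, not details from the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o1-pro",
    input="Summarize the trade-offs of using a high-compute reasoning model "
          "for a latency-sensitive application.",
)

# o1-pro accepts text (and image) input but returns text only.
print(response.output_text)
```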
With its enhanced reasoning capabilities, the o1-Pro API is priced at $150 per million input tokens and $600 per million output tokens. That makes it far more expensive than other OpenAI models such as o1-mini and o3-mini, which cost $1.10 per million input tokens and $4.40 per million output tokens. Developers should also budget for reasoning tokens, the hidden tokens the model generates while working through complex responses, which are billed at the output-token rate.
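To see what those rates mean in practice, the sketch below estimates the bill for a single request at the published prices. The token counts are invented purely for illustration, and treating reasoning tokens as output tokens is an assumption about how the hidden tokens are billed.

```python
# Back-of-the-envelope cost estimate for one o1-pro request at the published
# rates ($150 per 1M input tokens, $600 per 1M output tokens). Token counts
# here are hypothetical, chosen only to illustrate the arithmetic; reasoning
# tokens are assumed to be billed at the output-token rate.

INPUT_PRICE_PER_TOKEN = 150 / 1_000_000   # $150 per million input tokens
OUTPUT_PRICE_PER_TOKEN = 600 / 1_000_000  # $600 per million output tokens

prompt_tokens = 3_000           # hypothetical prompt size
visible_output_tokens = 1_200   # hypothetical generated answer
reasoning_tokens = 6_000        # hypothetical hidden reasoning, billed as output

cost = (
    prompt_tokens * INPUT_PRICE_PER_TOKEN
    + (visible_output_tokens + reasoning_tokens) * OUTPUT_PRICE_PER_TOKEN
)
print(f"Estimated cost: ${cost:.2f}")  # ~$4.77 for this single request
```

Even at modest prompt sizes, the hidden reasoning tokens can dominate the bill, which is why cost planning matters more for o1-Pro than for cheaper models.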
Despite its high cost, OpenAI has made o1-Pro available to developers in its paid usage tiers 1-5. ChatGPT Pro subscribers, who pay $200 per month, can already use o1-Pro mode within the platform at no additional charge, though subject to rate limits.
As AI technology advances, OpenAI continues to push the boundaries of innovation, providing developers with cutting-edge tools while also raising questions about cost accessibility. The o1-Pro API sets a new standard for AI-driven applications, but at a premium price.