I wish OpenAI would provide more clarity about the Assistants API deprecation. It has been announced that the Assistants API will be sunset in spring of 2026 and replaced by the Responses API, but there have been no further updates on the timeline or migration plan.
Prior to the release of the Responses API, the Assistants API was the best way (for our use cases, at least) to interact with OpenAI's API, so hopefully some clarity on the plan for it comes soon, now that the Responses API has gained some of the features it was previously missing.
It's great to see more and more adoption of MCP. I'm not sure it's the most bulletproof protocol, but it feels like it's in a strong lead, especially with OpenAI's support.
I've been using Codex for the last 24 hours, and background mode boosts your output. You can have Codex work on multiple features asynchronously. I had it building a database model alongside frontend authentication, and it did both pretty well.
Wow background mode looks awesome. I'm excited to work that into our UX for people. Live Q&A is such a dead interface at this point.
Reasoning summaries also look great. Anything that provides extra explainability is a win in my book.
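For reference, a minimal sketch of what a background-mode request with a reasoning summary might look like. The field names (`background`, `reasoning.summary`) follow the announcement; the model name and everything else here are assumptions, and no network call is made:

```python
# Sketch: build the kwargs for client.responses.create(...) with background
# mode and a reasoning summary enabled. Field names per the announcement;
# the model name "o3" is an assumption for illustration.

def build_background_request(model: str, prompt: str) -> dict:
    return {
        "model": model,
        "input": prompt,
        "background": True,                # run async on OpenAI's side
        "reasoning": {"summary": "auto"},  # request a reasoning summary
    }

kwargs = build_background_request("o3", "Build the database model")
```

With `background=True`, the create call returns a queued response object right away, and you'd poll `client.responses.retrieve(response.id)` until its status reports completed.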
List of remote MCP servers to use here: https://github.com/jaw9c/awesome-remote-mcp-servers
It was never really clear what the difference between the Chat Completions and Responses APIs was. Anyone know the difference?
Reasoning models can now call tools during the reasoning process.
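Roughly (hedging, since both APIs keep evolving): Chat Completions is stateless, so you resend the full message history each turn, while Responses can keep conversation state server-side and exposes built-in tools. A sketch of the two request shapes, with parameter names from the public Python SDK:

```python
# Chat Completions: stateless; the caller resends the whole conversation
# as a messages list on every turn.
chat_kwargs = {
    "model": "gpt-4.1",
    "messages": [{"role": "user", "content": "What changed in the new API?"}],
}

# Responses: the server can persist conversation state; a follow-up turn can
# pass previous_response_id instead of replaying history, and built-in tools
# (web search, file search, remote MCP) go in a tools list.
responses_kwargs = {
    "model": "gpt-4.1",
    "input": "What changed in the new API?",
    "previous_response_id": None,  # set to a prior response.id to continue
}
```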
I'm quite surprised they're actually going with hosted MCP versus just implementing the MCP server locally and interacting with the API.
> Encrypted reasoning items: Customers eligible for Zero Data Retention (ZDR) can now reuse reasoning items across API requests
So, so weird that they still don't want you to see their models' reasoning process, to the point that even highly trusted organizations with ZDR contracts only get them in a black-box encrypted form. Gemini has no issue showing its work. Why can't OpenAI?
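For what it's worth, the documented mechanics: a ZDR org sets `store=False` and requests `reasoning.encrypted_content` via `include`, then replays the encrypted items on the next request so the model can reuse its own reasoning without OpenAI retaining anything. A sketch, with the model name and message shape assumed:

```python
# Sketch: build a ZDR-friendly Responses API request. Nothing is stored
# server-side (store=False); reasoning items come back encrypted and are
# replayed as input on the following turn.

def build_zdr_request(model: str, user_input: str, encrypted_items=None) -> dict:
    items = list(encrypted_items or [])  # encrypted reasoning from prior turns
    items.append({"role": "user", "content": user_input})
    return {
        "model": model,
        "input": items,
        "store": False,
        "include": ["reasoning.encrypted_content"],
    }

req = build_zdr_request("o4-mini", "Continue the analysis")
```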