List of AI News about prompt caching
| Time | Details |
|---|---|
| 2025-11-14 00:55 | OpenAI Launches Advanced API Models and Extended Prompt Caching for Developers: Boosting AI Application Performance in 2025<br>According to Greg Brockman on X (formerly Twitter), OpenAI is prioritizing developer needs by introducing strong new models in its API and extending its prompt caching features (source: x.com/gdb/status/1989135114744573993). These advancements are designed to accelerate AI-powered application development and improve efficiency for businesses building on OpenAI's platform. By providing more robust model options and longer-lived prompt caching, OpenAI enables developers to build faster, more reliable, and more cost-effective AI solutions. The move positions OpenAI to better serve enterprise and startup clients seeking scalable, production-ready generative AI tools. |
| 2025-11-13 19:11 | GPT-5.1 API Release: New Features, Codex Models, and 24-Hour Prompt Caching for Developers<br>According to Sam Altman (@sama), OpenAI has launched GPT-5.1 in the API at the same pricing as GPT-5. The release includes the specialized models gpt-5.1-codex and gpt-5.1-codex-mini, designed for long-running coding tasks, expanding AI-driven software development capabilities. In addition, the prompt caching window has been extended to 24 hours, which can meaningfully reduce cost and latency for businesses whose large-scale AI workflows reuse long prompts. The update offers practical advantages for enterprise AI adoption, enabling more reliable automation and more cost-effective coding solutions (Source: @sama, Nov 13, 2025). A minimal usage sketch follows the table below. |
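Both items center on prompt caching in the OpenAI API. For context, OpenAI's prompt caching has been documented as automatic and prefix-based: when successive requests share the same leading tokens (system instructions, tool definitions, reference text), the shared prefix is reused, the cached portion is billed at a discount, and the cache hit is reported in the response's usage data. The sketch below illustrates that pattern under stated assumptions: the model name "gpt-5.1" is taken from the announcement above, the system-prompt content and the `ask` helper are hypothetical, and the 24-hour retention window and exact token thresholds are as claimed in the sources, not verified here.

```python
# Minimal sketch of observing prompt caching via the OpenAI Chat Completions API.
# Assumptions: the `openai` Python SDK is installed, OPENAI_API_KEY is set, and
# the "gpt-5.1" model (named in the announcement above) is available to your account.
from openai import OpenAI

client = OpenAI()

# Prompt caching is prefix-based: keep the large, stable part of the prompt
# (system instructions, tools, reference documents) identical and at the front,
# and put the per-request content at the end. The content below is hypothetical.
STABLE_SYSTEM_PROMPT = (
    "You are a support assistant for ExampleCo.\n"
    "Policy document:\n" + ("... long, unchanging reference text ...\n" * 200)
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-5.1",
        messages=[
            {"role": "system", "content": STABLE_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    usage = response.usage
    # `cached_tokens` reports how much of the prompt was served from the cache;
    # repeated calls sharing the same prefix should show a non-zero value.
    details = getattr(usage, "prompt_tokens_details", None)
    cached = details.cached_tokens if details else 0
    print(f"prompt tokens: {usage.prompt_tokens}, cached: {cached}")
    return response.choices[0].message.content

if __name__ == "__main__":
    ask("How do I reset my password?")   # first call: cache is cold
    ask("What is your refund policy?")   # second call: shared prefix may be cached
```

The design point the cache rewards is ordering: placing the unchanging context first and the variable question last means consecutive requests share the longest possible prefix, which is what the extended 24-hour retention makes more valuable for intermittent workloads.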