$ timeahead_
★ TOP STORY · [MTR] · Release · 2d ago

The Download: introducing the Nature issue

Plus: Trump signaled he’s open to reversing the Anthropic ban. This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology. Introducing: the Nature issue. When we talk about “nature,” we usually mean something untouched by humans. But little of that world exists today. From microplastics in rainforest wildlife to artificial light in the Arctic Ocean, human influence now reaches every corner of Earth. In this context, what even is nature? And should we employ technology to try to make the world more “natural”? In our new Nature issue, MIT Technology Review grapples with these questions. We investigate birds that can’t sing, wolves that aren’t wolves, and grass that isn’t grass. We look for the meaning of life under Arctic ice,…

MIT Technology Review · read →
▲ trending · last 48h · view all →
[ANT] Anthropic News · 1 article · visit →
46d ago
Mar 11, 2026 · Announcements · Introducing The Anthropic Institute
We’re launching The Anthropic Institute, a new effort to confront the most significant challenges that powerful AI will pose to our societies. The Anthropic Institute will draw on research from across Anthropic to provide information that other researchers and the public can use during our transition to a world containing much more powerful AI systems. In the five years since Anthropic began, AI progress has moved incredibly quickly. It took us two years to release our first commercial model, and just three more to develop models that can discover severe cybersecurity vulnerabilities, take on a wide range of real work, and even begin to accelerate the pace of AI development itself. We predict that far more dramatic progress will follow in the next two years. One of our company’s core convictions is that AI development is…
46d · Release
[ATA] Ars Technica AI · 5 articles · visit →
4d ago
Florida probes ChatGPT role in mass shooting. OpenAI says bot "not responsible."
OpenAI now faces a criminal probe after ChatGPT advised a gunman ahead of a mass shooting at a university in Florida, where two people were killed and six were wounded last year. In a press release, Florida Attorney General James Uthmeier confirmed that the investigation into OpenAI’s potential criminal liability was launched after reviewing shocking chat logs between ChatGPT and an account linked to the suspected gunman, Phoenix Ikner. The 20-year-old Florida State University student is currently awaiting trial “on multiple charges of murder and attempted murder,” Politico reported. At a press conference, Uthmeier revealed that the logs showed that ChatGPT provided “significant advice” before Ikner allegedly “committed such heinous crimes.” The attorney general emphasized that under Florida’s aiding and abetting laws, “if ChatGPT were a person,” it too “would be facing charges for murder.” For OpenAI, the probe will…
4d · Release · #gpt · by Ashley Belanger
5d ago
Anthropic's Mythos AI model sparks fears of turbocharged hacking
Anthropic’s new Mythos AI model is raising concern among governments and companies that it could outpace current cybersecurity defenses, turbocharge hacking, and expose weaknesses faster than they can be fixed. The San Francisco-based startup released a cyber-focused model this month, which has shown the ability to detect software flaws faster than humans but has also demonstrated that it can generate the exploits needed to take advantage of them. In one alarming case, the Mythos model showed it could break out of a secure digital environment to contact an Anthropic worker and publicly reveal software glitches, overriding the intentions of its human makers. This week, OpenAI also released its own advanced cyber model with similar capabilities. The developments have sent senior international financial officials and government ministers around the world scrambling to understand the dangers, in some cases seeking access to the new…
5d · Release · by Cristina Criddle, Financial Times
8d ago
Meta's AI spending spree is helping make its Quest headsets more expensive
The rising costs of RAM and other computing components are pushing up the price of Meta’s Quest VR headsets, which the company says will increase by $50–$100 (about 12–20 percent) starting on April 19. In announcing that price increase on Thursday, the company cited the “global surge in the price of critical components—specifically memory chips—[that] is impacting almost every category of consumer electronics, including VR.” But unlike many of the other tech companies that have been pushed into similar price increases in recent months, Meta’s own spending priorities are at least partly to blame for the rising prices of those components. The company’s recent hard pivot to the “AI superintelligence” race has directly contributed to the conditions that are now making its own Quest headsets more expensive. Spending like a drunk sailor In January, Meta announced that it plans to…
8d · Release · by Kyle Orland
10d ago
Google releases new apps for Windows and MacOS
Most people access Google’s search and AI products through a browser, but you’ve got some new options today. Google has been testing a Windows search app for some months, and it’s now officially available. Over on the Apple side of the fence, Google has focused its efforts on designing a native Gemini app. That one is also available widely today with the same features you get in the Gemini web interface. The “Google app for desktop” first arrived on Windows in a beta form last September. It was pretty rough at first, and Google couldn’t even update the app’s early versions, forcing users to uninstall and reinstall new builds. That won’t be a concern with the official release, which brings assorted search capabilities to your Windows PC. You can open the Google app by pressing Alt + Space at any…
10d · Release · by Ryan Whitwam
12d ago
Meta spins up AI version of Mark Zuckerberg to engage with employees
Meta is building an artificial intelligence version of Mark Zuckerberg that can engage with employees in his stead, as part of a broader push to remake the Big Tech company around AI. The $1.6 trillion group has been working on developing photorealistic, AI-powered 3D characters that users can interact with in real time, according to four people familiar with the matter. The company recently began prioritizing a Zuckerberg AI character, three of the people said. The Meta chief is personally involved in training and testing his animated AI, which could offer conversation and feedback to employees, according to one person. They added that the character was being trained on the billionaire’s mannerisms, tone, and publicly available statements, as well as his own recent thinking on company strategies, so that employees might feel more connected to the founder through interactions with…
12d · Release · #training · by Hannah Murphy, Financial Times
[AWS] AWS Machine Learning Blog · 2 articles · visit →
8d ago
Introducing granular cost attribution for Amazon Bedrock
As AI inference grows into a significant share of cloud spend, understanding who and what are driving costs is essential for chargebacks, cost optimization, and financial planning. Today, we’re announcing granular cost attribution for Amazon Bedrock inference. Amazon Bedrock now automatically attributes inference costs to the IAM principal that made the call. An IAM principal can be an IAM user, a role assumed by an application, or a federated identity from a provider like Okta or Entra ID. Attribution flows to your AWS Billing and works across models, with no resources to manage and no changes to your existing workflows. With optional cost allocation tags, you can aggregate costs by team, project, or custom dimension in AWS Cost Explorer and AWS Cost and Usage Reports (CUR 2.0). In this post, we…
8d · Release · by Ba'Carri Johnson
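As a rough illustration of the chargeback idea the post describes, a rollup by principal or tag can be sketched in a few lines of Python. The record layout below is a hypothetical stand-in, not the actual CUR 2.0 schema:

```python
from collections import defaultdict

# Hypothetical usage records — the real Cost and Usage Report schema differs;
# this only illustrates rolling inference spend up by principal or by tag.
records = [
    {"principal": "role/chatbot-app", "team": "support",  "cost_usd": 12.40},
    {"principal": "user/alice",       "team": "research", "cost_usd": 3.10},
    {"principal": "role/chatbot-app", "team": "support",  "cost_usd": 7.60},
]

def attribute_costs(records, key):
    """Sum spend per value of `key` (e.g. "principal" or "team")."""
    totals = defaultdict(float)
    for r in records:
        totals[r[key]] += r["cost_usd"]
    return dict(totals)

by_principal = attribute_costs(records, "principal")  # role/chatbot-app → 20.0
by_team = attribute_costs(records, "team")
```

The same grouping generalizes to any custom dimension carried on the records, which is what the optional cost allocation tags enable.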
10d ago
Create rich, custom tooltips in Amazon Quick Sight
Amazon Quick Sight, the business intelligence (BI) capability of Amazon Quick, is a unified BI service. It provides modern interactive dashboards, natural language querying, pixel-perfect reports, machine learning (ML) insights, and embedded analytics at scale. Amazon Quick brings together AI agents for business insights, research, and automation in one integrated experience, helping you work smarter and faster while maintaining security and access policies. Today, we’re announcing sheet tooltips in Amazon Quick Sight. Dashboard authors can now design custom tooltip layouts using free-form layout sheets. These layouts combine charts, key performance indicator (KPI) metrics, text, and other visuals into a single tooltip that renders dynamically when readers hover over data points. Sheet tooltips work with most chart types, including tables and pivot tables, and authors can reuse the same tooltip sheet…
10d · Release · by Meshan Khosla
[FB] fast.ai Blog · 1 article · visit →
429d ago
fasttransform: Reversible Pipelines Made Simple
Introducing fasttransform, a Python library that makes data transformations reversible and extensible through the power of multiple dispatch. By Rens Dimmendaal, Hamel Husain, & Jeremy Howard. Published February 20, 2025. “How did this image get misclassified?” If you’ve ever trained a machine learning model, you know what comes next: the frustrating journey of trying to understand what your model actually saw. You dig through layers of transformations - normalizations, resizes, augmentations - only to realize you’ll need to write inverse functions just to see your data again. It’s so painful that many of us skip it altogether, debugging our models based on abstract numbers rather than actual data. Or as OpenAI’s Greg Brockman puts it: Let’s…
429d · Release · by Rens Dimmendaal, Hamel Husain, & Jeremy Howard
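The core idea — pair each transform's forward step with an inverse, and undo the pipeline by applying inverses in reverse order — can be sketched in plain Python. This is an illustrative toy under that assumption, not fasttransform's actual API (which builds on multiple dispatch):

```python
# Toy reversible pipeline: every transform defines encodes/decodes as a pair,
# so the pipeline can map data forward and then recover the original input.
class Scale:
    def __init__(self, factor): self.factor = factor
    def encodes(self, x): return x * self.factor
    def decodes(self, x): return x / self.factor

class Shift:
    def __init__(self, offset): self.offset = offset
    def encodes(self, x): return x + self.offset
    def decodes(self, x): return x - self.offset

class Pipeline:
    def __init__(self, tfms): self.tfms = tfms
    def __call__(self, x):
        for t in self.tfms:            # forward: apply encodes in order
            x = t.encodes(x)
        return x
    def decode(self, x):
        for t in reversed(self.tfms):  # inverse: apply decodes in reverse
            x = t.decodes(x)
        return x

pipe = Pipeline([Scale(255.0), Shift(-128.0)])
y = pipe(0.5)       # 0.5 * 255 - 128 = -0.5
x = pipe.decode(y)  # recovers the original 0.5
```

With this structure, "seeing your data again" is just `pipe.decode(...)` instead of hand-written inverse functions.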
[GDM] Google DeepMind Blog · 2 articles · visit →
86d ago
Hear more about interactive world models in our latest podcast.
The latest episode of the Google AI: Release Notes podcast focuses on Genie 3, a real-time, interactive world model. Host Logan Kilpatrick chats with Diego Rivas, Shlomi Fruchter, and Jack Parker-Holder from the Project Genie team to discuss the evolution from passive video generation to playable, simulated environments. They dive deep into the technical challenges of maintaining world consistency and memory, the experience of “stepping inside” a 2D image, and the vision for world models as a critical training ground for future AI agents. Watch the episode below, or listen to the podcast on Apple Podcasts or Spotify.
89d ago
We’re announcing the 12 recipients of our AI for Science fund
Science is the cornerstone of human progress. Yet, while the world’s problems are becoming increasingly complex, the pace of new discovery is actually slowing. To help overcome this, Google.org created a $20 million AI for Science fund to support academic, nonprofit and startup organizations using AI to tackle the world’s most complex scientific challenges. We're equipping researchers with the right resources to use AI to unlock the impossible and achieve in years what used to take decades. Today, we’re announcing the twelve recipients of the AI for Science fund. These teams aren't just using AI to synthesize and process data; they are using it to break through the most significant obstacles across scientific domains like health, agriculture and biodiversity to turn discoveries into real-world solutions. Each of these organizations…
89d · Release · by Maggie Johnson
[HF] Hugging Face Blog · 24 articles · visit →
25d ago
TRL v1.0: Post-Training Library Built to Move with the Field
TRL now implements more than 75 post-training methods. But coverage isn’t the goal by itself. What matters is making these methods easy to try, compare, and actually use in practice. The design of the library wasn’t decided upfront. It is the result of years of iteration — the first commit goes back more than six years — and it has been shaped by everything the field threw at it: new algorithms, new models, shifting paradigms. Over time, this pressure forced the codebase toward a very specific design. Parts of it might look unusual at first, but like in many evolutionary codebases, they exist for a reason. TRL is built for a field that doesn’t sit still. So the question is not how to design the perfect abstraction. It is how…
25d · Release · #training
46d ago
Introducing Storage Buckets on the Hugging Face Hub
Storage Buckets are built exactly for this: mutable, S3-like object storage you can browse on the Hub, script from Python, or manage with the hf CLI. And because they are backed by Xet, they are especially efficient for ML artifacts that share content across files. Why we built Buckets: Git starts to feel like the wrong abstraction pretty quickly when you're dealing with: - Training clusters writing checkpoints and optimizer states throughout a run - Data pipelines processing raw datasets iteratively - Agents storing traces, memory, and shared knowledge graphs The storage need in all these cases is the same: write fast, overwrite when needed, sync directories, remove stale files, and keep things moving. A Bucket is a non-versioned storage container on the Hub. It lives under a user or organization namespace,…
46d · Release · #rag
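Why content-addressed backing helps for artifacts that share content can be shown with a toy deduplicator. The fixed-size chunking and hashing below are arbitrary choices for illustration, not Xet's actual algorithm:

```python
import hashlib

# Toy content dedup: split each blob into chunks, hash them, and store each
# unique chunk once. Files that share content (e.g. successive checkpoints)
# then cost far less than their combined size.
def chunks(data: bytes, size: int = 4):
    return [data[i:i + size] for i in range(0, len(data), size)]

def dedup_ratio(files):
    total, store = 0, set()
    for blob in files:
        for c in chunks(blob):
            total += 1
            store.add(hashlib.sha256(c).hexdigest())
    return len(store) / total  # < 1.0 whenever files share chunks

# Two "checkpoints" that differ only in their tail share most chunks:
ratio = dedup_ratio([b"AAAABBBBCCCC", b"AAAABBBBDDDD"])  # 4 unique of 6 chunks
```

Real systems use content-defined (variable-size) chunk boundaries so insertions don't shift every subsequent chunk, but the storage-saving principle is the same.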
51d ago
Introducing Modular Diffusers - Composable Building Blocks for Diffusion Pipelines
Modular Diffusers replaces the DiffusionPipeline class with a more flexible, composable alternative. In this post, we'll walk through how Modular Diffusers works — from the familiar API to run a modular pipeline, to building fully custom blocks and composing them into your own workflow. We'll also show how it integrates with Mellon, a node-based visual workflow interface that you can use to wire Modular Diffusers blocks together. Quickstart: here is a simple example of how to run inference with FLUX.2 Klein 4B using pre-built blocks:

import torch
from diffusers import ModularPipeline

# Create a modular pipeline - this only defines the workflow, model weights have not been loaded yet
pipe = ModularPipeline.from_pretrained("black-forest-labs/FLUX.2-klein-4B")

# Now load the model weights — configure dtype, quantization, etc in this step
pipe.load_components(torch_dtype=torch.bfloat16)
pipe.to("cuda")
#…
51d · Release
79d ago
Introducing SyGra Studio
What Studio lets you do:
- Configure and validate models with guided forms (OpenAI, Azure OpenAI, Ollama, Vertex, Bedrock, vLLM, custom endpoints).
- Connect Hugging Face, file-system, or ServiceNow data sources and preview rows before execution.
- Configure nodes by selecting models, writing prompts (with auto-suggested variables), and defining outputs or structured schemas.
- Design downstream outputs using shared state variables and Pydantic-powered mappings.
- Execute flows end-to-end and review generated results instantly with node-level progress.
- Debug with inline logs, breakpoints, Monaco-backed code editors, and auto-saved drafts.
- Monitor per-run token cost, latency, and guardrail outcomes with execution history stored in .executions/ .
Let’s walk through this experience step by step. Step 1: Configure the data source. Open Studio, click Create Flow, and Start/End nodes appear automatically. Before adding anything else: - Choose a connector (Hugging…
79d · Release
86d ago
Introducing Daggr: Chain apps programmatically, inspect visually
Background: If you've built AI applications that combine multiple models or processing steps, you know the pain: chaining API calls, debugging pipelines, and losing track of intermediate results. When something goes wrong in step 5 of a 10-step workflow, you often have to re-run everything just to see what happened. Most developers either build fragile scripts that are hard to debug or turn to heavy orchestration platforms designed for production pipelines—not rapid experimentation. We've been working on Daggr to solve problems we kept running into when building AI demos and workflows: Visualize your code flow: Unlike node-based GUI editors, where you drag and connect nodes visually, Daggr takes a code-first approach. You…
86d · Release
95d ago
Introducing Waypoint-1: Real-time interactive video diffusion from Overworld
Waypoint-1 weights on the Hub: Waypoint-1-Small, and Waypoint-1-Medium (coming soon!). Try out the model on Overworld Stream: https://overworld.stream What is Waypoint-1? Waypoint-1 is Overworld’s real-time, interactive video diffusion model, controllable and prompted via text, mouse, and keyboard. You can give the model some frames, run the model, and have it create a world you can step into and interact with. The backbone of the model is a frame-causal rectified flow transformer trained on 10,000 hours of diverse video game footage paired with control inputs and text captions. Waypoint-1 is a latent model, meaning that it is trained on compressed frames. The standard among existing world models has become taking pre-trained video models and fine-tuning them with brief and simplified control inputs. In contrast, Waypoint-1 is trained from the get-go with a focus on interactive…
95d · Release · #multimodal
141d ago
Introducing swift-huggingface: The Complete Swift Client for Hugging Face
You can start using it today as a standalone package, and it will soon integrate into swift-transformers as a replacement for its current HubApi implementation. The Problem: When we released swift-transformers 1.0 earlier this year, we heard loud and clear from the community: - Downloads were slow and unreliable. Large model files (often several gigabytes) would fail partway through with no way to resume. Developers resorted to manually downloading models and bundling them with their apps — defeating the purpose of dynamic model loading. - No shared cache with the Python ecosystem. The Python transformers library stores models in ~/.cache/huggingface/hub . Swift apps downloaded to a different location with a different structure. If you'd already downloaded a model using the Python CLI, you'd download it again for your Swift app. - Authentication…
141d · Release
156d ago
Introducing AnyLanguageModel: One API for Local and Remote LLMs on Apple Platforms
Developers building AI-powered apps typically take a hybrid approach, adopting some combination of: - Local models using Core ML or MLX for privacy and offline capability - Cloud providers like OpenAI or Anthropic for frontier capabilities - Apple's Foundation Models as a system-level fallback Each comes with different APIs, different requirements, different integration patterns. It's a lot, and it adds up quickly. When I interviewed developers about building AI-powered apps, friction with model integration came up immediately. One developer put it bluntly: I thought I'd quickly use the demo for a test and maybe a quick and dirty build but instead wasted so much time. Drove me nuts. The cost to experiment is high, which discourages developers from discovering that local, open-source models might actually work great for…
156d · Release · #local
180d ago
huggingface_hub v1.0: Five Years of Building the Foundation of Open Machine Learning
huggingface_hub has reached v1.0 - a milestone that marks the library's maturity as the Python package powering 200,000 dependent libraries and providing core functionality for accessing over 2 million public models, 0.5 million public datasets, and 1 million public Spaces. This release introduces breaking changes designed to support the next decade of open machine learning, driven by a global community of almost 300 contributors and millions of users. 🚀 We highly recommend upgrading to v1.0 to benefit from major performance improvements and new capabilities: pip install --upgrade huggingface_hub Major changes in this release include the migration to httpx as the backend library, a completely redesigned hf CLI (which replaces the deprecated huggingface-cli ) featuring a Typer-based interface with a significantly expanded feature set, and full adoption of hf_xet…
180d · Release
226d ago
Introducing the Palmyra-mini family: Powerful, lightweight, and ready to reason!
The team at WRITER is thrilled to announce the release of three new open models in the Palmyra-mini family. These models are designed to be powerful, lightweight, and highly performant for their size (1.5B to 1.7B), making them ideal for a wide range of applications with efficient inference. - palmyra-mini: A powerful, lightweight non-thinking base model. - palmyra-mini-thinking-a: A specialized variant optimized for complex reasoning and logic. - palmyra-mini-thinking-b: Another specialized variant that excels at mathematical equations and reasoning. The "thinking" models have been trained with a Chain of Thought (CoT) approach, which improves their reasoning abilities. We're excited to see what the community will build with these new models! GGUF and MLX quantizations are also available for your convenience. Benchmark Highlights: palmyra-mini: Our non-reasoning improved base model, delivering a…
226d · Release
248d ago
NVIDIA Releases 6 Million Multi-Lingual Reasoning Dataset
NVIDIA continues releasing permissive datasets in support of the open ecosystem, this time with a 6-million-sample multilingual reasoning dataset. Continuing the success of the recent Nemotron Post-Training Dataset v1 release used in the Llama Nemotron Super model, and our Llama Nemotron Post-Training Dataset release earlier this year, we’re excited to release the reasoning dataset translated into five target languages: French, Spanish, German, Italian, and Japanese. The newly released NVIDIA Nemotron Nano 2 9B brings these capabilities to the edge with leading accuracy and efficiency, with a hybrid Transformer–Mamba architecture and a configurable thinking budget—so you can dial accuracy, throughput, and cost to match your real‑world needs. Model Highlights (TL;DR) - Model size: 9B parameters - Architecture: Hybrid Transformer–Mamba (Mamba‑2 + a small number of attention layers) for higher throughput at similar accuracy to Transformer‑only peers -…
248d · Release · #gpu
260d ago
Introducing AI Sheets: a tool to work with datasets using open AI models!
Hugging Face AI Sheets is a new, open-source tool for building, enriching, and transforming datasets using AI models with no code. The tool can be deployed locally or on the Hub. It lets you use thousands of open models from the Hugging Face Hub via Inference Providers or local models, including gpt-oss from OpenAI! Useful links: Try the tool for free (no installation required): https://huggingface.co/spaces/aisheets/sheets Install and run locally: https://github.com/huggingface/sheets What is AI Sheets? AI Sheets is a no-code tool for building, transforming, and enriching datasets using (open) AI models. It’s tightly integrated with the Hub and the open-source AI ecosystem. AI Sheets uses an easy-to-learn user interface, similar to a spreadsheet. The tool is built around quick experimentation, starting with small datasets before running long/costly data…
260d · Release
282d ago
Five Big Improvements to Gradio MCP Servers
To that end, here are some of the big improvements we've added to Gradio MCP servers as of version 5.38.0. Seamless Local File Support: If you've tried to use a remote Gradio MCP server that takes a file as input (image, video, audio), you've probably encountered this error: This happens because the Gradio server is hosted on a different machine, meaning any input files must be accessible via a public URL so they can be downloaded remotely. While many ways exist to host files online, they all add a manual step to your workflow. In the age of LLM agents, shouldn't we expect them to handle this for you? Gradio now includes a "File Upload" MCP server that agents can use to upload files directly to your Gradio application. If any tools in…
361d ago
Introducing AutoRound: Intel’s Advanced Quantization for LLMs and VLMs
What is AutoRound? AutoRound is a weight-only post-training quantization (PTQ) method developed by Intel. It uses signed gradient descent to jointly optimize weight rounding and clipping ranges, enabling accurate low-bit quantization (e.g., INT2 - INT8) with minimal accuracy loss in most scenarios. For example, at INT2, it outperforms popular baselines by up to 2.1x in relative accuracy. The image below provides an overview of the core algorithm in AutoRound. For more details, please refer to our paper. Despite its strong performance, AutoRound is fast and lightweight — quantizing a 72B model takes just 37 minutes on an A100 GPU under light mode. It also supports mixed-bit tuning, lm-head quantization, GPTQ/AWQ/GGUF format exporting, and flexible tuning recipes. Key Advantages: Superior Accuracy at Low Bit Widths. AutoRound delivers highly promising results, particularly in low-bit quantization scenarios. Evaluations across a variety of…
361d · Release
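For context, the fixed round-to-nearest baseline that AutoRound improves upon (by *learning* the rounding and clipping with signed gradient descent) looks roughly like this. A toy symmetric INT4 sketch, not AutoRound's API:

```python
# Toy symmetric round-to-nearest weight quantization — the naive baseline.
# AutoRound instead optimizes the rounding decisions and clipping ranges.
def quantize(weights, bits=4):
    qmax = 2 ** (bits - 1) - 1                 # 7 for INT4
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.12, -0.53, 0.31, 0.02]
q, s = quantize(w)
w_hat = dequantize(q, s)
# Round-to-nearest error is bounded by half the quantization step:
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
```

Lowering `bits` grows `scale` and hence the error bound, which is why naive rounding degrades sharply at INT2 and learned rounding pays off most there.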
374d ago
17 Reasons Why Gradio Isn't Just Another UI Library
Introduction: "Oh, Gradio? That's a Python library for building UIs, right?" We hear this a lot, and while Gradio does let you create interactive UIs with minimal Python code, calling Gradio a "UI library" misses the bigger picture! Gradio is more than a UI library—it's a framework for interacting with machine learning models through both UIs and APIs, providing strong guarantees around performance, security, and responsiveness. In this article, we'll introduce features that are unique to Gradio and explain how they are essential for building powerful AI applications. We'll share links to Gradio's official documentation and release notes, so you can explore further if you're curious. 1. Universal API Access: All Gradio apps are also APIs! When you build a Gradio app, you can also use Gradio's robust client libraries for…
374d · Release
374d ago
Introducing HELMET: Holistically Evaluating Long-context Language Models
Paper: https://arxiv.org/abs/2410.02694 Website: https://princeton-nlp.github.io/HELMET Code & Data: https://github.com/princeton-nlp/HELMET Since we first released HELMET last October, there has been more development on long-context language models than ever before, and we are thrilled to see the adoption of HELMET by the community, such as Microsoft's Phi-4 and AI21's Jamba 1.6. After the initial release, we have added more models to our evaluation suite and conducted additional analyses. We are excited to share our new results and present HELMET at ICLR 2025! In this blog, we will describe the construction of HELMET, our key findings, and how practitioners can use HELMET to differentiate between various LCLMs in future research and applications. Finally, we will conclude with a quickstart guide for using HELMET with Hugging Face. Evaluating long-context language models is challenging but important. From summarizing numerous legal…
374d · Release
395d ago
Open R1: Update #4
Welcome DeepSeek-V3 0324. This week, a new model from DeepSeek silently landed on the Hub. It’s an updated version of DeepSeek-V3, the base model underlying the R1 reasoning model. There isn’t much information shared yet on this new model, but we do know a few things! What we know so far: The model has the same architecture as the original DeepSeek-V3 and now also comes with an MIT license, while the previous V3 model had a custom model license. The focus of this model release was on improving the instruction following as well as code and math capabilities. Let’s have a look! How good is it? The DeepSeek team has evaluated the model on a range of math and coding tasks and we can see the model’s strong capabilities compared to other frontier models: Clearly, the…
395d · Release
397d ago
Introducing Gradio's new Dataframe!
The gr.Dataframe component is one of our most popular components; we've seen it used in a variety of awesome apps, like leaderboards, dashboards, and interactive visualisations. Although we hadn't made any changes to the dataframe in quite some time, our backlog of issues had been growing, and some improvements had been in demand for a while. Well — we’re now super excited to release a host of new updates to Gradio’s dataframe component. Over the last 6 weeks, we’ve closed over 70 dataframe issues - including bugs, improvements and enhancements. 1. Multi-Cell Selection: You can select multiple cells at once! Copy or delete values across your selection with ease. 2. Row Numbers & Column Pinning: Add row number columns and keep critical columns in view while navigating wide datasets using the pinned_columns parameter. No more losing track…
397d · Release
410d ago
Open R1: Update #3
Over the last few weeks, we have focused our efforts on reproducing the competitive programming (code reasoning) aspects of the DeepSeek-R1 recipe. In this post, we are excited to share: - The construction of CodeForces-CoTs: a dataset of nearly 100k high-quality samples distilled from R1 to produce solutions in C++ and Python. - The IOI benchmark: a new benchmark of challenging problems from the 2024 International Olympiad in Informatics (IOI). - OlympicCoder: two fine-tuned 7B and 32B code models that outperform closed-source frontier models like Claude 3.7 Sonnet on IOI problems. Here’s an overview of how the OlympicCoder models stack up against various instruction fine-tuned and reasoning models. We find that training models on CodeForces-CoTs produces top-tier performance, with OlympicCoder-32B outperforming all open-weight models we tested, including some that are over 100x larger 🤯. Read on…
410d · Release
428d ago
SigLIP 2: A better multilingual vision language encoder
SigLIP 2: A better multilingual vision language encoder TL;DR Today Google releases a new and better family of multilingual vision-language encoders, SigLIP 2. The authors have extended the training objective of SigLIP (sigmoid loss) with additional objectives for improved semantic understanding, localization, and dense features. SigLIP 2 models outperform the older SigLIP ones at all model scales in core capabilities, including zero-shot classification, image-text retrieval, and transfer performance when extracting visual representations for Vision-Language Models (VLMs). A cherry on top is the dynamic resolution (naflex) variant. This is useful for downstream tasks sensitive to aspect ratio and resolution. Here is a list of all the models released: Introduction Vision encoders are simple - they take an image, encode it into a representation, and that representation is used for downstream tasks like classification, object detection, image segmentation, and more vision…
429d ago
SmolVLM2: Bringing Video Understanding to Every Device
SmolVLM2: Bringing Video Understanding to Every Device TL;DR: SmolVLM can now watch 📺 with even better visual understanding SmolVLM2 represents a fundamental shift in how we think about video understanding - moving from massive models that require substantial computing resources to efficient models that can run anywhere. Our goal is simple: make video understanding accessible across all devices and use cases, from phones to servers. We are releasing models in three sizes (2.2B, 500M and 256M), MLX ready (Python and Swift APIs) from day zero. We've made all models and demos available in this collection. Want to try SmolVLM2 right away? Check out our interactive chat interface where you can test visual and video understanding capabilities of SmolVLM2 2.2B through a simple, intuitive interface. Table of Contents - SmolVLM2: Bringing Video Understanding to Every Device Technical Details We are introducing…
429dRelease#multimodal
431d ago
Introducing Three New Serverless Inference Providers: Hyperbolic, Nebius AI Studio, and Novita 🔥
Introducing Three New Serverless Inference Providers: Hyperbolic, Nebius AI Studio, and Novita 🔥 These partners join the ranks of our existing providers, including Together AI, Sambanova, Replicate, fal and Fireworks.ai. The new partners enable a swath of new models: DeepSeek-R1, Flux.1, and many others. Find all the models supported by them below: We're quite excited to see what you'll build with these new providers! How it works In the website UI - In your user account settings, you are able to: - Set your own API keys for the providers you’ve signed up with. If no custom key is set, your requests will be routed through HF. - Order providers by preference. This applies to the widget and code snippets in the model pages. - As mentioned, there are two modes when calling Inference APIs: - Custom key (calls go…
431dRelease#inference
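The two calling modes described in the entry above (a custom provider key versus routing through HF with your HF token) amount to a simple routing rule. Here is a minimal sketch in plain Python; the function name, arguments, and return shape are hypothetical illustrations, not Hugging Face's actual implementation:

```python
def resolve_route(provider: str, custom_keys: dict, hf_token: str) -> dict:
    """Pick the API key and billing path for a serverless inference call.

    Hypothetical sketch of the behavior described in the post: if the user
    has saved their own key for this provider, the call goes directly to
    the provider; otherwise it is routed through HF using the HF token.
    """
    if provider in custom_keys:
        # Custom-key mode: calls (and billing) go straight to the provider.
        return {"key": custom_keys[provider], "routed_via_hf": False}
    # Default mode: no custom key set, so the request is routed through HF.
    return {"key": hf_token, "routed_via_hf": True}


# A custom Novita key is configured, so that provider is called directly...
direct = resolve_route("novita", {"novita": "nv-123"}, "hf_abc")
# ...while a provider without a saved key falls back to HF routing.
routed = resolve_route("hyperbolic", {"novita": "nv-123"}, "hf_abc")
print(direct["routed_via_hf"], routed["routed_via_hf"])  # False True
```

In the real product this choice is made from the provider keys and preference order saved in your account settings, and it applies both to the model-page widgets and to the generated code snippets.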
439d ago
Open R1: Update #2
Open R1: Update #2 We are now two weeks into the Open R1 project which aims to reconstruct the missing pieces of DeepSeek R1—specifically, the training pipeline and synthetic data. In this post, we are happy to share the construction of OpenR1-Math-220k: our first large-scale dataset for mathematical reasoning! We also take a look at some exciting developments from the community towards curating small, high-quality datasets for fine-tuning, along with insights into how to control the length of the chain-of-thought from reasoning models at both train-time and inference-time. Let’s dive in! OpenR1-Math-220k dataset One of the key advantages of DeepSeek R1 is its ability to transfer advanced reasoning capabilities to smaller models through distillation. The DeepSeek team demonstrated this by generating 600k reasoning traces and fine-tuning a series of Qwen and Llama models, showing that direct distillation from R1 can…
439dRelease
445d ago
π0 and π0-FAST: Vision-Language-Action Models for General Robot Control
π0 and π0-FAST: Vision-Language-Action Models for General Robot Control Explore the model collection and the PyTorch Version of the model in our repository: Huggingface collection of Pi0 models | Huggingface collection of Pi0+FAST models | LeRobot repo Introduction Robert Heinlein suggests that a well-rounded person should be capable of handling a wide range of tasks—both intellectual and physical—rather than being narrowly specialized in one field. Drawing a parallel between a well-rounded person and machine intelligence: AI systems vary widely, but human intelligence excels in versatility—adapting to tasks, environments, and surprises. While large language and vision-language models (LLMs, VLMs) show promise, they lack interaction with the physical world. To bridge this gap, we need models trained on robotic data. Generalist robot models can enhance adaptability, using diverse data to improve generalization and robustness. Instead of training on isolated tasks, pre-training on…
[MTR]MIT Technology Review· 1 articlesvisit →
3d ago
The Download: introducing the 10 Things That Matter in AI Right Now
The Download: introducing the 10 Things That Matter in AI Right Now Plus: An unauthorized group has reportedly accessed Anthropic’s Mythos. This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology. Introducing: 10 Things That Matter in AI Right Now What actually matters in AI right now? It’s getting harder to tell amid the constant launches, hype, and warnings. To cut through the noise, MIT Technology Review’s reporters and editors have distilled years of analysis into a new essential guide: the 10 Things That Matter in AI Right Now. The list builds on our annual 10 Breakthrough Technologies, but takes a wider view of the ideas, topics, and research shaping AI, spotlighting the trends and breakthroughs shaping the world. We’ll be unpacking one item from the…
3dReleaseby Thomas Macaulay
[NB]n8n Blog· 1 articlesvisit →
87d ago
Introducing Chat Hub
If you’ve been following the rise of AI in the workplace, you know the challenge: AI usage is becoming commonplace, but it’s often unmanaged. Chat Hub changes that by providing a single, unified interface for your organization to direct users for all AI-related tasks and processes, bringing the power of n8n’s AI agents to every team member securely and simply. The Problem: The Rise of "Shadow AI" As AI capability and understanding grows, users across organizations are eager to use AI-powered tools to speed up their work. However, this can often lead to "Shadow AI": unmanaged and unmetered usage that causes headaches for IT and operations teams. This can include: - Inconsistency: Staff might regularly use their own AI-powered tools and models without consideration for their organization’s standards. - Security Risks: Without a centralized system, data integrity and security can…
87dReleaseby Paul Gordon
[OAI]OpenAI Blog· 52 articlesvisit →
3d ago
Introducing workspace agents in ChatGPT
Introducing workspace agents in ChatGPT Codex-powered agents for teams. Today, we’re introducing workspace agents in ChatGPT. Teams can now create shared agents that handle complex tasks and long-running workflows, all while operating within the permissions and controls set by their organization. Workspace agents are an evolution of GPTs. Powered by Codex, they can take on many of the tasks people already do at work—from preparing reports, to writing code, to responding to messages. They run in the cloud, so they can keep working even when you’re not. They’re also designed to be shared within an organization, so teams can build an agent once, use it together in ChatGPT or Slack, and improve it over time. AI has already helped people work faster on their own, but many of the most important workflows inside an organization depend on shared context, handoffs,…
3dRelease#gpt#agents
4d ago
Introducing ChatGPT Images 2.0
Introducing ChatGPT Images 2.0 A new era of image generation. (April 21, 2026)
15d ago
Our response to the Axios developer tool compromise
Our response to the Axios developer tool compromise We recently identified a security issue involving a third-party developer tool, Axios, that was part of a widely reported, broader industry incident. Out of an abundance of caution we are taking steps to protect the process that certifies our macOS applications are legitimate OpenAI apps. We found no evidence that OpenAI user data was accessed, that our systems or intellectual property was compromised, or that our software was altered. We are updating our security certificates, which will require all macOS users to update their OpenAI apps to the latest versions. This helps prevent any risk—however unlikely—of someone attempting to distribute a fake app that appears to be from OpenAI. You can update safely through an in-app update or at the official links below: The security and privacy of…
15dRelease#coding#local
17d ago
Introducing the Child Safety Blueprint
Introducing the Child Safety Blueprint A framework for combatting and preventing AI-enabled Child Sexual Exploitation Child sexual exploitation is one of the most urgent challenges of the digital age. AI is rapidly changing both how these harms emerge across the industry and how they can be addressed at scale. At OpenAI, we have built and continue to strengthen safeguards to prevent misuse of our systems, and we work closely with partners like the National Center for Missing and Exploited Children (NCMEC) and law enforcement to improve detection and reporting. This work has helped surface where stronger, shared standards are needed across the industry. Today, we’re introducing a policy blueprint that outlines a practical path forward for strengthening U.S. child protection frameworks in the age of AI. This blueprint reflects and incorporates feedback from several leading organizations and experts across the…
17dRelease#safety
39d ago
OpenAI Japan announces Japan Teen Safety Blueprint to put teen safety first
OpenAI Japan announces Japan Teen Safety Blueprint to put teen safety first Strengthening age-appropriate protections, parental support, and well-being-centered design in Japan. OpenAI Japan today announced the Japan Teen Safety Blueprint, a new framework to help teens use generative AI safely and with confidence. In Japan, where a growing number of teens are already using generative AI for learning, creativity, and everyday tasks, this work is especially important. As the first generation grows up alongside AI, it is critical to ensure that these technologies are designed with their safety and well-being in mind from the outset. Generative AI is already supporting people across a wide range of activities from learning and creative expression to everyday tasks that help individuals thrive at school, at work, and in their personal lives. At a broader level, it also has the potential to accelerate…
39dRelease#safety
51d ago
Introducing the Adoption news channel
Introducing the Adoption news channel Practical insights and frameworks to turn AI progress into business advantage A new phase of enterprise AI is underway. For the past two years, the story was largely about the pace of the technology: new models, new capabilities, new breakthroughs and demonstrations of what AI could do. That phase mattered. But it also created an information environment dominated by technical updates, product news, and benchmark performances which are not the bottleneck to adoption and value anymore. The defining question for leaders is no longer what AI can do but how to turn that capability into concrete operational change: better decisions, faster workflows, stronger execution, new forms of leverage, and ultimately new business models. That shift calls for a different kind of channel. That is why we are launching the Adoption channel, a new OpenAI business…
51dRelease
56d ago
Our agreement with the Department of War
Our agreement with the Department of War Update on March 2, 2026 Throughout our discussions, the Department made clear it shares our commitment to ensuring our tools will not be used for domestic surveillance. To make our principles as clear as possible, we worked together to add additional language to our agreement. This language makes explicit that our tools will not be used to conduct domestic surveillance of U.S. persons, including through the procurement or use of commercially acquired personal or identifiable information. The Department also affirmed that our services will not be used by Department of War intelligence agencies like the NSA. Any services to those agencies would require a new agreement. The new language reads: - Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978,…
56dRelease
57d ago
An update on our mental health-related work
An update on our mental health-related work Each week, more than 900 million people use ChatGPT to improve their daily lives through uses such as learning new skills or navigating complex healthcare systems. Our ongoing safety work continues to play an important role in delivering these benefits to everyday people, as well as supporting scientific research and discovery. Since introducing parental controls in September 2025, we’ve seen encouraging engagement from families and will continue building on these protections. Working closely with experts from our Council on Well-Being and AI and our Global Physicians Network, we will also soon be introducing a trusted contact feature, which will allow adult users to designate someone to receive notifications when they may need additional support. As a reminder, parents also receive safety notifications about their teens’ use of ChatGPT through parental controls. We’ll share…
57dRelease#safety
71d ago
Introducing Lockdown Mode and Elevated Risk labels in ChatGPT
As AI systems take on more complex tasks—especially those that involve the web and connected apps—the security stakes change. One emerging risk has become especially important: prompt injection. In these attacks, a third party attempts to mislead a conversational AI system into following malicious instructions or revealing sensitive information. Today, we’re introducing two new protections designed to help users and organizations mitigate prompt injection attacks, with clearer visibility into risk and stronger controls: - Lockdown Mode in ChatGPT, an advanced, optional security setting for higher-risk users - “Elevated Risk” labels for certain capabilities in ChatGPT, ChatGPT Atlas, and Codex that may introduce additional risk These additions build on our existing protections across the model, product, and system levels. This includes sandboxing, protections against URL-based data exfiltration, monitoring and enforcement, and enterprise controls like role-based access and audit logs. Lockdown Mode…
71dRelease#gpt
79d ago
Introducing Trusted Access for Cyber
GPT‑5.3‑Codex is our most cyber-capable frontier reasoning model to date. Cybersecurity is one of the clearest places where that progress can both meaningfully strengthen the broader ecosystem and introduce new risks. We’ve moved from models that can auto-complete a few lines in a code editor, to models that can work autonomously for hours or even days to accomplish complex tasks. These capabilities can dramatically strengthen cyber defense by accelerating vulnerability discovery and remediation. To unlock the full defensive potential of these capabilities while reducing the risk of misuse, we are piloting Trusted Access for Cyber: an identity and trust-based framework designed to help ensure enhanced cyber capabilities are being placed in the right hands. This reflects our broader approach to responsibly deploying highly capable models. In addition, we are committing $10 million in API credits to accelerate cyber defense. It…
79dRelease
79d ago
Introducing OpenAI Frontier
AI has let teams take on things they used to talk about but never execute. In fact, 75% of enterprise workers say AI helped them do tasks they couldn’t do before. We’re hearing this from every department, not just technical teams. The way work gets done has changed, and enterprises are starting to feel it in big ways. We’ve seen this in action with over 1 million businesses over the past few years. At a major manufacturer, agents reduced production optimization work from six weeks to one day. A global investment company deployed agents end-to-end across the sales process to open up over 90% more time for salespeople to spend with customers. And, at a large energy producer, agents helped increase output by up to 5%, which adds over a billion in additional revenue. This is happening for AI leaders…
79dRelease
82d ago
Introducing the Codex app
March 4, 2026 update: The Codex app is now available on Windows. Today, we’re introducing the Codex app for macOS—a powerful new interface designed to effortlessly manage multiple agents at once, run work in parallel, and collaborate with agents over long-running tasks. We're also excited to show more people what's now possible with Codex. For a limited time we're including Codex with ChatGPT Free and Go, and we're doubling the rate limits on Plus, Pro, Business, Enterprise, and Edu plans. Those higher limits apply everywhere you use Codex—in the app, from the CLI, in your IDE, and in the cloud. The Codex app changes how software gets built and who can build it—from pairing with a single coding agent on targeted edits to supervising coordinated teams of agents across the full lifecycle of designing, building, shipping, and maintaining software. Since…
82dRelease#agents#coding
94d ago
Introducing Edu for Countries
Introducing OpenAI’s Education for Countries Helping countries build future-ready education systems and workforces with AI The history of technology suggests that the biggest economic gains come not from invention alone, but from turning new capabilities into scaled, everyday use. But even as AI capabilities have improved, we see a widening “capability overhang,” defined as the gap between what AI tools can do and how people are using them. Education systems are a critical route through which this gap is closed. Studies project that by 2030 nearly 40% of the core skills workers rely on today will change, driven largely by AI. By embedding AI tools, training, and research into the core infrastructure of schools and universities, education systems can evolve alongside these shifts and better prepare students to thrive in a world with AI. It…
94dRelease
107d ago
OpenAI for Healthcare
Introducing OpenAI for Healthcare Secure AI products to help healthcare organizations scale high-quality care, reduce admin work for teams, and power custom clinical solutions—while protecting health data. We’re introducing OpenAI for Healthcare, a set of products designed to help healthcare organizations deliver more consistent, high-quality care for patients—while supporting their HIPAA compliance requirements. This includes ChatGPT for Healthcare, available starting today and already rolling out to leading institutions like AdventHealth, Baylor Scott & White Health, Boston Children’s Hospital, Cedars-Sinai Medical Center, HCA Healthcare, Memorial Sloan Kettering Cancer Center, Stanford Medicine Children’s Health, and University of California, San Francisco (UCSF). It also includes the OpenAI API, which powers much of today’s healthcare ecosystem. Thousands of organizations have configured it to support HIPAA-compliant use—such as Abridge, Ambience, and EliseAI. Healthcare is under unprecedented strain. Demand is rising, clinicians are overwhelmed by administrative…
107dRelease#gpt
108d ago
Introducing ChatGPT Health
Introducing ChatGPT Health A dedicated experience in ChatGPT designed for health and wellness. We’re introducing ChatGPT Health, a dedicated experience that securely brings your health information and ChatGPT’s intelligence together, to help you feel more informed, prepared, and confident navigating your health. Health is already one of the most common ways people use ChatGPT, with hundreds of millions of people asking health and wellness questions each week. ChatGPT Health builds on the strong privacy, security, and data controls across ChatGPT with additional, layered protections designed specifically for health—including purpose-built encryption and isolation to keep health conversations protected and compartmentalized. You can securely connect medical records and wellness apps to ground conversations in your own health information, so responses are more relevant and useful to you. Designed in close collaboration with physicians, ChatGPT Health helps people take a more active…
108dRelease#gpt#local
113d ago
Announcing OpenAI Grove Cohort 2
Apply to OpenAI Grove A program for individuals early in their company building journey. Update on January 12, 2026: Applications are now closed. Today, we’re opening the applications for the next cohort of OpenAI Grove, a program for technical talent at the very start of their company-building journey. The Grove is not a startup accelerator or traditional program: it offers pre-idea individuals deeply curious about building in AI a dense talent network, co-building with OpenAI researchers, and resources designed to accelerate your journey. As participants explore early concepts, they will receive counsel from the OpenAI team and community with peers in OpenAI Grove. This program is the starting point of a long-term network. It will begin with five weeks of content and programming hosted in the OpenAI San Francisco HQ, including in-person workshops, weekly office hours, and mentoring from OpenAI…
113dRelease
129d ago
Introducing OpenAI Academy for News Organizations
Introducing OpenAI Academy for News Organizations Working with the American Journalism Project and The Lenfest Institute to launch a new learning hub for journalists and publishers using AI. At OpenAI, we believe journalism is essential to a healthy democracy. People depend on reliable local and national reporting to understand their communities and the world around them, and we’re committed to being a strong partner to news organizations—supporting their work and convening the right people to move the industry forward. We’re building on our partnership with the American Journalism Project and The Lenfest Institute for Journalism to launch OpenAI Academy for News Organizations, a new learning hub for journalists, editors, and publishers using AI. We shared this initiative yesterday at the AI and Journalism Summit,…
129dRelease#training
157d ago
A free version of ChatGPT built for teachers
A free version of ChatGPT built for teachers A secure ChatGPT workspace that supports teachers in their everyday work so they can focus on what matters most—plus admin controls for school and district leaders. Free for verified U.S. K–12 educators through June 2027. Today we’re introducing ChatGPT for Teachers and making it free through June 2027. Of the 800 million people who use ChatGPT each week, teachers are some of the earliest and most active adopters. Three in five already use an AI tool, and those that use it weekly report saving hours each week—giving them more time to spend with students. They also play a critical role in helping students and families understand how AI can support learning. ChatGPT for Teachers is built for both educators and school leaders. Teachers get a secure workspace to…
157dRelease#gpt#local
162d ago
Introducing OpenAI for Ireland
Introducing OpenAI for Ireland Today we are launching ‘OpenAI for Ireland’—a new initiative from OpenAI working with the Irish Government, Dogpatch Labs and Patch to help Irish SMEs and founders seize the opportunity of AI to grow, innovate and build for the future. Jason Kwon, Chief Strategy Officer at OpenAI, announced the initiative in Dublin alongside Jack Chambers T.D., Minister for Public Expenditure, Infrastructure, Public Service Reform and Digitalisation, Patrick Walsh, CEO of Dogpatch Labs and Tom McCarthy, Chair and Co-Founder of Patch. Ireland has a long history of global technology leadership, innovation and entrepreneurship and the country has now embraced AI as the next big technological leap. Over one million people in Ireland—from college students to entrepreneurs—are using ChatGPT every week to learn, get help with everyday tasks, and to scale traditional and new AI powered businesses. Ireland is…
162dRelease
163d ago
Introducing group chats in ChatGPT
Introducing group chats in ChatGPT Collaborate with others, and ChatGPT, in the same conversation. Update on November 20, 2025: Early feedback from the pilot has been positive, so we’re expanding group chats to all logged-in users on ChatGPT Free, Go, Plus and Pro plans globally over the coming days. We will continue refining the experience as more people start using it. Today, we’re beginning to pilot a new experience in a few regions that makes it easy for people to collaborate with each other—and with ChatGPT—in the same conversation. With group chats, you can bring friends, family, or coworkers into a shared space to plan, make decisions, or work through ideas together. Whether you’re organizing a group dinner or drafting an outline with coworkers, ChatGPT can help. Group chats are separate from your private conversations, and your personal ChatGPT memory…
163dRelease#gpt
166d ago
Free ChatGPT for transitioning U.S. servicemembers and veterans
Free ChatGPT for transitioning U.S. servicemembers and veterans Every year, hundreds of thousands of U.S. servicemembers transition from military to civilian life. It’s a big change with lots of promise, but there’s also a lot to figure out: job hunting, evaluating education options, navigating earned benefits, housing, finances, and getting used to a new routine. Nearly half of post-9/11 veterans say their adjustment to civilian life was difficult. OpenAI’s mission is to ensure that AI tools benefit all of humanity, and that includes making the transition a little easier for those who have served. Today we’re announcing that U.S. servicemembers and veterans can receive a free year of ChatGPT Plus if they’re within 12 months of retirement or separation. This initiative originated with veterans working at OpenAI who use these tools every day. They know firsthand how ChatGPT can help…
166dRelease#gpt
170d ago
Introducing the Teen Safety Blueprint
Introducing the Teen Safety Blueprint A framework for building AI that protects, empowers, and creates safer experiences for teens. We are introducing the Teen Safety Blueprint, a roadmap for building AI tools responsibly and a practical starting point for policymakers who are working to set standards for teen use of AI. Young people deserve technology that expands opportunity and protects their well-being. The Blueprint helps define how AI should work for teens, including age-appropriate design, meaningful product safeguards, and ongoing research and evaluation. The decisions made today will shape how teens use and are protected by this technology for years to come. We aren’t waiting for regulation to catch up; we're putting this framework into action across our products. We’re anticipating risks and proactively strengthening protections for young people. In recent months we have strengthened safeguards…
170dRelease#safety
171d ago
1 million business customers putting AI to work
1 million business customers: the fastest-growing business platform in history Customers such as Amgen, Commonwealth Bank, Booking.com, Cisco, Lowe’s, Morgan Stanley, T-Mobile, Target, and Thermo Fisher Scientific are already on board—with more joining every week. Today, we’re announcing that more than 1 million business customers around the world are directly using OpenAI—the fastest-growing business platform in history. This includes all organizations that actively pay OpenAI for business use—either through ChatGPT for Work, or through direct consumption of our models through our developer platform. We’re proud to work with category leaders in industries like financial services, healthcare, retail, and more, where our technology is making intelligence central to their customer experiences, internal operations, and team-level workflows. Our enterprise momentum is fueled in part by consumer adoption. With more than 800 million weekly users already familiar with ChatGPT, adoption and ROI within…
185d ago
The next chapter for UK sovereign AI
The next chapter for UK sovereign AI Earlier this year, we signed an MOU with the UK Government to help more British people, businesses and institutions benefit from the potential of AI technology, and to deliver on the goals of the UK AI Action Plan. Together with the Government, we want to drive AI-led growth, accelerate adoption across the private and public sector, and expand the UK’s sovereign AI capabilities so that AI is widely available and served in the UK, for the UK. Today, building on this momentum, we’re announcing a new agreement with the UK Ministry of Justice for civil servants to use and benefit from ChatGPT, and the option of UK data residency for customers using our API Platform, ChatGPT Enterprise and ChatGPT Edu. Many of our leading British customers will gather at the OpenAI Frontiers event…
185dRelease#gpt
186d ago
Introducing ChatGPT Atlas, the browser with ChatGPT built in
Introducing ChatGPT Atlas The browser with ChatGPT built in. Today we’re introducing ChatGPT Atlas, a new web browser built with ChatGPT at its core. AI gives us a rare moment to rethink what it means to use the web. Last year, we added search in ChatGPT so you could instantly find timely information from across the internet—and it quickly became one of our most-used features. But your browser is where all of your work, tools, and context come together. A browser built with ChatGPT takes us closer to a true super-assistant that understands your world and helps you achieve your goals. With Atlas, ChatGPT can come with you anywhere across the web—helping you in the window right where you are, understanding what you’re trying to do, and completing tasks for you, all without copying and pasting or leaving the page.…
186dRelease#gpt#local
201d ago
Accelerating AI adoption in Europe
Accelerating AI adoption in Europe OpenAI and Allied for Startups today announced the release of the Hacktivate AI report—a collection of 20 ideas to accelerate broad-based AI adoption in Europe and boost the bloc’s competitiveness. The release comes just days before the European Commission is expected to unveil its Apply AI Strategy, a plan to encourage the real use of AI across business and the public sector. There is already significant demand for OpenAI technology in Europe with EU Member States ranking amongst our top markets globally for subscribers, API developers and business customers. Every day, people, developers, institutions, start-ups and leading enterprises are using OpenAI’s tools—and primarily, our freely available tools—to create economic opportunities for themselves and others throughout the continent, from speeding up the development of life-saving medical treatments with Sanofi…
201dRelease
221d ago
Introducing Stargate UK
Introducing Stargate UK We’re announcing Stargate UK—an AI infrastructure partnership with NVIDIA and Nscale that strengthens the UK’s sovereign compute capabilities. Stargate UK ensures OpenAI’s world-leading AI models can run on local computing power in the UK, for the UK—particularly for specialist use cases where jurisdiction matters. This will help power the UK’s future economy, boost its global competitiveness and deliver on the country’s national AI Opportunities Action Plan. The initiative marks a major step forward in the US-UK technology partnership and is the latest rollout of OpenAI for Countries to support governments that want to build out their sovereign AI capabilities. It follows the MoU we signed with the UK Government in July 2025 to explore the UK’s infrastructure priorities and accelerate the adoption of AI. Nscale is set to significantly expand its planned UK capacity for Stargate UK…
221dRelease
222d ago
Introducing upgrades to Codex
Introducing upgrades to Codex Codex just got faster, more reliable, and better at real-time collaboration and tackling tasks independently anywhere you develop—whether via the terminal, IDE, web, or even your phone. Update on September 23, 2025: GPT‑5‑Codex is now available to developers using Codex via API key (in addition to being available to developers using Codex via their ChatGPT subscription). GPT‑5‑Codex is available at the same price as GPT‑5, and is available in the Responses API only. The underlying model snapshot will be regularly updated. Check out the Codex developer documentation and changelog for more details. Today, we’re releasing GPT‑5‑Codex—a version of GPT‑5 further optimized for agentic coding in Codex. GPT‑5‑Codex was trained with a focus on real-world software engineering work; it’s equally proficient at quick, interactive sessions and at independently…
222dRelease
232d ago
OpenAI and Greek Government launch ‘OpenAI for Greece’
OpenAI and Greek Government launch ‘OpenAI for Greece’ Today we’re launching ‘OpenAI for Greece’—a new partnership between OpenAI, the Government of the Hellenic Republic, the Onassis Foundation, and Endeavor Greece to expand access to high-quality AI tools in secondary education and accelerate innovation across Greece’s start-up ecosystem. AI is a foundational technology for countries with the potential to support learning, fuel innovation, and drive economic growth. In Greece, the number of weekly active ChatGPT users has increased seven-fold over the past year and the Government has already developed a national blueprint to help the country seize the opportunity, spanning innovation and entrepreneurship, education and research, and AI integration into public sector services. To help realise this vision, the ‘OpenAI for Greece’ Memorandum of Understanding was signed today at the Hellenic Expo by Prime Minister Kyriakos Mitsotakis…
262d ago
Providing ChatGPT to the Entire U.S. Federal Workforce
Providing ChatGPT to the entire U.S. federal workforce First-of-its-kind partnership with General Services Administration will give federal agencies access to ChatGPT Enterprise for $1 for the next year Today, OpenAI for Government is announcing a new partnership with the U.S. General Services Administration (GSA) to launch a transformative initiative. For the next year, ChatGPT Enterprise will be available to the entire federal executive branch workforce at essentially no cost. Participating U.S. federal agencies will be able to use our leading frontier models through ChatGPT Enterprise, for the nominal cost of $1 per agency for the next year. This effort delivers on a core pillar of the Trump Administration’s AI Action Plan by making powerful AI tools available across the federal government so that workers can spend less time on red tape and paperwork, and more time…
262dRelease#gpt
276d ago
Announcing OpenAI DevDay 2025
OpenAI DevDay is back and bigger than ever We’re hosting our third annual OpenAI DevDay on October 6, 2025 at Fort Mason in San Francisco. From day one, developers have been central to OpenAI’s mission to ensure AGI benefits all of humanity. You’ve used our tools to build first-of-their-kind products, launch startups, accelerate research, and reimagine what software can do. OpenAI DevDay is our way of celebrating and building upon that work—by sharing what’s new, surfacing what’s possible, and spending time with the people building at the frontier of AI. This year, we’re bringing together more than 1,500 developers for our biggest DevDay yet. Speakers will include Sam Altman, Chief Executive Officer, Greg Brockman, President, and many more to come! At OpenAI DevDay, you’ll get an early look at what’s coming next from OpenAI, hear directly from our research, product…
276dRelease
313d ago
Introducing OpenAI for Government
Introducing OpenAI for Government Today we’re launching OpenAI for Government, a new initiative focused on bringing our most advanced AI tools to public servants across the United States. We're supporting the U.S. government's efforts in adopting best-in-class technology and deploying these tools in service of the public good. Our goal is to unlock AI solutions that enhance the capabilities of government workers, help them cut down on the red tape and paperwork, and let them do more of what they come to work each day to do: serve the American people. OpenAI for Government consolidates our existing efforts to provide our technology to the U.S. government—including previously announced customers and partnerships as well as our ChatGPT Gov product—under one umbrella as we expand this work. Our established collaborations with the U.S. National Labs, the Air Force Research Laboratory, NASA, NIH,…
313dRelease
317d ago
Bringing the magic of AI to Mattel’s iconic brands
Bringing the magic of AI to Mattel’s iconic brands OpenAI is teaming up with Mattel, a leading global toy and family entertainment company known for capturing the imagination of generations, to bring a new dimension of AI-powered innovation and magic to Mattel’s iconic brands. Mattel has more than 80 years of experience introducing products and experiences that delight and captivate fans in a safe, thoughtful way. By tapping into OpenAI’s AI capabilities, Mattel aims to reimagine how fans can experience and interact with its cherished brands, with careful consideration to ensure positive, enriching experiences. “Each of our products and experiences is designed to inspire fans, entertain audiences, and enrich lives through play. AI has the power to expand on that mission and broaden the reach of our brands in new and exciting ways,” said Josh Silverman, chief franchise officer of…
317dRelease
331d ago
Creating websites in minutes with AI Website Builder
Wix helps anyone create fully functional websites in minutes with GPT‑4o Since its founding, Wix has aimed to simplify website creation for individuals and businesses. In 2016, the company introduced Wix ADI (Artificial Design Intelligence), one of the first AI-driven solutions for generating a site’s UI. Over the years, Wix has expanded its AI capabilities, integrating OpenAI models to enhance content generation, image processing, and customer support. By 2023, Wix launched AI text creator, which allowed users to generate website text, including headlines, taglines, and descriptions, by answering just a few prompts. This paved the way for Wix’s latest innovation: an AI website builder, powered by ChatGPT, that lets users create an entire website just by chatting with AI. Today, Wix offers a fully-fledged AI website builder that makes creating a site…
331dRelease#gpt
344d ago
Addendum to o3 and o4-mini system card: Codex
Addendum to OpenAI o3 and o4-mini system card: Codex Codex is a cloud-based coding agent. Codex is powered by codex-1, a version of OpenAI o3 optimized for software engineering. codex-1 was trained using reinforcement learning on real-world coding tasks in a variety of environments to generate code that closely mirrors human style and PR preferences, adheres precisely to instructions, and iteratively runs tests until passing results are achieved. Users can ask Codex to perform coding tasks or to answer questions about a codebase. Each agent runs in its own cloud container with no internet access. The container is preloaded with the user’s code and a development environment defined by the user, including any dependencies, configuration, or tooling they specify. After setup, internet access is disabled and the model trajectory begins. Within that environment, Codex can read and edit files, as…
344dRelease#coding
344d ago
Introducing Codex
Introducing Codex A cloud-based software engineering agent that can work on many tasks in parallel, powered by codex-1. Available to ChatGPT Pro, Business, and Enterprise users today, and Plus users soon. Update on June 3, 2025: Codex is now available to ChatGPT Plus users. We’re also enabling users to provide Codex with internet access during task execution. Please refer to the changelog and docs for more details. Today we’re launching a research preview of Codex: a cloud-based software engineering agent that can work on many tasks in parallel. Codex can perform tasks for you such as writing features, answering questions about your codebase, fixing bugs, and proposing pull requests for review; each task runs in its own cloud sandbox environment, preloaded with your repository. Codex is powered by codex-1, a version of…
344dRelease
353d ago
Introducing data residency in Asia
Introducing data residency in Asia Update on November 25, 2025 We’ve expanded our at-rest data residency options globally. For details on availability across additional regions, see our latest update. We’re announcing data residency in Japan, India, Singapore, and South Korea for ChatGPT Enterprise, ChatGPT Edu, and the API Platform. This helps organizations operating in these countries meet local data sovereignty requirements when using OpenAI products in their businesses and building new solutions with AI. Data residency builds on OpenAI’s existing enterprise-grade data privacy, security, and compliance features. With data residency, eligible API customers and new ChatGPT Enterprise and Edu customers can choose to have customer content stored at rest in supported countries. ChatGPT Enterprise and Edu New ChatGPT workspaces can be set up with data residency in supported countries, allowing customer content to be stored at rest in the region.…
353dRelease#local
353d ago
Introducing OpenAI for Countries
Introducing OpenAI for Countries A new initiative to support countries around the world that want to build on democratic AI rails. Our Stargate project, an unprecedented investment in America’s AI infrastructure announced in January with President Trump and our partners Oracle and SoftBank, is now underway with our first supercomputing campus in Abilene, Texas, and more sites to come. We’ve heard from many countries asking for help in building out similar AI infrastructure—that they want their own Stargates and similar projects. It’s clear to everyone now that this kind of infrastructure is going to be the backbone of future economic growth and national development. Technological innovation has always driven growth by helping people do more than they otherwise could—AI will scale human ingenuity itself and drive more prosperity by scaling our freedoms to learn, think, create and produce all at…
353dRelease
354d ago
Introducing AI stories: daily benefits shine a light on bigger opportunities
Sam Altman has written that we are entering the Intelligence Age, a time when AI will help people become dramatically more capable. The biggest problems of today—across science, medicine, education, national defense—will no longer seem intractable, but will in fact be solvable. New horizons of possibility and prosperity will open up. It won’t happen all at once, but the real news is that it’s happening already. Millions of Americans are already using tools like ChatGPT (powered by OpenAI’s most advanced technology) to help solve their problems, and are seeing real benefits in their daily lives. They share their stories with us every day. Some stories are of small, personal wins; some are consequential on a much larger scale. Whether it’s a cancer patient using AI to advocate for herself during treatment, a scientist making breakthrough discoveries,…
354dRelease
355d ago
Evolving OpenAI’s structure
Evolving OpenAI’s structure The OpenAI Board has an updated plan for evolving OpenAI’s structure. - OpenAI was founded as a nonprofit, and is today overseen and controlled by that nonprofit. Going forward, it will continue to be overseen and controlled by that nonprofit. - Our for-profit LLC, which has been under the nonprofit since 2019, will transition to a Public Benefit Corporation (PBC)–a purpose-driven company structure that has to consider the interests of both shareholders and the mission. - The nonprofit will control and also be a large shareholder of the PBC, giving the nonprofit better resources to support many benefits. - Our mission remains the same, and the PBC will have the same mission. We made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices…
355dRelease#safety
374d ago
Introducing OpenAI o3 and o4-mini
Update on June 10, 2025: OpenAI o3‑pro is now available to Pro users in ChatGPT, as well as in our API. Like OpenAI o1‑pro, o3‑pro is a version of our most intelligent model, OpenAI o3, designed to think longer and provide the most reliable responses. Full details can be found in our release notes. Today, we’re releasing OpenAI o3 and o4-mini, the latest in our o-series of models trained to think for longer before responding. These are the smartest models we’ve released to date, representing a step change in ChatGPT's capabilities for everyone from curious users to advanced researchers. For the first time, our reasoning models can agentically use and combine every tool within ChatGPT—this includes searching the web, analyzing uploaded files and other data with Python, reasoning deeply about visual inputs, and even generating images.…
374dRelease
375d ago
OpenAI announces nonprofit commission advisors
OpenAI announces nonprofit commission advisors OpenAI is appointing four new advisors to help inform OpenAI’s philanthropic efforts. At OpenAI, we believe AI should help people solve humanity’s hardest problems—and that includes empowering the organizations on the front lines of that work. That’s why we’re working to ensure that our already existing nonprofit is backed by resources that could be historic, and powered by our AI technology. Today, we’re announcing the members of our recently formed nonprofit commission: an experienced group of advisors convened to inform OpenAI’s philanthropic efforts. As we’ve said, OpenAI’s nonprofit isn’t going anywhere—and this commission will be key to expanding its reach and impact. The commission’s goal is to help ensure that our nonprofit becomes a force multiplier for communities and mission-driven organizations tackling urgent global challenges—from health and education to public service and scientific discovery. We’re…
375dRelease
383d ago
Canva enables creativity with AI
Canva enables creativity with AI A conversation with Cameron Adams, Chief Product Officer and Co-founder of Canva. Our Executive Function series features perspectives from leaders driving transformation through AI. Launched in 2013, Canva is an online design and visual communication platform with a mission to empower everyone in the world to design anything and publish anywhere. We spoke with Cameron Adams about Canva’s evolution, AI powering more end-to-end workflows, and enabling the creative process with new AI-powered tools. As a successful tech company, Canva continuously adapts to changing user needs and emerging trends by embracing new technologies to stay at the forefront. Our mission to empower the world to design remains technology-agnostic, allowing us to leverage the best available tools to fulfill that goal. The rise of AI is particularly exciting for us because it allows us to fulfill our…
383dRelease#rag#agents
396d ago
Introducing 4o Image Generation
Introducing 4o Image Generation Unlocking useful and valuable image generation with a natively multimodal model capable of precise, accurate, photorealistic outputs. At OpenAI, we have long believed image generation should be a primary capability of our language models. That’s why we’ve built our most advanced image generator yet into GPT‑4o. The result—image generation that is not only beautiful, but useful. From the first cave paintings to modern infographics, humans have used visual imagery to communicate, persuade, and analyze—not just to decorate. Today's generative models can conjure surreal, breathtaking scenes, but struggle with the workhorse imagery people use to share and create information. From logos to diagrams, images can convey precise meaning when augmented with symbols that refer to shared language and experience. GPT‑4o image generation excels at accurately rendering text, precisely following prompts, and leveraging 4o’s inherent knowledge base and…
396dRelease#multimodal
401d ago
Introducing next-generation audio models in the API
Introducing next-generation audio models in the API A new suite of audio models to power voice agents, now available to developers worldwide. Update on August 28, 2025: We announced the general availability of the Realtime API. Learn more here. Over the past few months, we’ve invested in advancing the intelligence, capabilities, and usefulness of text-based agents—or systems that independently accomplish tasks on behalf of users—with releases like Operator, Deep Research, Computer-Using Agents, and the Responses API with built-in tools. However, in order for agents to be truly useful, people need to be able to have deeper, more intuitive interactions with agents beyond just text—using natural spoken language to communicate effectively. Today, we’re launching new speech-to-text and text-to-speech audio models in the API—making it possible to build more powerful, customizable, and intelligent voice agents that offer real value. Our latest speech-to-text…
408d ago
OpenAI’s proposals for the U.S. AI Action Plan
OpenAI’s proposals for the U.S. AI Action Plan Recommendations build on OpenAI’s Economic Blueprint to strengthen America’s AI leadership. Today, OpenAI shared our recommendations with the White House Office of Science and Technology Policy (OSTP) for the upcoming US AI Action Plan. As our CEO Sam Altman has written, we are at the doorstep of the next leap in prosperity: the Intelligence Age. But we must ensure that people have freedom of intelligence, by which we mean the freedom to access and benefit from AI as it advances, protected from both autocratic powers that would take people’s freedoms away, and layers of laws and bureaucracy that would prevent our realizing them. Our submission builds on our Economic Blueprint released in January and includes a set of proposals covering critical areas such as national…
408dRelease
409d ago
Personalizing travel at scale with OpenAI
Booking.com and OpenAI personalize travel at scale By integrating its data systems with OpenAI’s LLMs, Booking.com delivers smarter search, faster support, and intent-driven travel experiences. As one of the world’s largest travel marketplaces, Booking.com makes it easier for millions of travelers to experience the world, offering seamless access to flights, stays, and activities in one place. With OpenAI, the company saw an opportunity to become a true travel companion, addressing the discovery phase to help travelers uncover destinations and experiences they didn’t even know they wanted. “When ChatGPT launched in 2022, I got this tingle,” says Adrienne Enggist, Senior Director of Product Marketplace at Booking.com. “It reminded me of the early days of broadband access—this massive opportunity to change how people engage with travel. We knew this could help us finally crack the discovery challenge.” To date, Booking.com has launched…
409dRelease#gpt
417d ago
Introducing NextGenAI
Introducing NextGenAI: A consortium to advance research and education with AI OpenAI commits $50M in funding and tools to leading institutions. Today, we’re launching NextGenAI, a first-of-its-kind consortium with 15 leading research institutions dedicated to using AI to accelerate research breakthroughs and transform education. AI has the power to drive progress in research and education—but only when people have the right tools to harness it. That’s why OpenAI is committing $50M in research grants, compute funding, and API access to support students, educators, and researchers advancing the frontiers of knowledge. Uniting institutions across the U.S. and abroad, NextGenAI aims to catalyze progress at a rate faster than any one institution would alone. This initiative is built not only to fuel the next generation of discoveries, but also to prepare the next generation to shape AI’s future. NextGenAI’s founding partners are…
417dRelease
424d ago
Estonia and OpenAI to bring ChatGPT to schools nationwide
Estonia and OpenAI to bring ChatGPT to schools nationwide OpenAI to work with Estonia’s government on the world’s first initiative to integrate ChatGPT Edu into a national education system. OpenAI is proud to work with Estonia’s government on a world-first initiative to provide all students and teachers in the secondary school system with access to ChatGPT Edu, a version of ChatGPT customized for education systems, starting with 10th and 11th graders by September 2025. To date, ChatGPT has become a go-to tool for students globally to personalize their education and advance their personal development. Most ChatGPT users—nearly four in five—are under the age of 35 and the majority of conversations are focused on learning and schoolwork. By supporting AI literacy programs, expanding access to AI, and developing policies to make AI training accessible and affordable, we can ensure students will…
424dRelease#gpt#training
440d ago
Introducing the Intelligence Age
Introducing the Intelligence Age The wheel, the plow. The compass, the telescope. The train, the car, the plane. The light bulb, the telephone, the internet, the smartphone. And so many others. All innovations created or harnessed by humans, for humans—used as tools to explore, discover, and create better lives for ourselves, our families and others. All progress has a starting point. Artificial intelligence is the most powerful tool humans have ever invented. In just over two years, more than 300 million people around the world have used ChatGPT’s freely available intelligence to ideate, discover, and break through beyond what we’re currently capable of doing on our own; in January, one in seven American adults used ChatGPT.1 Most of our users—nearly eight in 10—are under the age of 35.2 California State University is making ChatGPT available to half a million students…
440dRelease#multimodal
444d ago
Introducing data residency in Europe
Introducing data residency in Europe Update on January 16, 2026 We’ve expanded our data residency offering with options for in-region GPU inference in the U.S. or Europe for eligible ChatGPT Enterprise, ChatGPT Edu, and ChatGPT for Healthcare customers. Learn more. Update on November 25, 2025 We’ve expanded our at-rest data residency options globally. For details on availability across additional regions, see our latest update. Update on October 27, 2025 In addition to Europe, we’ve expanded at-rest data residency to the UK, US, Japan, Canada, South Korea, Singapore, Australia, India, and the UAE—for eligible API customers and for new ChatGPT Enterprise and Edu workspaces. We’re announcing data residency in Europe for ChatGPT Enterprise, ChatGPT Edu, and the API Platform. This helps organizations operating in Europe meet local data sovereignty requirements when using OpenAI products in their businesses…
444dRelease#local
445d ago
Creating nail art with ChatGPT
Ten tiny canvases Using ChatGPT to find inspiration for nail art. In a tiny salon lit with blue light, two women sit across from each other, hand-in-hand. They talk in quiet voices among rainbow stacks of tiny nail polish bottles. The sharp scent of solvents and adhesives hangs in the air, but neither seems to notice. It’s already been two hours of intense concentration. Taby’s client arrived with last month’s nails already out of season, eager and excited to update the color and shape of what Taby calls “ten tiny canvases” at the end of her fingertips. With careful precision and detailed steps, Taby has removed the loud acrylics from the soft pink keratin beneath. After cleaning and polishing her client’s natural nail beds, she’s spent the last hour applying a fresh set of acrylics with delicate strokes of glue.…
445dRelease#gpt
[RB]Replicate Blog · 1 article · visit →
221d ago
Introducing our new search API
Introducing our new search API We’ve added a new search API to help you find the best models. This API is currently in beta, but it’s already available to all users in our TypeScript and Python SDKs, and our MCP servers. Here’s an example of how to use it with cURL:

curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  "https://api.replicate.com/v1/search?query=lip+sync"

Here’s a video of the search API in action using our MCP server with Claude Desktop: More metadata The new search API returns results for models, collections, and documentation pages that match your query.

{
  query: "lip sync",
  models: [
    {model: { url, run_count, etc }, metadata: { tags, score, etc }},
    {model: { url, run_count, etc }, metadata: { tags, score, etc }},
    {model: { url, run_count, etc }, metadata: { tags, score, etc }},
  ],
  collections: [
    {name,…
221dRelease
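The endpoint shown in the excerpt can also be called without an SDK. Below is a minimal Python sketch using only the standard library; the helper name `build_search_request` and the placeholder token are illustrative assumptions, not part of Replicate's documented interface.

```python
import urllib.parse
import urllib.request

API_BASE = "https://api.replicate.com/v1/search"

def build_search_request(query: str, token: str) -> urllib.request.Request:
    """Build an authenticated GET request for the beta search endpoint."""
    url = API_BASE + "?" + urllib.parse.urlencode({"query": query})
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

# "r8_example" is a placeholder, not a real credential.
req = build_search_request("lip sync", "r8_example")
print(req.full_url)  # → https://api.replicate.com/v1/search?query=lip+sync
# urllib.request.urlopen(req) would then fetch the JSON body containing
# the models, collections, and documentation-page matches described above.
```

Passing the query through `urlencode` reproduces the `query=lip+sync` encoding from the cURL example while keeping arbitrary search strings safe.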
[SWB]Simon Willison Blog · 6 articles · visit →
5d ago
llm-openrouter 0.6
20th April 2026 New llm openrouter refresh command for refreshing the list of available models without waiting for the cache to expire. I added this feature so I could try Kimi 2.6 on OpenRouter as soon as it became available there. Here's its pelican - this time as an HTML page because Kimi chose to include an HTML and JavaScript UI to control the animation. Transcript here. Recent articles - DeepSeek V4 - almost on the frontier, a fraction of the price - 24th April 2026 - Extract PDF text in your browser with LiteParse for the web - 23rd April 2026 - A pelican for GPT-5.5 via the semi-official Codex backdoor API - 23rd April 2026
5dRelease
8d ago
datasette 1.0a28
17th April 2026 I was upgrading Datasette Cloud to 1.0a27 and discovered a nasty collection of accidental breakages caused by changes in that alpha. This new alpha addresses those directly: - Fixed a compatibility bug introduced in 1.0a27 where execute_write_fn() callbacks with a parameter name other than conn were seeing errors. (#2691) - The database.close() method now also shuts down the write connection for that database. - New datasette.close() method for closing down all databases and resources associated with a Datasette instance. This is called automatically when the server shuts down. (#2693) - Datasette now includes a pytest plugin which automatically calls datasette.close() on temporary instances created in function-scoped fixtures and during tests. See Automatic cleanup of Datasette instances for details. This helps avoid running out of file descriptors in plugin test suites that were written before the Database(is_temp_disk=True) feature introduced in Datasette…
8dRelease
10d ago
datasette-export-database 0.3a1
15th April 2026 This plugin was using the ds_csrftoken cookie as part of a custom signed URL, which needed upgrading now that Datasette 1.0a27 no longer sets that cookie.
10dRelease
10d ago
datasette 1.0a27
15th April 2026 Two major changes in this new Datasette alpha. I covered the first of those in detail yesterday - Datasette no longer uses Django-style CSRF form tokens, instead using modern browser headers as described by Filippo Valsorda. The second big change is that Datasette now fires a new RenameTableEvent any time a table is renamed during a SQLite transaction. This is useful because some plugins (like datasette-comments) attach additional data to table records by name, so a renamed table requires them to react in appropriate ways. Here are the rest of the changes in the alpha: - New actor= parameter for datasette.client methods, allowing internal requests to be made as a specific actor. This is particularly useful for writing automated tests. (#2688) - New Database(is_temp_disk=True) option, used internally for the internal database. This helps resolve intermittent database locked errors…
10dRelease
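The cookie-free CSRF approach mentioned in that entry can be sketched generically: the server lets reads through, and allows writes only when the browser-set Sec-Fetch-Site request header indicates a same-origin request or a direct navigation. This is a hedged illustration of the general technique, not Datasette's actual implementation; the function name is hypothetical, and a production check would also need a fallback (such as Origin header comparison) for older browsers that do not send fetch metadata.

```python
SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

def is_allowed(method: str, headers: dict) -> bool:
    """Header-based CSRF check: reads always pass; writes must arrive
    same-origin or via direct navigation ("none"). Browsers attach
    Sec-Fetch-Site automatically and page scripts cannot forge it."""
    if method.upper() in SAFE_METHODS:
        return True
    site = headers.get("Sec-Fetch-Site", "")
    return site in ("same-origin", "none")

print(is_allowed("POST", {"Sec-Fetch-Site": "cross-site"}))  # → False
```

The appeal over form tokens is that no per-session state or hidden form field is needed; the browser itself vouches for where the request came from.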
10d ago
Zig 0.16.0 release notes: "Juicy Main"
15th April 2026 - Link Blog Zig 0.16.0 release notes: "Juicy Main" (via) Zig has really good release notes - comprehensive, detailed, and with relevant usage examples for each of the new features. Of particular note in the newly released Zig 0.16.0 is what they are calling "Juicy Main" - a dependency injection feature for your program's main() function where accepting a process.Init parameter grants access to a struct of useful properties:

const std = @import("std");

pub fn main(init: std.process.Init) !void {
    /// general purpose allocator for temporary heap allocations:
    const gpa = init.gpa;
    /// default Io implementation:
    const io = init.io;
    /// access to environment variables:
    std.log.info("{d} env vars", .{init.environ_map.count()});
    /// access to CLI arguments
    const args = try init.minimal.args.toSlice(
        init.arena.allocator()
    );
}
10dRelease
14d ago
SQLite 3.53.0
11th April 2026 - Link Blog SQLite 3.53.0 (via) SQLite 3.52.0 was withdrawn so this is a pretty big release with a whole lot of accumulated user-facing and internal improvements. Some that stood out to me: - ALTER TABLE can now add and remove NOT NULL and CHECK constraints - I've previously used my own sqlite-utils transform() method for this. - New json_array_insert() function and its jsonb equivalent. - Significant improvements to CLI mode, including result formatting. The result formatting improvements come from a new library, the Query Results Formatter. I had Claude Code (on my phone) compile that to WebAssembly and build this playground interface for trying that out.
14dRelease
[TVA]The Verge AI · 2 articles · visit →
4d ago
OpenAI’s updated image generator can now pull information from the web
OpenAI is rolling out the latest version of its AI-powered image generator with new “thinking capabilities,” allowing it to search the web to help it create multiple images from a single prompt. On Tuesday, OpenAI announced that ChatGPT Images 2.0 can now create more “sophisticated” images, with improvements to its ability to follow instructions, preserve details of your choosing, and generate text. The update allows ChatGPT Images 2.0 to create a series of images based on one prompt. It’s powered by OpenAI’s new GPT Image 2 model, with new thinking capabilities available to ChatGPT Plus, Pro, Business, and Enterprise subscribers. When a thinking model is selected, the chatbot’s image generator can pull information…
4dRelease#gpt by Emma Roth
4d ago
Yelp is making its AI chatbot way more useful
Yelp is giving its chatbot assistant a major upgrade, turning the platform into something closer to a digital concierge with a suite of new features designed for “getting things done.” The move, one of several AI-focused updates in recent months, is part of a broader industry push to make AI more relevant and practically useful to consumers while turning huge troves of user-generated data into a competitive edge. The platform says it wants people to use the AI chatbot to “search less and do more.” In a press release, Yelp says the Yelp Assistant chatbot will be at “the center of the app experience,” where it can answer questions, make recommendations, and even handle…
4dRelease by Robert Hart
[WA]Wired AI · 1 article · visit →
4d ago
Mozilla Used Anthropic’s Mythos to Find and Fix 151 Bugs in Firefox
Amid a raging debate over the impact that new AI models will have on cybersecurity, Mozilla said on Tuesday that its Firefox 150 browser release this week includes protections for 271 vulnerabilities identified using early access to Anthropic's Mythos Preview. The Firefox team says that it has taken resources and discipline to adjust to the firehose of bugs that new AI tools can uncover, but that this big lift is necessary for the security of Mozilla’s users, given that the capabilities will inevitably be in attackers’ hands soon. Both Anthropic and OpenAI have announced new AI models in recent weeks that the companies say have advanced cybersecurity capabilities that could represent a turning point in how defenders—and, crucially, attackers—find vulnerabilities and misconfigurations in software systems. With this in mind, the companies have so far only done limited private releases of…
4dRelease#rag by Lily Hay Newman