$ timeahead_
all sourcesAhead of AI (Sebastian Raschka)Anthropic NewsApple Machine Learning ResearchArs Technica AIAWS Machine Learning BlogCerebras BlogCohere BlogCrewAI BlogDeepSeek BlogDistill.pubfast.ai BlogFireworks AI BlogGoogle AI BlogGoogle Cloud AI BlogGoogle DeepMind BlogGroq BlogHaystack (deepset) BlogHugging Face BlogImport AI (Jack Clark)LangChain BlogLangFuse BlogLil'Log (Lilian Weng)LlamaIndex BlogMeta AI BlogMicrosoft AutoGen BlogMicrosoft Research BlogMistral AI NewsMIT Technology ReviewModal Blogn8n BlogNathan Lambert (RLHF)NVIDIA Developer BlogOllama BlogOpenAI BlogPerplexity AI BlogPyTorch BlogReplicate BlogSimon Willison BlogTensorFlow BlogThe Batch (DeepLearning.AI)The GradientThe Verge AITogether AI BlogVentureBeat AIvLLM BlogWeights & Biases BlogWired AIxAI (Grok) Blog
allapiagentsframeworkshardwareinframodelopen sourcereleaseresearchtutorial
★ TOP STORY[ HF ]Tutorial·3d ago

Gemma 4 VLA Demo on Jetson Orin Nano Super

Gemma 4 VLA Demo on Jetson Orin Nano Super You speak → Parakeet STT → Gemma 4 → [Webcam if needed] → Kokoro TTS → Speaker Press SPACE to record, SPACE again to stop. This is a simple VLA: the model decides on its own whether to act based on the context of what you asked, no keyword triggers, no hardcoded logic. If your question needs Gemma to open her eyes, she'll decide to take a photo, interpret it, and answer you with that context in mind. She's not describing the picture, she's answering your actual question using what she saw. And honestly? It's pretty impressive that this runs on a Jetson Orin Nano. :) Get the code The full script for this tutorial lives on GitHub, in my Google_Gemma repo next to the Gemma 2 demos: 👉 github.com/asierarranz/Google_Gemma Grab…

Hugging Face Blogread →
▲ trending · last 48hview all →
[HF]Hugging Face Blog· 99 articlesvisit →
4d ago
AI and the Future of Cybersecurity: Why Openness Matters
AI and the Future of Cybersecurity: Why Openness Matters What is Mythos? Mythos is a “frontier AI model”, a large language model (LLM) that can be used to process software code (among many other things). This follows a general trend in LLM development, where LLM performance on code-related tasks has recently skyrocketed. What’s particularly significant about Mythos is the system it’s embedded within: It's the system, not the model alone, that has enabled Mythos to rapidly find and patch software vulnerabilities. Understanding this distinction is key to understanding the current landscape of AI cybersecurity. What Mythos demonstrates is that the following system recipe is powerful: - substantial compute power - models trained on troves of software-relevant data - scaffolding built to handle software vulnerability probing and patching - speed (enabled by compute power and the capital behind it) - some…
4dInfra#coding
4d ago
How to Ground a Korean AI Agent in Real Demographics with Synthetic Personas
How to Ground a Korean AI Agent in Real Demographics with Synthetic Personas Nemotron-Personas-Korea fixes this. The dataset provides 6 million fully synthetic personas grounded in official statistics and seed data from the Korean Statistical Information Service (KOSIS), the Supreme Court of Korea, the National Health Insurance Service, and the Korea Rural Economic Institute. NAVER Cloud contributed seed data and domain expertise during design. Every persona is demographically accurate but contains zero personally identifiable information (PII). It’s designed with Korea's Personal Information Protection Act (PIPA) in mind. South Korea is also one of the few countries to publish an official Synthetic Data Generation guide, establishing governance for grounding models with synthetic versions of sensitive data. This dataset follows that approach. In this tutorial, we'll turn a synthetic persona into a deployed Korean agent — from filtering the dataset to inference…
4dTutorial#agents
4d ago
QIMMA قِمّة ⛰: A Quality-First Arabic LLM Leaderboard
QIMMA قِمّة ⛰: A Quality-First Arabic LLM Leaderboard QIMMA validates benchmarks before evaluating models, ensuring reported scores reflect genuine Arabic language capability in LLMs. 🏆 Leaderboard · 🔧 GitHub · 📄 Paper If you've been tracking Arabic LLM evaluation, you've probably noticed a growing tension: the number of benchmarks and leaderboards is expanding rapidly, but are we actually measuring what we think we're measuring? We built QIMMA قمّة (Arabic for "summit"), to answer that question systematically. Instead of aggregating existing Arabic benchmarks as-is and running models on them, we applied a rigorous quality validation pipeline before any evaluation took place. What we found was sobering: even widely-used, well-regarded Arabic benchmarks contain systematic quality issues that can quietly corrupt evaluation results. This post walks through what QIMMA is, how we built it, what problems we found, and what the model rankings…
4dResearch#benchmark
8d ago
Building a Fast Multilingual OCR Model with Synthetic Data
Building a Fast Multilingual OCR Model with Synthetic Data Synthetic data generation offers a way out of these tradeoffs. By rendering text onto images programmatically, we get both the scale of web scraping and the label purity of hand annotation. Every bounding box, transcription, and reading order relationship is known exactly because we placed it there, and we have full control over which layouts, font styles, and edge cases appear in the training set. The challenge is realism. Simulating diverse layouts and realistic document scenarios is difficult, but with the right rendering engine and strong randomization across fonts, colors, backgrounds, augmentations, and layout structures, it is possible to build enough invariance that models trained on synthetic data generalize well to real-world documents. Using this approach, we built Nemotron OCR v2, a multilingual OCR model that is both accurate and fast.…
9d ago
Ecom-RLVE: Adaptive Verifiable Environments for E-Commerce Conversational Agents
Ecom-RLVE: Adaptive Verifiable Environments for E-Commerce Conversational Agents TL;DR — We extend the RLVE framework from single-turn reasoning puzzles to multi-turn, tool-augmented e-commerce conversations. EcomRLVE-GYM provides 8 verifiable environments — product discovery, substitution, cart building, returns, order tracking, policy QA, bundle planning, and multi-intent journeys — each with procedural problem generation, a 12-axis difficulty curriculum, and algorithmically verifiable rewards. We train a Qwen 3 8B model with DAPO over 300 steps and present early results demonstrating that environment scaling and adaptive difficulty transfer to agentic, real-world task completion. This project originated in the Pytorch OpenEnv Hackathon and is still evolving, follow us for updates 🔥 Why RL for shopping agents? Large language models can hold fluent conversations, yet deploying them as shopping assistants reveals a persistent gap: fluency ≠ task completion. A customer who asks "find me a USB-C charger…
9d ago
The PR you would have opened yourself
The PR you would have opened yourself TL;DR We provide a Skill and a test harness to help port language models from transformers to mlx-lm, so they become (almost) instantly available the moment they are added to transformers. The Skill is designed to support contributors and reviewers as an aide, not an automation. We explain why we did it, how, and comment about how to meaningfully contribute to open source in the age of agents. The advent of code agents In 2026, code agents started to actually work. What used to be auto-completion at the side of your editor turned into a system that one-shots reasonable solutions from brief specifications. The generated code usually works out of the box, covers what you asked for, and makes reasonable assumptions about details you didn't specify. This is great. As Jensen Huang puts…
9d ago
Training and Finetuning Multimodal Embedding & Reranker Models with Sentence Transformers
Training and Finetuning Multimodal Embedding & Reranker Models with Sentence Transformers As a practical example, I'll walk through finetuning Qwen/Qwen3-VL-Embedding-2B for Visual Document Retrieval (VDR), the task of retrieving relevant document pages (as images, with charts, tables, and layout intact) for a given text query. The resulting tomaarsen/Qwen3-VL-Embedding-2B-vdr demonstrates how much performance you can gain by finetuning on your own domain. On my evaluation data, the finetuned model achieves an NDCG@10 of 0.947 compared to the base model's 0.888, and outperforms all existing VDR models I tested against, including models up to 4x its size. If you're new to multimodal models in Sentence Transformers, I recommend reading Multimodal Embedding & Reranker Models with Sentence Transformers first. For training text-only embedding, reranker, or sparse embedding models, see the Prior Blogposts section at the end. Table of Contents - Why Finetune? -…
10d ago
Inside VAKRA: Reasoning, Tool Use, and Failure Modes of Agents
Inside VAKRA: Reasoning, Tool Use, and Failure Modes of Agents VAKRA Dataset | LeaderBoard | Release Blog | GitHub | Submit to Leaderboard We recently introduced VAKRA, a tool-grounded, executable benchmark for evaluating how well AI agents reason and act in enterprise-like environments. Unlike traditional benchmarks that test isolated skills, VAKRA measures compositional reasoning across APIs and documents, using full execution traces to assess whether agents can reliably complete multi-step workflows. VAKRA provides an executable environment where agents interact with over 8,000+ locally hosted APIs backed by real databases spanning 62 domains, along with domain-aligned document collections. Tasks can require 3-7 step reasoning chains that combine structured API interaction with unstructured retrieval under natural-language tool-use constraints. As can be seen below, models perform poorly on VAKRA - in this blog, we include additional dataset details about the tasks in VAKRA…
10dAPI
10d ago
Meet HoloTab by HCompany. Your AI browser companion.
Meet HoloTab by HCompany. Your AI browser companion. We built one of the most powerful computer-use AIs in the world. And made it directly accessible from your browser. On March 31st, we released Holo3, our most advanced computer-use model to date. Building something powerful is one thing; making it accessible and easy to use is another. We’re doing both. HoloTab is a Chrome extension that navigates the web just like a person would. It automates tasks across any website with zero setup or technical skills required. You describe what you want, and the agent handles it directly inside your browser, navigating interfaces, filling fields, and making decisions the same way you would. The vision models, the action planning, the interface understanding, all of it is running underneath, working for you, and all you ever see is the result. Routines: Show…
16d ago
Waypoint-1.5: Higher-Fidelity Interactive Worlds for Everyday GPUs
Waypoint-1.5: Higher-Fidelity Interactive Worlds for Everyday GPUs Try it What is Waypoint-1.5? Waypoint-1.5 is Overworld’s next real-time video world model, built to bring interactive generative worlds to the hardware people actually own. The first release of Waypoint showed that real-time generative worlds were possible. It proved that interactive world models could be more than passive video demos, and that locally runnable systems could begin to close the gap between generating a world and actually stepping into one. Waypoint-1.5 builds directly on that foundation. This release improves visual fidelity, expands the range of hardware that can run the model locally, and takes another step toward interactive world simulation without datacenter-scale compute. On desktop hardware including RTX 3090 through 5090, Waypoint-1.5 can generate real-time environments at up to 720p and 60 FPS. This release also introduces a 360p tier designed to run…
16dHardware
16d ago
Multimodal Embedding & Reranker Models with Sentence Transformers
Multimodal Embedding & Reranker Models with Sentence Transformers Multimodal embedding models map inputs from different modalities into a shared embedding space, while multimodal reranker models score the relevance of mixed-modality pairs. This opens up use cases like visual document retrieval, cross-modal search, and multimodal RAG pipelines. If you want to train your own multimodal models, check out the companion blogpost: Training and Finetuning Multimodal Embedding & Reranker Models with Sentence Transformers. Table of Contents - What are Multimodal Models? - Installation - Multimodal Embedding Models - Multimodal Reranker Models - Retrieve and Rerank - Input Formats and Configuration - Supported Models - Additional Resources What are Multimodal Models? Traditional embedding models convert text into fixed-size vectors. Multimodal embedding models extend this by mapping inputs from different modalities (text, images, audio, or video) into a shared embedding space. This means you…
17d ago
Safetensors is Joining the PyTorch Foundation
Safetensors is Joining the PyTorch Foundation How we got here Safetensors started as a Hugging Face project born out of a concrete need: a way to store and share model weights that couldn't execute arbitrary code. The pickle-based formats that dominated the ecosystem at the time meant that there was a very real risk you’d be running malicious code. While this was an acceptable risk when ML was still budding, it would become unacceptable as open model sharing became central to how the ML community works. The format we built is intentionally simple: a JSON header with a hard limit of 100MB, describing tensor metadata, followed by raw tensor data. Zero-copy loading that maps tensors directly from disk. Lazy loading so you can read individual weights without deserializing an entire checkpoint. What we didn't fully anticipate was how broadly it…
17dOpen Source
23d ago
Welcome Gemma 4: Frontier multimodal intelligence on device
Welcome Gemma 4: Frontier multimodal intelligence on device These models are the real deal: truly open with Apache 2 licenses, high quality with pareto frontier arena scores, multimodal including audio, and sizes you can use everywhere including on-device. Gemma 4 builds on advances from previous families and makes them click together. In our tests with pre-release checkpoints we have been impressed by their capabilities, to the extent that we struggled to find good fine-tuning examples because they are so good out of the box. We collaborated with Google and the community to make them available everywhere: transformers, llama.cpp, MLX, WebGPU, Rust; you name it. This blog post will show you how to build with your favorite tools so let us know what you think! Table of Contents - What is New with Gemma 4? - Overview of Capabilities and Architecture…
24d ago
Falcon Perception
Falcon Perception TL;DR — Falcon Perception is a 0.6B-parameter early-fusion Transformer for open-vocabulary grounding and segmentation from natural language prompts. The model processes image patches + text in one sequence using a hybrid attention mask, and produces variable numbers of instances with a small, structured token interface and lightweight output heads. On SA-Co, Falcon Perception reaches 68.0 Macro-F1 (vs. 62.3 for SAM 3) with the main remaining gap being presence calibration (MCC 0.64 vs. 0.82). We also introduce PBench, a diagnostic benchmark that breaks down performance by capability (attributes, OCR-guided disambiguation, spatial constraints, relations) and by dense long-context crowded scenes. We also relase Falcon OCR, a 0.3B-parameter model which reaches a score of 80.3 and 88.6 on the olmOCR benchmark and OmniDocBench respectively, while having the highest throughput of any open source OCR model. This post is a brief, practical…
24dModel
24d ago
Any Custom Frontend with Gradio's Backend
gradio.Server: Any Custom Frontend with Gradio's Backend gr.HTML : building rich, interactive frontends entirely inside Gradio using custom HTML, CSS, and JavaScript. That unlocked a lot. But what if that's not enough? What if you want to build with your own frontend framework entirely like React, Svelte, or even plain HTML/JS, while still benefiting from Gradio's queuing system, API infrastructure, MCP support, and ZeroGPU on Spaces? That's exactly the problem gradio.Server solves. And it changes what's possible with Gradio and Hugging Face Spaces. What We Wanted to Build Text Behind Image : an editor where you upload a photo, the background gets removed using an ML model, and then you place stylized text between the foreground subject and the background. The text appears to sit behind the person or object in the image. This needs: - A drag-and-drop canvas with…
24dTutorial#rag
25d ago
Granite 4.0 3B Vision: Compact Multimodal Intelligence for Enterprise Documents
Granite 4.0 3B Vision: Compact Multimodal Intelligence for Enterprise Documents - Table Extraction: Accurately parsing complex table structures (e.g., multi-row, multi-column, etc.) from document images - Chart Understanding: Converting charts and figures into structured machine-readable formats, summaries, or executable code - Semantic Key-Value Pair (KVP) Extraction: Identifying and grounding semantically meaningful key-value field pairs across diverse document layouts The model ships as a LoRA adapter on top of Granite 4.0 Micro, our dense language model, keeping vision and language modular for text-only fallbacks and seamless integration into mixed pipelines. It continues to support vision-language tasks such as producing detailed natural-language descriptions from images (e.g., “Describe this image in detail”). The model can be used standalone or in tandem with Docling to enhance document processing pipelines with deep visual understanding capabilities. How Granite 4.0 3B Vision Was Built Granite 4.0 3B…
25dInfra#multimodal
25d ago
Training mRNA Language Models Across 25 Species for $165
Training mRNA Language Models Across 25 Species for $165 Part II: Building the Pipeline, From Structure Prediction to Codon Optimization By OpenMed, Open-Source Agentic AI for Healthcare & Life Sciences TL;DR: We built an end-to-end protein AI pipeline covering structure prediction, sequence design, and codon optimization. After comparing multiple transformer architectures for codon-level language modeling, CodonRoBERTa-large-v2 emerged as the clear winner with a perplexity of 4.10 and a Spearman CAI correlation of 0.40, significantly outperforming ModernBERT. We then scaled to 25 species, trained 4 production models in 55 GPU-hours, and built a species-conditioned system that no other open-source project offers. Complete results, architectural decisions, and runnable code below. Contents - What We Built - The Architecture Exploration - The Pipeline - Scaling to Multi-Species - The End-to-End Workflow - Where This Stands and What's Next - References Imagine going from…
25d ago
TRL v1.0: Post-Training Library Built to Move with the Field
TRL v1.0: Post-Training Library Built to Move with the Field TRL now implements more than 75 post-training methods. But coverage isn’t the goal by itself. What matters is making these methods easy to try, compare, and actually use in practice. The design of the library wasn’t decided upfront. It is the result of years of iteration — the first commit goes back more than six years — and it has been shaped by everything the field threw at it: new algorithms, new models, shifting paradigms. Over time, this pressure forced the codebase toward a very specific design. Parts of it might look unusual at first, but like in many evolutionary codebases, they exist for a reason. TRL is built for a field that doesn’t sit still. So the question is not how to design the perfect abstraction. It is how…
25dRelease#training
29d ago
Liberate your OpenClaw
Liberate your OpenClaw 🦀 If you've been cut off and your OpenClaw, Pi, or Open Code agents need resuscitation, you can move them to open models in two ways: - Use an open model served through Hugging Face Inference Providers. - Run a fully local open model on your own hardware. The hosted route is the fastest way back to a capable agent. The local route is the right fit if you want privacy, zero API costs, and full control. To do so, just tell your claude code, your cursor or your favorite agent: help me move my OpenClaw agents to Hugging Face models, and link this page. Hugging Face Inference Providers Hugging Face inference providers is an open platform that routes to providers of open source models. It’s the right choice if you want the best models or you…
32d ago
A New Framework for Evaluating Voice Agents (EVA)
A New Framework for Evaluating Voice Agents (EVA) Introduction Conversational voice agents present a distinct evaluation challenge: they must simultaneously satisfy two objectives — accuracy (completing the user's task correctly and faithfully) and conversational experience (doing so naturally, concisely, and in a way appropriate for spoken interaction). These objectives are deeply intertwined: mishearing a confirmation code renders perfect LLM reasoning meaningless, a wall of options overwhelms a caller who can't skim spoken output, and delayed responses can pass every accuracy check while remaining unusable in practice. Existing frameworks treat these as separate concerns — evaluating task success or conversational dynamics, but not both. We introduce EVA, an end-to-end evaluation framework for conversational voice agents that evaluates complete, multi-turn spoken conversations using a realistic bot-to-bot architecture. EVA produces two high-level scores, EVA-A (Accuracy) and EVA-X (Experience), and is designed to surface…
32dResearch#coding
36d ago
Build a Domain-Specific Embedding Model in Under a Day
Build a Domain-Specific Embedding Model in Under a Day With a single GPU and less than a day of training time, you can transform a general-purpose embedding model into one that truly understands your domain, no manual labeling required. To help you hit the ground running, we are also releasing a ready-to-use synthetic training dataset generated from NVIDIA's public documentation using this exact pipeline. Using this data and the recipe, we saw over 10% improvement in both Recall@10 and NDCG@10. Atlassian applied this recipe to fine-tune on their JIRA dataset, increasing Recall@60 from 0.751 to 0.951, a 26% improvement - on a single GPU. 🔗Quick Links to Dataset and Code: 🧑💻Open Source Projects Recipe Integrates: - NeMo Data Designer for synthetic data generation - NeMo Automodel for embedding model training - BEIR for Information retrieval evaluation - NeMo Export-Deploy for…
39d ago
State of Open Source on Hugging Face: Spring 2026
State of Open Source on Hugging Face: Spring 2026 This post builds on an earlier analysis conducted mid-2025, available here, which examined what the Hugging Face Community is building. We recommend reading additional perspectives on the open source ecosystem in and outside of Hugging Face from the Data Provenance Initiative, Interconnects, OpenRouter and a16z, and MIT and the Linux Foundation. As the Hugging Face ecosystem is distributed, analyses are a combination of Hugging Face and community members' work, each of which is appropriately credited. Activity in the open source AI ecosystem has rapidly grown, with the number of users, model, and dataset repositories all close to doubling. In 2025, Hugging Face grew to 13 million users, more than 2 million public models, and over 500,000 public datasets. This growth signals more than increased interest in open source; it reflects a…
39dOpen Source#open-source
39d ago
Holotron-12B - High Throughput Computer Use Agent
Holotron-12B - High Throughput Computer Use Agent We're thrilled to release Holotron-12B, a multimodal computer-use model from H Company. Post-trained from the open NVIDIA Nemotron-Nano-2 VL model on H Company’s proprietary data mixture, Holotron-12B is the result of a close collaboration between our research labs to engineer a new type of model optimized primarily for scale and performance in production. H Company is part of the NVIDIA Inception Program. The model is now available on Hugging Face. Why We Built Holotron-12B Most multimodal models today optimize primarily for static vision or following instructions. Holotron-12B, just like our Holo2 model, however, has a different goal: serving as a policy model for computer-use agents that must perceive, decide, and act efficiently in interactive environments. With Holotron-12B, we wanted to create a model that could efficiently and effectively scale in production while handling…
46d ago
Introducing Storage Buckets on the Hugging Face Hub
Introducing Storage Buckets on the Hugging Face Hub Storage Buckets are built exactly for this: mutable, S3-like object storage you can browse on the Hub, script from Python, or manage with the hf CLI. And because they are backed by Xet, they are especially efficient for ML artifacts that share content across files. Why we built Buckets Git starts to feel like the wrong abstraction pretty quickly when you're dealing with: - Training clusters writing checkpoints and optimizer states throughout a run - Data pipelines processing raw datasets iteratively - Agents storing traces, memory, and shared knowledge graphs The storage need in all these cases is the same: write fast, overwrite when needed, sync directories, remove stale files, and keep things moving. A Bucket is a non-versioned storage container on the Hub. It lives under a user or organization namespace,…
46dRelease#rag
46d ago
Keep the Tokens Flowing: Lessons from 16 Open-Source RL Libraries
Keep the Tokens Flowing: Lessons from 16 Open-Source RL Libraries TL;DR -- For those of you who don't have time to read 5,000 words about async RL plumbing (we get it, you have models to train): - The problem: In synchronous RL (reinforcement learning) training, data generation (model inference to create data samples) dominates wall-clock time -- a single batch of 32K-token rollouts on a 32B (32-billion parameter) model can take hours, while the GPUs used for training remain idle. - The solution everyone converged on: Disaggregate (separate) inference and training onto different GPU pools, connect them with a rollout buffer (temporary storage for model outputs), and transfer weights asynchronously (without waiting), so neither side waits for the other. - We surveyed 16 open-source libraries that implement this pattern and compared them across 7 axes: orchestration primitives, buffer design, weight…
46dOpen Source#open-source
47d ago
Ulysses Sequence Parallelism: Training with Million-Token Contexts
Ulysses Sequence Parallelism: Training with Million-Token Contexts Ulysses Sequence Parallelism (part of the Arctic Long Sequence Training (ALST) protocol from Snowflake AI Research) provides an elegant solution by distributing the attention computation across multiple GPUs through attention head parallelism. In this post, we'll explore how Ulysses works and how it's been integrated across the Hugging Face ecosystem—from Accelerate to the Transformers Trainer and TRL's SFTTrainer. Contents - The Challenge of Long Sequence Training - How Ulysses Works - Integration with Accelerate - Integration with Transformers Trainer - Integration with TRL's SFTTrainer - Comparing Ulysses and Ring Attention - Best Practices - Benchmarks - Resources The Challenge of Long Sequence Training The attention mechanism in transformers scales quadratically with sequence length. For a sequence of length , standard attention requires FLOPs and memory to compute and store the attention score matrix.…
47d ago
LeRobot v0.5.0: Scaling Every Dimension
LeRobot v0.5.0: Scaling Every Dimension TL;DR LeRobot v0.5.0 adds full Unitree G1 humanoid support (whole-body control models), new policies –including Pi0-FAST autoregressive VLAs and Real-Time Chunking for responsive inference–, and streaming video encoding that eliminates wait times between recording episodes. The release also introduces EnvHub for loading simulation environments from the Hugging Face Hub, NVIDIA IsaacLab-Arena integration, and a major codebase modernization with Python 3.12+, Transformers v5, and third-party policy plugins. Table of Contents - LeRobot v0.5.0: Scaling Every Dimension Hardware: More Robots Than Ever LeRobot v0.5.0 dramatically expands the roster of supported hardware — from arms and mobile robots to a full humanoid. Unitree G1 Humanoid The biggest hardware addition in this release: full Unitree G1 humanoid support. This is LeRobot's first humanoid integration, and it's comprehensive: - Locomotion: Walk, navigate, and move through environments. - Manipulation: Perform dexterous…
47dInfra
51d ago
Bringing Robotics AI to Embedded Platforms: Dataset Recording, VLA Fine‑Tuning, and On‑Device Optimizations
Bringing Robotics AI to Embedded Platforms: Dataset Recording, VLA Fine‑Tuning, and On‑Device Optimizations Authors: Enzo Ruedas, Tess Boivin Recent advances in Large Language Models have enabled the transition from text-only reasoning to multimodal systems. First, with the integration of visual perception in Vision–Language Models (VLMs), and more recently with the generation of robot actions in Vision–Language–Action (VLA) models. Deploying these models on embedded robotic platforms remains a challenge due to tight constraints in terms of compute, memory, and power, as well as real-time control requirements. In synchronous control pipelines, while the VLA is running inference, the arm is idle awaiting commands leading to oscillatory behavior and delayed corrections. To tackle that, asynchronous Inference can enable smooth and continuous motion by dissociating generation from execution. However, to be effective, the end-to-end inference latency must remain shorter than the action execution duration.…
51d ago
Introducing Modular Diffusers - Composable Building Blocks for Diffusion Pipelines
Introducing Modular Diffusers - Composable Building Blocks for Diffusion Pipelines DiffusionPipeline class with a more flexible, composable alternative. In this post, we'll walk through how Modular Diffusers works — from the familiar API to run a modular pipeline, to building fully custom blocks and composing them into your own workflow. We'll also show how it integrates with Mellon, a node-based visual workflow interface that you can use to wire Modular Diffusers blocks together. Table of contents Quickstart Here is a simple example of how to run inference with FLUX.2 Klein 4B using pre-built blocks: import torch from diffusers import ModularPipeline # Create a modular pipeline - this only defines the workflow, model weights have not been loaded yet pipe = ModularPipeline.from_pretrained( "black-forest-labs/FLUX.2-klein-4B" ) # Now load the model weights — configure dtype, quantization, etc in this step pipe.load_components(torch_dtype=torch.bfloat16) pipe.to("cuda") #…
51dRelease
53d ago
PRX Part 3 — Training a Text-to-Image Model in 24h!
PRX Part 3 — Training a Text-to-Image Model in 24h! Introduction Welcome back 👋 In the last two posts (Part 1 and Part 2), we explored a wide range of architectural and training tricks for diffusion models. We tried to evaluate each idea in isolation, measuring throughput, convergence speed, and final image quality, and tried to understand what actually moves the needle. In this post, we want to answer a much more practical question: What happens when we combine all the tricks that worked? Instead of optimizing one dimension at a time, we’ll stack the most promising ingredients together and see how far we can push performance under a strict compute budget. To make things concrete, we’re doing a 24-hour speedrun: - 32 H200 - ~$1500 total compute budget (2$/hour/GPU) This is very far from the early diffusion days, where…
58d ago
Mixture of Experts (MoEs) in Transformers
Mixture of Experts (MoEs) in Transformers Introduction Over the past few years, scaling dense language models has driven most progress in LLMs. From early models like the original ULMFiT (~30M parameters) or GPT-2 (1.5B parameters, which at the time was considered "too dangerous to release" 🧌), and eventually to today’s hundred-billion–parameter systems, the recipe was simple: More data + more parameters gives better performance. Scaling laws reinforced this trend, but dense scaling has practical limits: - Training becomes increasingly expensive. - Inference latency grows. - Deployment requires significant memory and hardware. This is where Mixture of Experts (MoEs) enter the picture. If you're already familiar with MoEs and want to jump straight into the engineering work done in transformers, you can head directly to Transformers and MoEs. From Dense to Sparse: What Are MoEs? A Mixture of Experts model keeps…
64d ago
GGML and llama.cpp join HF to ensure the long-term progress of Local AI
GGML and llama.cpp join HF to ensure the long-term progress of Local AI Georgi Gerganov and team are joining HF with the goal of scaling and supporting the community behind ggml and llama.cpp as Local AI continues to make exponential progress in the coming years. We've been working with Georgi and team for quite some time (we even have awesome core contributors to llama.cpp like Son and Alek in the team already) so this has been a very natural process. llama.cpp is the fundamental building block for local inference, and transformers is the fundamental building block for model definition, so this is basically a match made in heaven. ❤️ What will change for llama.cpp, the open source project and the community? Not much – Georgi and team still dedicate 100% of their time maintaining llama.cpp and have full autonomy and…
64dModel#llama#local
64d ago
Train AI models with Unsloth and Hugging Face Jobs for FREE
Train AI models with Unsloth and Hugging Face Jobs for FREE LiquidAI/LFM2.5-1.2B-Instruct ) through coding agents like Claude Code and Codex. Unsloth provides ~2x faster training and ~60% less VRAM usage compared to standard methods, so training small models can cost just a few dollars. Why a small model? Small language models like LFM2.5-1.2B-Instruct are ideal candidates for fine-tuning. They are cheap to train, fast to iterate on, and increasingly competitive with much larger models on focused tasks. LFM2.5-1.2B-Instruct runs under 1GB of memory and is optimized for on-device deployment, so what you fine-tune can be served on CPUs, phones, and laptops. You will need We are giving away free credits to fine-tune models on Hugging Face Jobs. Join the Unsloth Jobs Explorers organization to claim your free credits and one-month Pro subscription. - A Hugging Face account (required for…
66d ago
IBM and UC Berkeley Diagnose Why Enterprise Agents Fail Using IT-Bench and MAST
IBM and UC Berkeley Diagnose Why Enterprise Agents Fail Using IT-Bench and MAST ITBench HF Space ITBench HF Dataset MAST HF Dataset ITBench Github MAST Github IBM Research and UC Berkeley collaborated to study how agentic LLM systems break in real-world IT automation, for tasks involving incident triage, logs/metrics queries, and Kubernetes actions in long-horizon tool loops. Benchmarks typically reduce performance to a single number, telling you whether an agent failed but never why. To solve this black-box problem, we applied MAST (Multi-Agent System Failure Taxonomy), an emerging practice for diagnosing agentic reliability ). By leveraging MAST to analyze ITBench—the industry benchmark for SRE, Security, and FinOps automation—we turned raw execution traces into structured failure signatures, revealing exactly what broke and how to fix it. We annotated 310 ITBench SRE traces across three distinct model classes: Gemini-3-Flash, Kimi-K2, and GPT-OSS-120B.…
66d ago
One-Shot Any Web App with Gradio's gr.HTML
One-Shot Any Web App with Gradio's gr.HTML gr.HTML now supports custom templates, scoped CSS, and JavaScript interactivity. Which means you can build pretty much any web component — and Claude (or any other frontier LLM) can generate the whole thing in one shot: frontend, backend, and state management, all in a single Python file. We tested this by building different types of apps. Each one is a single Python file, no build step, deployable to Hugging Face Spaces in seconds. Productivity Apps Pomodoro Timer: A focus timer where a pixel-art tree grows as you work. Starts as a seed, sprouts branches, grows leaves. Complete a session and the tree joins your forest. Session tracking, theme switching, break modes — all interactive, all in one file. The tree animation alone would normally require a separate React component. Here it's just CSS…
66dModel#claude
71d ago
Custom Kernels for All from Codex and Claude
Custom Kernels for All from Codex and Claude tl;dr: We built an agent skill that teaches coding agents how to write production CUDA kernels. Then we pointed Claude and Codex at two real targets: a diffusers pipeline and a transformers model. The agents produced working kernels for both, with correct PyTorch bindings and benchmarks, end to end. Writing CUDA kernels is hard. Writing CUDA kernels that correctly integrate with transformers and diffusers is harder. There are architecture-specific memory access patterns, vectorization strategies, warp shuffle reductions, and a dozen integration pitfalls that trip up even experienced developers. It is exactly the kind of specialized, high-stakes problem where agent skills shine. We gave coding agents the domain knowledge they need, like which GPU architecture to target, how to structure a kernel-builder project, when to use shared memory versus registers, and how to…
71dModel#claude
72d ago
OpenEnv in Practice: Evaluating Tool-Using Agents in Real-World Environments
OpenEnv in Practice: Evaluating Tool-Using Agents in Real-World Environments OpenEnv is an open-source framework from Meta and Hugging Face designed to address this challenge by standardizing how agents interact with real environments. As part of this collaboration, Turing contributed a production-grade calendar management environment to study tool-using agents under realistic constraints such as access control, temporal reasoning, and multi-agent coordination. In this post, we explore how OpenEnv works in practice, why calendars serve as a powerful benchmark for real-world agent evaluation, and what our findings reveal about the current limitations of tool-using agents. What Is OpenEnv? OpenEnv is a framework for evaluating AI agents against real systems rather than simulations. It provides a standardized way to connect agents to real tools and workflows while preserving the structure needed for consistent and reliable evaluation. OpenEnv uses a gym-oriented API (reset ,…
75d ago
Transformers.js v4: Now Available on NPM!
Transformers.js v4: Now Available on NPM! npm i @huggingface/transformers Performance & Runtime Improvements The biggest change is undoubtedly the adoption of a new WebGPU Runtime, completely rewritten in C++. We've worked closely with the ONNX Runtime team to thoroughly test this runtime across our ~200 supported model architectures, as well as many new v4-exclusive architectures. In addition to better operator support (for performance, accuracy, and coverage), this new WebGPU runtime allows the same transformers.js code to be used across a wide variety of JavaScript environments, including browsers, server-side runtimes, and desktop applications. That's right, you can now run WebGPU-accelerated models directly in Node, Bun, and Deno! We've proven that it's possible to run state-of-the-art AI models 100% locally in the browser, and now we're focused on performance: making these models run as fast as possible, even in resource-constrained environments. This…
75dHardware#rag#coding
79d ago
Introducing SyGra Studio
Introducing SyGra Studio What Studio lets you do - Configure and validate models with guided forms (OpenAI, Azure OpenAI, Ollama, Vertex, Bedrock, vLLM, custom endpoints). - Connect Hugging Face, file-system, or ServiceNow data sources and preview rows before execution. - Configure nodes by selecting models, writing prompts (with auto-suggested variables), and defining outputs or structured schemas. - Design downstream outputs using shared state variables and Pydantic-powered mappings. - Execute flows end-to-end and review generated results instantly with node-level progress. - Debug with inline logs, breakpoints, Monaco-backed code editors, and auto-saved drafts. - Monitor per-run token cost, latency, and guardrail outcomes with execution history stored in .executions/ . Let’s walk through this experience step by step. Step 1: Configure the data source Open Studio, click Create Flow, and Start/End nodes appear automatically. Before adding anything else: - Choose a connector (Hugging…
79dRelease
80d ago
Community Evals: Because we're done trusting black-box leaderboards over the community
Community Evals: Because we're done trusting black-box leaderboards over the community TL;DR: Benchmark datasets on Hugging Face can now host leaderboards. Models store their own eval scores. Everything links together. The community can submit results via PR. Verified badges prove that the results can be reproduced. Evaluation is broken Let's be real about where we are with evals in 2026. MMLU is saturated above 91%. GSM8K hit 94%+. HumanEval is conquered. Yet some models that ace benchmarks still can't reliably browse the web, write production code, or handle multi-step tasks without hallucinating, based on usage reports. There is a clear gap between benchmark scores and real-world performance. Furthermore, there is another gap within reported benchmark scores. Multiple sources report different results. From Model Cards, to papers, to evaluation platforms, there is no alignment in reported scores. The result is that…
80dResearch#benchmark
81d ago
H Company's new Holo2 model takes the lead in UI Localization
H Company's new Holo2 model takes the lead in UI Localization Two months since releasing our first batch of Holo2 models, H Company is back with our largest UI localization model yet: Holo2-235B-A22B Preview. This model achieves a new State-of-the-Art (SOTA) record of 78.5% on Screenspot-Pro and 79.0% on OSWorld G. Available on Hugging Face, Holo2-235B-A22B Preview is a research release focused on UI element localization. Agentic Localization High-resolution 4K interfaces are challenging for localization models. Small UI elements can be difficult to pinpoint on a large display. With agentic localization, however, Holo2 can iteratively refine its predictions, improving accuracy with each step and unlocking 10-20% relative gains across all Holo2 model sizes. Holo2-235B-A22B's Performance on ScreenSpot-Pro Holo2-235B-A22B Preview reaches 70.6% accuracy on ScreenSpot-Pro in a single step. In agent mode, it achieves 78.5% within 3 steps, setting a new…
81dResearch#agents
81d ago
The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+
The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+ This is the third and final blog in a three-part series on China's open source community's historical advancements since January 2025's "DeepSeek Moment." The first blog on strategic changes and open artifact growth is available here, and the second blog on architectural and hardware shifts is available here. In this third article, we examine paths and trajectories of prominent Chinese AI organizations, and posit future directions for open source. For AI researchers and developers contributing to and relying on the open source ecosystem and for policymakers understanding the rapidly changing environment, due to intraorganizational and global community gains, open source is the dominant and popular approach for Chinese AI organizations for the near future. Openly sharing artifacts from models to papers to deployment infrastructure maps to a strategy…
81dOpen Source#open-source
81d ago
Training Design for Text-to-Image Models: Lessons from Ablations
Training Design for Text-to-Image Models: Lessons from Ablations Welcome back! This is the second part of our series on training efficient text-to-image models from scratch. In the first post of this series, we introduced our goal: training a competitive text-to-image foundation model entirely from scratch, in the open, and at scale. We focused primarily on architectural choices and motivated the core design decisions behind our model PRX. We also released an early, small (1.2B parameters) version of the model as a preview of what we are building (go try it if you haven't already 😉). In this post, we shift our focus from architecture to training. The goal is to document what actually moved the needle for us when trying to make models train faster, converge more reliably, and learn better representations. The field is moving quickly and the list…
81dTutorial#training
86d ago
Introducing Daggr: Chain apps programmatically, inspect visually
Introducing Daggr: Chain apps programmatically, inspect visually Table of Contents - Background - Getting Started - Sharing Your Workflows - End-to-End Example with Different Nodes - Next Steps Background If you've built AI applications that combine multiple models or processing steps, you know the pain: chaining API calls, debugging pipelines, and losing track of intermediate results. When something goes wrong in step 5 of a 10-step workflow, you often have to re-run everything just to see what happened. Most developers either build fragile scripts that are hard to debug or turn to heavy orchestration platforms designed for production pipelines—not rapid experimentation. We've been working on Daggr to solve problems we kept running into when building AI demos and workflows: Visualize your code flow: Unlike node-based GUI editors, where you drag and connect nodes visually, Daggr takes a code-first approach. You…
86dRelease
87d ago
We Got Claude to Build CUDA Kernels and teach open models!
We got Claude to teach open models how to write CUDA kernels! - You can take Opus 4.5 or other SOTA models and tackle the hardest problems out there. - You can take models that run on your laptop and upskill them to harder problems. In this blog post, we’ll show you how to take on the latter. This blog post walks through the process of using a new tool, upskill , to generate and evaluate agent skills with large models and use them with smaller models. We will benchmark upskill on the task of writing CUDA kernels for diffusers models, but the process is generally useful for cutting costs, or using smaller models on hard and domain-specific problems. What are agent skills? In case you missed it, agent skills are taking the coding agent game by storm. In fact,…
87dHardware#claude#gpu
88d ago
Architectural Choices in China's Open-Source AI Ecosystem: Building Beyond DeepSeek
Architectural Choices in China's Open-Source AI Ecosystem: Building Beyond DeepSeek This is the second blog in a three-part series on China's open source community's historical advancements since January 2025's "DeepSeek Moment." The first blog is available here, and the third blog is available here. In this second piece we turn our focus from models to the architectural and hardware choices Chinese companies have made as openness becomes the norm. For AI researchers and developers contributing to and relying on the open source ecosystem and for policymakers understanding the rapidly changing environment, architectural preferences, modality diversification, license permissiveness, small model popularity, and growing adoption of Chinese hardware point to leadership strategies across a multitude of paths. DeepSeek R1's own characteristics inspired overlap and competition, and contributed to heavier focus on domestic hardware in China. Mixture of Experts (MoE) as the Default…
88dOpen Source#open-source
88d ago
Alyah ⭐️: Toward Robust Evaluation of Emirati Dialect Capabilities in Arabic LLMs
Alyah ⭐️: Toward Robust Evaluation of Emirati Dialect Capabilities in Arabic LLMs Arabic is one of the most widely spoken languages in the world, with hundreds of millions of speakers across more than twenty countries. Despite this global reach, Arabic is not a monolithic language. Modern Standard Arabic coexists with a rich landscape of regional dialects that differ significantly in vocabulary, syntax, phonology, and cultural grounding. These dialects are the primary medium of daily communication, oral storytelling, poetry, and social interaction. However, most existing benchmarks for Arabic large language models focus almost exclusively on Modern Standard Arabic, leaving dialectal Arabic largely under-evaluated and under-represented. This gap is particularly problematic as large language models increasingly interact with users in informal, culturally grounded, and conversational settings. A model that performs well on formal newswire text may still fail to understand a greeting,…
88dResearch
88d ago
Unlocking Agentic RL Training for GPT-OSS: A Practical Retrospective
Unlocking Agentic RL Training for GPT-OSS: A Practical Retrospective LinkedIn is an AI-first company that's built agents to help professionals be more successful. In this setting, models must reason over incomplete information, interact with structured services, and adapt to evolving user intent across multiple steps rather than produce a single static response. These capabilities are especially critical for agents that support the goals of recruiters, job and knowledge seekers, and learners end users, such as retrieving information, refining queries, coordinating tools, and executing multi-step workflows. By learning robust decision policies through interaction, agentic RL provides a principled foundation for building scalable, reliable, and adaptable AI systems through end-to-end optimization. The GPT-OSS model has shown comparable performance to OpenAI o3-mini and o4-mini [ref], but its suitability for agentic reinforcement learning training has not yet been validated. Most recent work focuses on…
94d ago
AssetOpsBench: Bridging the Gap Between AI Agent Benchmarks and Industrial Reality
AssetOpsBench: Bridging the Gap Between AI Agent Benchmarks and Industrial Reality Introduction While existing AI benchmarks excel at isolated tasks such as coding or web navigation, they often fail to capture the complexity of real-world industrial operations. To bridge this gap, we introduce AssetOpsBench, a framework specifically designed to evaluate agent performance across six critical dimensions of industrial applications. Unlike traditional benchmarks, AssetOpsBench emphasizes the need for multi-agent coordination—moving beyond `lone wolf' models to systems that can handle complex failure modes, integrate multiple data streams, and manage intricate work orders. By focusing on these high-stakes, multi-agent dynamics, the benchmark ensures that AI agents are assessed on their ability to navigate the nuances and safety-critical demands of a true industrial environment. AssetOpsBench is built for asset operations such as chillers and air handling units. It comprises: - 2.3M sensor telemetry points…
95d ago
One Year Since the “DeepSeek Moment”
One Year Since the “DeepSeek Moment” The first blog addresses strategic changes and the explosion of new open models and open source players. The second covers architectural and hardware choices largely by Chinese companies made in the wake of a growing open ecosystem, available here. The third analyzes prominent organizations’ trajectories and the future of the global open source ecosystem, available here. For AI researchers and developers contributing to and relying on the open source ecosystem and for policymakers understanding the rapidly changing environment, there has never been a better time to build and release open models and artifacts, as proven by the past year’s immense growth catalyzed by DeepSeek. Notably, geopolitics has driven adoption; while models developed in China have been dominating across metrics throughout 2025 and new players leapfrogging each other, Western AI communities are seeking commercially deployable…
95dModel
95d ago
Differential Transformer V2
Differential Transformer V2 Notion Link (for better readability) Code We compare DIFF V2 with DIFF V1 below: (For simplicity, we omit the batch dimension and assume that both the input and output of the following flash_attn_func are three-dimensional tensors (tokens, heads, head dimension) . Heads belonging to the same GQA group are arranged contiguously in the output) Note DIFF V2 subtracts two heads that are in the same GQA group, which means they share the same key and value. This is crucial to performance. See design ablations section and Github code. def DiffAttnV1( layer_index, q1, q2, k1, k2, v, lam_q1, lam_k1, lam_q2, lam_k2, ): """ q1, q2: (N, h/2, d) k1, k2: (N, h_kv/2, d) v: (N, h_kv/2, 2d) lam_*: (d,) """ attn1 = flash_attn_func(q1, k1, v) attn2 = flash_attn_func(q2, k2, v) lam_init = 0.8 - 0.6 * \ exp(-0.3…
95dHardware#coding
95d ago
Introducing Waypoint-1: Real-time interactive video diffusion from Overworld
Waypoint-1: Real-time Interactive Video Diffusion from Overworld Waypoint-1 Weights on the Hub - Waypoint-1-Small - Waypoint-1-Medium (Coming Soon!) Try Out The Model Overworld Stream: https://overworld.stream What is Waypoint-1? Waypoint-1 is Overworld’s real-time-interactive video diffusion model, controllable and prompted via text, mouse, and keyboard. You can give the model some frames, run the model, and have it create a world you can step into and interact with. The backbone of the model is a frame-causal rectified flow transformer trained on 10,000 hours of diverse video game footage paired with control inputs and text captions. Waypoint-1 is a latent model, meaning that it is trained on compressed frames. The standard among existing world models has become taking pre-trained video models and fine-tuning them with brief and simplified control inputs. In contrast, Waypoint-1 is trained from the get-go with a focus on interactive…
95dRelease#multimodal
100d ago
Open Responses: What you need to know
Open Responses: What you need to know The era of the chatbot is long gone, and agents dominate inference workloads. Developers are shifting toward autonomous systems that reason, plan, and act over long-time horizons. Despite this shift, much of the ecosystem still uses the Chat Completion format, which was designed for turn-based conversations and falls short for agentic use cases. The Responses format was designed to address these limitations, but it is closed and not as widely adopted. The Chat Completion format is still the de facto standard despite the alternatives. This mismatch between the agentic workflow requirements and entrenched interfaces motivates the need for an open inference standard. Over the coming months, we will collaborate with the community and inference providers to implement and adapt Open Responses to a shared format, practically capable of replacing chat completions. Open Responses…
110d ago
NVIDIA Cosmos Reason 2 Brings Advanced Reasoning To Physical AI
NVIDIA Cosmos Reason 2 Brings Advanced Reasoning To Physical AI NVIDIA today released Cosmos Reason 2, the latest advancement in open, reasoning vision language models for physical AI. Cosmos Reason 2 surpasses its previous version in accuracy and tops the Physical AI Bench and Physical Reasoning leaderboards as the #1 open model for visual understanding. NVIDIA Cosmos Reason 2: Reasoning Vision Language Model for Physical AI Since their introduction, vision-language models have rapidly improved at tasks like object and pattern recognition in images. But they still struggle with tasks humans find natural, like planning several steps ahead, dealing with uncertainty or adapting to new situations. Cosmos Reason is designed to close this gap by giving robots and AI agents stronger common sense and reasoning to solve complex problems step by step. Cosmos Reason 2 is a state-of-the-art, open reasoning vision-language…
110d ago
Introducing Falcon-H1-Arabic: Pushing the Boundaries of Arabic Language AI with Hybrid Architecture
Introducing Falcon-H1-Arabic: Pushing the Boundaries of Arabic Language AI with Hybrid Architecture Discover more in our official blogpost, featuring an interactive experience The journey of building world-class Arabic language models has been one of continuous learning and iteration. Today, we're excited to announce Falcon-H1-Arabic, our most advanced Arabic language model family to date, representing a significant leap forward in both architecture and capabilities. This release embodies months of research, community feedback, and technical innovation, culminating in three powerful models that set new standards for Arabic natural language processing. Building on Success: The Evolution from Falcon-Arabic When we launched Falcon-Arabic a few months ago, the response from the community was both humbling and enlightening. Developers, researchers and students across the Arab world used the model for real use cases, pushing them to its limits and providing invaluable feedback. We learned where…
110dModel
110d ago
NVIDIA brings agents to life with DGX Spark and Reachy Mini
NVIDIA brings agents to life with DGX Spark and Reachy Mini Today at CES 2026, NVIDIA unveiled a world of new open models to enable the future of agents, online and in the real world. From the recently released NVIDIA Nemotron reasoning LLMs to the new NVIDIA Isaac GR00T N1.6 open reasoning VLA and NVIDIA Cosmos world foundation models, all the building blocks are here today for AI Builders to build their own agents. But what if you could bring your own agent to life, right at your desk? An AI buddy that can be useful to you and process your data privately? In the CES keynote today, Jensen Huang showed us how we can do exactly that, using the processing power of NVIDIA DGX Spark with Reachy Mini to create your own little office R2D2 you can talk to…
110dAgents#agents#gpu
123d ago
AprielGuard: A Guardrail for Safety and Adversarial Robustness in Modern LLM Systems
AprielGuard: A Guardrail for Safety and Adversarial Robustness in Modern LLM Systems In this work, we introduce AprielGuard, an 8B parameter safety–security safeguard model designed to detect: - 16 categories of safety risks, spanning toxicity, hate, sexual content, misinformation, self-harm, illegal activities, and more. - Wide range of adversarial attacks, including prompt injection, jailbreaks, chain-of-thought corruption, context hijacking, memory poisoning, and multi-agent exploit sequences. - Safety violations and adversarial attacks in agentic workflows, including tool calls and model reasoning traces. AprielGuard is available in both reasoning and non-reasoning modes, enabling explainable classification when needed and low-latency classification for production pipelines. - Model: https://huggingface.co/ServiceNow-AI/AprielGuard - Technical Paper: https://arxiv.org/abs/2512.20293 Table of Contents - Motivation - AprielGuard Overview - Taxonomy - Training Dataset - Model Architecture - Training Setup - Evaluation - Conclusion - Limitations Motivation Traditional safety classifiers primarily focus on a…
128d ago
Tokenization in Transformers v5: Simpler, Clearer, and More Modular
Tokenization in Transformers v5: Simpler, Clearer, and More Modular Transformers v5 redesigns how tokenizers work. The big tokenizers reformat separates tokenizer design from trained vocabulary (much like how PyTorch separates neural network architecture from learned weights). The result is tokenizers you can inspect, customize, and train from scratch with far less friction. TL;DR: This blog explains how tokenization works in Transformers and why v5 is a major redesign, with clearer internals, a clean class hierarchy, and a single fast backend. It’s a practical guide for anyone who wants to understand, customize, or train model-specific tokenizers instead of treating them as black boxes. Table of Contents - What is Tokenization? - The Tokenization Pipeline - Tokenization Algorithms - Accessing tokenizers throughtransformers - The Tokenizer Class Hierarchy in transformers AutoTokenizer Automatically Selects the Correct Tokenizer Class- v5 Separates Tokenizer Architecture from Trained…
128dTutorial
129d ago
The Open Evaluation Standard: Benchmarking NVIDIA Nemotron 3 Nano with NeMo Evaluator
The Open Evaluation Standard: Benchmarking NVIDIA Nemotron 3 Nano with NeMo Evaluator NVIDIA released Nemotron 3 Nano 30B A3B with an explicitly open evaluation approach to make that distinction clear. Alongside the model card, we are publishing the complete evaluation recipe used to generate the results, built with the NVIDIA NeMo Evaluator library, so anyone can rerun the evaluation pipeline, inspect the artifacts, and analyze the outcomes independently. We believe that open innovation is the foundation of AI progress. This level of transparency matters because most model evaluations omit critical details. Configs, prompts, harness versions, runtime settings, and logs are often missing or underspecified, and even small differences in these parameters can materially change results. Without a complete recipe, it’s nearly impossible to tell whether a model is genuinely more intelligent or simply optimized for a benchmark. This blog shows…
129dResearch#benchmark#gpu
131d ago
CUGA on Hugging Face: Democratizing Configurable AI Agents
CUGA on Hugging Face: Democratizing Configurable AI Agents Introduction AI agents are rapidly becoming essential for building intelligent applications, but creating robust, adaptable agents that scale across domains remains a challenge. Many existing frameworks struggle with brittleness, tool misuse, and failures when faced with complex workflows.CUGA (Configurable Generalist Agent) was designed to overcome these limitations. It's an open-source, AI Agent that combines flexibility, reliability, and ease of use for enterprise use cases. By abstracting orchestration complexity, CUGA empowers developers to focus on domain requirements rather than the internals of agent building. And now, with its integration into 🚀Hugging Face Spaces🚀, experimenting with CUGA and open models has never been easier. What is CUGA?CUGA is a configurable, general-purpose AI agent that supports complex, multi-step tasks across web and API environments. It has achieved state-of-the-art performance on leading benchmarks: 🥇 #1 on…
135d ago
New in llama.cpp: Model Management
New in llama.cpp: Model Management Reminder: llama.cpp server is a lightweight, OpenAI-compatible HTTP server for running LLMs locally. This feature was a popular request to bring Ollama-style model management to llama.cpp. It uses a multi-process architecture where each model runs in its own process, so if one model crashes, others remain unaffected. Quick Start Start the server in router mode by not specifying a model: llama-server This auto-discovers models from your llama.cpp cache (LLAMA_CACHE or ~/.cache/llama.cpp ). If you've previously downloaded models via llama-server -hf user/model , they'll be available automatically. You can also point to a local directory of GGUF files: llama-server --models-dir ./my-models Features - Auto-discovery: Scans your llama.cpp cache (default) or a custom --models-dir folder for GGUF files - On-demand loading: Models load automatically when first requested - LRU eviction: When you hit --models-max (default: 4), the…
135dModel#llama
135d ago
Codex is Open Sourcing AI models
Codex is Open Sourcing AI models Building on our work to get Claude Code to train open source models, we are now getting Codex to go further. We gave Codex access to the Hugging Face Skills repository, which contains skills for Machine Learning and AI tasks such as training or evaluating models. With HF skills, a coding agent can: - Fine-tune and apply RL alignment on language models - Review, explain, and act on live training metrics from Trackio - Evaluate checkpoints and act on evaluation results - Create reports from experiments - Export to and quantize models with GGUF for local deployment - Publish models to the Hub This tutorial dives even deeper and shows you how it works and how to use it yourself. So let's get started. Codex uses AGENTS.md files to accomplish specialized tasks, whilst Claude…
141d ago
Introducing swift-huggingface: The Complete Swift Client for Hugging Face
Introducing swift-huggingface: The Complete Swift Client for Hugging Face You can start using it today as a standalone package, and it will soon integrate into swift-transformers as a replacement for its current HubApi implementation. The Problem When we released swift-transformers 1.0 earlier this year, we heard loud and clear from the community: - Downloads were slow and unreliable. Large model files (often several gigabytes) would fail partway through with no way to resume. Developers resorted to manually downloading models and bundling them with their apps — defeating the purpose of dynamic model loading. - No shared cache with the Python ecosystem. The Python transformers library stores models in~/.cache/huggingface/hub . Swift apps downloaded to a different location with a different structure. If you'd already downloaded a model using the Python CLI, you'd download it again for your Swift app. - Authentication…
141dRelease
142d ago
DeepMath: A lightweight math reasoning Agent with smolagents
DeepMath: A lightweight math reasoning Agent with smolagents By Intel AI Software Group DeepMath is an aligned math reasoning agent built on Qwen3-4B Thinking and fine-tuned with GRPO (Group Relative Policy Optimization). Instead of verbose text, the model emits tiny Python snippets for intermediate steps, runs them in a secure sandbox, and folds the results back into its reasoning, reducing errors and output length. The agent is implemented using the smolagents library. We evaluate DeepMath on four math datasets: MATH500, AIME, HMMT, and HLE, and show that: 🤖 The math agent alone reduces output lengths by up to 66%, while often improving accuracy. ⚡ GRPO training improves the agent performance even further, in almost all benchmarks. 👉 Code and evaluation scripts: https://github.com/IntelLabs/DeepMath 👉 Model: https://huggingface.co/Intel/deepmath-v1 Why DeepMath? Large language models (LLMs) have advanced reasoning capabilities, but mathematical problem-solving remains challenging;…
142dAgents#agents
142d ago
We Got Claude to Fine-Tune an Open Source LLM
We Got Claude to Fine-Tune an Open Source LLM We gave Claude the ability to fine-tune language models using a new tool called Hugging Face Skills. Not just write training scripts, but to actually submit jobs to cloud GPUs, monitor progress, and push finished models to the Hugging Face Hub. This tutorial shows you how it works and how to use it yourself. Claude Code can use "skills"—packaged instructions, scripts, and domain knowledge—to accomplish specialized tasks. The hf-llm-trainer skill teaches Claude everything it needs to know about training: which GPU to pick for your model size, how to configure Hub authentication, when to use LoRA versus full fine-tuning, and how to handle the dozens of other decisions that go into a successful training run. With this skill, you can tell Claude things like: Fine-tune Qwen3-0.6B on the dataset open-r1/codeforces-cots And…
145d ago
Transformers v5: Simple model definitions powering the AI ecosystem
Transformers v5: Simple model definitions powering the AI ecosystem Today, as we launch v5, Transformers is installed more than 3 million times each day via pip - up from 20,000/day in v4 🤯. Altogether, it has now surpassed 1.2 billion installs! The ecosystem has expanded from 40 model architectures in v4 to over 400 today, and the community has contributed more than 750,000 model checkpoints on the Hub compatible with Transformers, up from roughly 1,000 at the time of v4. This growth is powered by the evolution of the field and the now mainstream access to AI. As a leading model-definition library in the ecosystem, we need to continuously evolve and adapt the library to continue being relevant. Reinvention is key for longevity in AI. We’re fortunate to collaborate with many libraries and apps built on transformers, in no specific…
145dOpen Source
151d ago
Diffusers welcomes FLUX-2
Welcome FLUX.2 - BFL’s new open image generation model 🤗 🚨 FLUX.2 is not meant to be a drop-in replacement of FLUX.1, but a new image generation and editing model. Table of contents FLUX.2: A Brief Introduction FLUX.2 can be used for both image-guided and text-guided image generation. Furthermore, it can take multiple images as reference inputs, while producing the final output image. Below, we briefly discuss the key changes introduced in FLUX.2. Text encoder First, instead of two text encoders as in Flux.1, it uses a single text encoder — Mistral Small 3.1. Using a single text encoder greatly simplifies the process of computing prompt embeddings. The pipeline allows for a max_sequence_length of 512. Instead of using a single-layer output for the prompt embedding, FLUX.2 stacks outputs from intermediate layers, which have been known to be more beneficial. DiT…
151d ago
Continuous batching from first principles
Continuous batching TL;DR: in this blog post, starting from attention mechanisms and KV caching, we derive continuous batching by optimizing for throughput. If you've ever used Qwen, Claude, or any other AI chatbot, you've probably noticed something: it takes a while for the first word of the response to appear, and then words appear one-by-one on your screen with (hopefully) a regular and fast-paced frequency. That's because at the heart of it, all LLMs are just fancy next token predictors. An LLM first processes your entire prompt to produce one new token. Then it keeps adding tokens one by one, each time reading everything that came before, until it decides generation is over. This generation process is computationally expensive: it requires passing the input through billions of parameters for each token generated. To make these models practical for real-world applications,…
152d ago
Building Deep Research: How we Achieved State of the Art
Building Deep Research: How we Achieved State of the Art Building for the Future Agent Harness The task of building an agent harness is to create a software layer that enhances a model’s runtime execution through context management, tool invocations, loop control, orchestration, and error handling. Building applications on top of rapidly improving models is, however, a modern engineering challenge. How can we design software today that absorbs the performance gains from future model releases? This requires forecasting how models will evolve, staying optimistic about their progress, limiting assumptions, and avoiding hand-crafted optimizations. We learned this the hard way seven months ago, when we had to abandon our first attempt at deep research and rebuild the entire system from scratch. The first architecture was complicated and sophisticated (we thought this was a good thing), but its assumptions became bottlenecks when…
152dResearch
152d ago
OVHcloud on Hugging Face Inference Providers 🔥
OVHcloud on Hugging Face Inference Providers 🔥 We're thrilled to share that OVHcloud is now a supported Inference Provider on the Hugging Face Hub! OVHcloud joins our growing ecosystem, enhancing the breadth and capabilities of serverless inference directly on the Hub's model pages. Inference Providers are also seamlessly integrated into our client SDKs (for both JS and Python), making it super easy to use a wide variety of models with your preferred providers. This launch makes it easier than ever to access popular open-weight models like gpt-oss, Qwen3, DeepSeek R1, and Llama — right from Hugging Face. You can browse OVHcloud's org on the Hub at https://huggingface.co/ovhcloud and try trending supported models at https://huggingface.co/models?inference_provider=ovhcloud&sort=trending. OVHcloud AI Endpoints are a fully managed, serverless service that provides access to frontier AI models from leading research labs via simple API calls. The service…
155d ago
20x Faster TRL Fine-tuning with RapidFire AI
20x Faster TRL Fine-tuning with RapidFire AI Why this matters When fine-tuning or post-training LLMs, teams often do not have the time and/or budget to compare multiple configs even though that can significantly boost eval metrics. RapidFire AI lets you launch multiple TRL configs concurrently--even on a single GPU--and compare them in near real time via a new adaptive, chunk-based scheduling and execution scheme. In internal benchmarks referenced in the TRL page, this delivers ~16–24× higher experimentation throughput than sequentially comparing configs one after another, enabling you to reach much better metrics much faster. RapidFire AI establishes live three-way communication between your IDE, a metrics dashboard, and a multi-GPU execution backend What you get, out of the box Drop-in TRL wrappers — Use RFSFTConfig ,RFDPOConfig , andRFGRPOConfig as near-zero-code replacements for TRL's SFT/DPO/GRPO configs.Adaptive chunk-based concurrent training — RapidFire AI…
155dModel#fine-tuning
155d ago
Open ASR Leaderboard: Trends and Insights with New Multilingual & Long-Form Tracks
Open ASR Leaderboard: Trends and Insights with New Multilingual & Long-Form Tracks Most benchmarks focus on short-form English transcription (<30s), and overlook other important tasks, such as (1) multilingual performance and (2) model throughput, which can a be deciding factor for long-form audio like meetings and podcasts. Over the past two years, the Open ASR Leaderboard has become a standard for comparing open and closed-source models on both accuracy and efficiency. Recently, multilingual and long-form transcription tracks have been added to the leaderboard 🎉 TL;DR - Open ASR Leaderboard - 📝 New preprint on ASR trends from the leaderboard: https://hf.co/papers/2510.06961 - 🧠 Best accuracy: Conformer encoder + LLM decoders (open-source ftw 🥳) - ⚡ Fastest: CTC / TDT decoders - 🌍 Multilingual: Comes at the cost of single-language performance - ⌛ Long-form: Closed-source systems still lead (for now 😉) -…
155dResearch#benchmark
156d ago
Introducing AnyLanguageModel: One API for Local and Remote LLMs on Apple Platforms
Introducing AnyLanguageModel: One API for Local and Remote LLMs on Apple Platforms Developers building AI-powered apps typically take a hybrid approach, adopting some combination of: - Local models using Core ML or MLX for privacy and offline capability - Cloud providers like OpenAI or Anthropic for frontier capabilities - Apple's Foundation Models as a system-level fallback Each comes with different APIs, different requirements, different integration patterns. It's a lot, and it adds up quickly. When I interviewed developers about building AI-powered apps, friction with model integration came up immediately. One developer put it bluntly: I thought I'd quickly use the demo for a test and maybe a quick and dirty build but instead wasted so much time. Drove me nuts. The cost to experiment is high, which discourages developers from discovering that local, open-source models might actually work great for…
156dRelease#local
157d ago
Apriel-H1: The Surprising Key to Distilling Efficient Reasoning Models
Apriel-H1: The Surprising Key to Distilling Efficient Reasoning Models When MiniMax published their M2 post-mortem in October explaining why they abandoned efficient attention at 230B scale, the narrative briefly became "efficient attention is dead." Within days, Kimi Linear proved otherwise. The real lesson: it depends on your constraints. Our constraint was simple: we had a strong 15B reasoning model and needed to make it efficient without starting over. No infinite compute for 20T-token pretraining. No luxury of architectural co-design from day one. Just a practical question: can you retrofit efficiency into an existing model through distillation? Spoilers: yes, but only if you ignore your intuition about what data to use. What We Built The Apriel-H1 family: seven checkpoints spanning 25-40 Mamba layers (out of 50 total), showing the complete efficiency-quality frontier. Our flagship Apriel-H1-15b-Thinker-SFT achieves 2.1x throughput with minimal quality…
159d ago
Easily Build and Share ROCm Kernels with Hugging Face
Easily Build and Share ROCm Kernels with Hugging Face Intoduction Custom kernels are the backbone of high-performance deep learning, enabling GPU operations tailored precisely to your workload; whether that’s image processing, tensor transformations, or other compute-heavy tasks. But compiling these kernels for the right architectures, wiring all the build flags, and integrating them cleanly into PyTorch extensions can quickly become a mess of CMake/Nix, compiler errors, and ABI issues, which is not fun. Hugging Face’s kernels library makes it easy to build (with kernel-builder) and share these kernels with the kernels-community, with support for multiple GPU and accelerator backends, including CUDA, ROCm, Metal, and XPU. This ensures your kernels are fast, portable, and seamlessly integrated with PyTorch. In this guide, we focus exclusively on ROCm-compatible kernels and show how to build, test, and share them using kernels. You’ll learn how…
159dTutorial#gpu
163d ago
Join the AMD Open Robotics Hackathon
Join the AMD Open Robotics Hackathon Looking to show off your robotics aptitude? The AMD Open Robotics Hackathon hosted by AMD, Hugging Face, and Data Monsters is the place to do it. Whether you’re a student, hobbyist, startup founder, or seasoned engineer, this event brings together makers, coders, and roboticists for a fast-paced, hands-on competition that turns bold ideas into functioning demos. The first of two in-person hackathons will take place from December 5-7, 2025 in Tokyo Japan. Our next stop will be in Paris France from December 12-14, 2025. Preparing for the Hackathon: Form a team of up to four roboticists (ages 18+) to take on two missions over the course of 3 days. Mission 1 — An instructor-led exploration and preparation session. Learn how to set up the LeRobot development environment using AMD AI solutions Mission 2 —…
163dTutorial#fine-tuning
163d ago
Building for an Open Future - our new partnership with Google Cloud
Building for an Open Future - our new partnership with Google Cloud “Google has made some of the most impactful contributions to open AI, from the OG transformer to the Gemma models. I believe in a future where all companies will build and customize their own AI. With this new strategic partnership, we’re making it easy to do on Google Cloud.” says Jeff Boudier, at Hugging Face. “Hugging Face has been the driving force enabling companies large and small all over the world to access, use and customize now more than 2 million open models, and we’ve been proud to contribute over 1,000 of our models to the community”, says Ryan J. Salva, Senior Director of Product Management at Google Cloud. “Together we will make Google Cloud the best place to build with open models.” A Partnership for Google Cloud…
163dTutorial
177d ago
Aligning to What? Rethinking Agent Generalization in MiniMax M2
Aligning to What? Rethinking Agent Generalization in MiniMax M2 The Real Agent Alignment Problem: Benchmarks or Reality? If you've worked with LLM Agents, you've felt this pain: the same model can feel brilliant in one framework and useless in another. An agent might crush a tool-use leaderboard but fail spectacularly at a simple, real-world task. This gap between benchmark performance and practical usability is one of the biggest challenges in the field. When we designed M2, we knew we had to tackle this problem head-on. This led us to two core, and sometimes conflicting, objectives: - Excel on Open-Source Benchmarks. Benchmarks are essential for measuring "pure" capabilities. A benchmark like BrowseComp, for instance, tests for sophisticated search skills. While users will rarely ask a question as contrived as, "Find the paper where the third letter of the nth author's name…
177dAgents#agents
178d ago
On the Shifting Global Compute Landscape
On the Shifting Global Compute Landscape Summary The status quo of AI chip usage, that was once almost entirely U.S.-based, is changing. China’s immense progress in open-weight AI development is now being met with rapid domestic AI chip development. In the past few months, highly performant open-weight AI models’ inference in China has started to be powered by chips such as Huawei’s Ascend and Cambricon, with some models starting to be trained using domestic chips. There are two large implications for policymakers and AI researchers and developers respectively: U.S. export controls correlates with expedited Chinese chip production, and chip scarcity in China likely incentivized many of the innovations that are open-sourced and shaping global AI development. China’s chip development correlates highly with stronger export controls from the U.S. Under uncertainty of chip access, Chinese companies have innovated with both chip…
178dInfra
178d ago
Building a Healthcare Robot from Simulation to Deployment with NVIDIA Isaac
Building a Healthcare Robot from Simulation to Deployment with NVIDIA Isaac TL;DR A hands-on guide to collecting data, training policies, and deploying autonomous medical robotics workflows on real hardware Table-of-Contents - Building a Healthcare Robot from Simulation to Deployment with NVIDIA Isaac Introduction Simulation has been a cornerstone in medical imaging to address the data gap. However, in healthcare robotics until now, it's often been too slow, siloed, or difficult to translate into real-world systems. NVIDIA Isaac for Healthcare, a developer framework for AI healthcare robotics, enables healthcare robotics developers in solving these challenges via offering integrated data collection, training, and evaluation pipelines that work across both simulation and hardware. Specifically, the Isaac for Healthcare v0.4 release provides healthcare developers with an end-to-end SO - ARM based starter workflow and the bring your own operating room tutorial. The SO-ARM starter…
178dInfra#gpu
179d ago
How to Build a Healthcare Robot from Simulation to Deployment with NVIDIA Isaac for Healthcare
How to Build a Healthcare Robot from Simulation to Deployment with NVIDIA Isaac for Healthcare A hands-on guide to collecting data, training policies, and deploying autonomous medical robotics workflows on real hardware Simulation has been a cornerstone in medical imaging to address the data gap. However, in healthcare robotics until now, it's often been too slow, siloed, or difficult to translate into real-world systems. That’s now changing. With new advances in GPU-accelerated simulation and digital twins, developers can design, test, and validate robotic workflows entirely in virtual environments - reducing prototyping time from months to days, improving model accuracy, and enabling safer, faster innovation before a single device reaches the operating room. That's why NVIDIA introduced Isaac for Healthcare earlier this year, a developer framework for AI healthcare robotics, that enables developers in solving these challenges via integrated data collection,…
179dTutorial#gpu
179d ago
Granite 4.0 Nano: Just how small can you go?
Granite 4.0 Nano: Just how small can you go? Today we are excited to share Granite 4.0 Nano, our smallest models yet, released as part of IBM's Granite 4.0 model family. Designed for the edge and on-device applications, these models demonstrate excellent performance for their size and represent IBM's continued commitment to develop powerful, useful, models that don't require hundreds of billions of parameters to get the job done. Like all Granite 4.0 models, the Nano models are released under an Apache 2.0 license with native architecture support on popular runtimes like vLLM, llama.cpp, and MLX. The models were trained with the same improved training methodologies, pipelines, and over 15T tokens of training data developed for the original Granite 4.0 models. This release includes variants benefiting from the Granite 4.0’s new, efficient hybrid architecture, and like all Granite language models,…
179d ago
Voice Cloning with Consent
Voice Cloning with Consent Realistic voice generation technology has gotten uncannily good in the past few years. In some situations, it’s possible to generate a synthetic voice that sounds almost exactly like the voice of a real person. And today, what once felt like science fiction is reality: Voice cloning. With just a few seconds of recorded speech, anyone’s voice can be made to say almost anything. Voice generation, and in particular the subtask of voice cloning, has notable risks and benefits. The risks of “deepfakes”, such as the cloned voice of former President Biden used in robocalls, can mislead people into thinking that people have said things that they haven’t said. On the other hand, voice cloning can be a powerful beneficial tool, helping people who’ve lost the ability to speak communicate in their own voice again, or assisting…
179d
180d ago
Streaming datasets: 100x More Efficient
Streaming datasets: 100x More Efficient TLDR We boosted load_dataset('dataset', streaming=True) , streaming datasets without downloading them with one line of code!Start training on multi-TB datasets immediately, without complex setups, downloading, no "disk out of space", or 429 “stop requesting!” errors. It's super fast! Outrunning our local SSDs when training on 64xH100 with 256 workers downloading data. We've improved streaming to have 100x fewer requests, → 10× faster data resolution → 2x sample/sec, → 0 worker crashes at 256 concurrent workers. Loading data, especially at the terabyte scale, is a major pain in any machine learning workflow. We suffered this while training SmolLM3, at one point we had to wait 3 hours before each run to download enough data. Streaming has always been possible in the datasets library, but large scale training with massive datasets remained a challenge. That changes today…
180d ago
huggingface_hub v1.0: Five Years of Building the Foundation of Open Machine Learning
huggingface_hub v1.0: Five Years of Building the Foundation of Open Machine Learning huggingface_hub has reached v1.0 - a milestone that marks the library's maturity as the Python package powering 200,000 dependent libraries and providing core functionality for accessing over 2 million public models, 0.5 million public datasets, and 1 million public Spaces. This release introduces breaking changes designed to support the next decade of open machine learning, driven by a global community of almost 300 contributors and millions of users. 🚀 We highly recommend upgrading to v1.0 to benefit from major performance improvements and new capabilities. pip install --upgrade huggingface_hub Major changes in this release include the migration to httpx as the backend library, a completely redesigned hf CLI (which replaces the deprecated huggingface-cli ) featuring a Typer-based interface with a significantly expanded feature set, and full adoption of hf_xet…
180dRelease
183d ago
LeRobot v0.4.0: Supercharging OSS Robot Learning
LeRobot v0.4.0: Supercharging OSS Robot Learning TL;DR LeRobot v0.4.0 delivers a major upgrade for open-source robotics, introducing scalable Datasets v3.0, powerful new VLA models like PI0.5 and GR00T N1.5, and a new plugin system for easier hardware integration. The release also adds support for LIBERO and Meta-World simulations, simplified multi-GPU training, and a new Hugging Face Robot Learning Course. Table-of-Contents - LeRobot v0.4.0: Supercharging OSS Robot Learning - TL;DR - Table-of-Contents - Datasets: Ready for the Next Wave of Large-Scale Robot Learning - Simulation Environments: Expanding Your Training Grounds - Codebase: Powerful Tools For Everyone - Policies: Unleashing Open-World Generalization - Robots: A New Era of Hardware Integration with the Plugin System - The Hugging Face Robot Learning Course - Final thoughts from the team Datasets: Ready for the Next Wave of Large-Scale Robot Learning We've completely overhauled our dataset…
184d ago
Building the Open Agent Ecosystem Together: Introducing OpenEnv
Building the Open Agent Ecosystem Together: Introducing OpenEnv Agentic environments define everything an agent needs to perform a task: the tools, APIs, credentials, execution context, and nothing else. They bring clarity, safety, and sandboxed control to agent behavior. These environments can be used for both training and deployment, and serve as the foundation for scalable agentic development. The Problem Modern AI agents can act autonomously across thousands of tasks. However, a large language model isn’t enough to get those tasks to actually run — it needs access to the right tools. Exposing millions of tools directly to a model isn’t reasonable (or safe). Instead, we need agentic environments: secure, semantically clear sandboxes that define exactly what’s required for a task, and nothing more. These environments handle the critical details: - Clear semantics about what a task needs - Sandboxed execution…
184dAgents#agents
185d ago
Hugging Face and VirusTotal collaborate to strengthen AI security
Hugging Face and VirusTotal collaborate to strengthen AI security TL;DR - Starting today, every one of the 2.2M+ public model and datasets repositories on the Hugging Face Hub is being continuously scanned with VirusTotal. Why this matters AI models are powerful but they’re also complex digital artifacts that can include large binary files, serialized data, and dependencies that sometimes carry hidden risks. As of today HF Hub hosts 2.2 Million Public model artifacts. As we continue to grow into the world’s largest open platform for Machine Learning models and datasets, ensuring that shared assets remain safe is essential. Threats can take many forms: - Malicious payloads disguised as model files or archives - Files that have been compromised before upload - Binary assets linked to known malware campaigns - Dependencies or serialized objects that execute unsafe code when loaded By…
185d ago
Sentence Transformers is joining Hugging Face!
Sentence Transformers is joining Hugging Face! Sentence Transformers (a.k.a. SentenceBERT or SBERT) is a popular open-source library for generating high-quality embeddings that capture semantic meaning. Since its inception by Nils Reimers in 2019, Sentence Transformers has been widely adopted by researchers and practitioners for various natural language processing (NLP) tasks, including semantic search, semantic textual similarity, clustering, and paraphrase mining. After years of development and training by and for the community, over 16,000 Sentence Transformers models are publicly available on the Hugging Face Hub, serving more than a million monthly unique users. "Sentence Transformers has been a huge success story and a culmination of our long-standing research on computing semantic similarities for the whole lab. Nils Reimers has made a very timely discovery and has produced not only outstanding research outcomes, but also a highly usable tool. This continues to…
186d ago
Supercharge your OCR Pipelines with Open Models
Supercharge your OCR Pipelines with Open Models Chandra and OlmOCR-2 to this blog, as well as OlmOCR Scores of the models 🫡We have added TL;DR: The rise of powerful vision-language models has transformed document AI. Each model comes with unique strengths, making it tricky to choose the right one. Open-weight models offer better cost efficiency and privacy. To help you get started with them, we’ve put together this guide. In this guide, you’ll learn: - The landscape of current models and their capabilities - When to fine-tune models vs. use models out-of-the-box - Key factors to consider when selecting a model for your use case - How to move beyond OCR with multimodal retrieval and document QA By the end, you’ll know how to choose the right OCR model, start building with it, and gain deeper insights into document AI.…
186d ago
Unlock the power of images with AI Sheets
Unlock the power of images with AI Sheets 🧭TL;DR: Hugging Face AI Sheets is an open-source tool for supercharging datasets with AI models, no code required. Now with vision support: extract data from images (receipts, documents), generate visuals from text, and edit images—all in a spreadsheet. Powered by thousands of open models via Inference Providers. We are excited to release a massive update to Hugging Face AI Sheets, the open-source tool for building, transforming, and enriching data with open AI models. AI Sheets leverages Inference Providers, which means you can use thousands of open models powered by the best inference providers on the planet. The first version of AI Sheets made structuring and enriching textual content a breeze. Now, we're adding vision to AI Sheets. Images are everywhere—product photos, receipts, screenshots, diagrams, charts, logos. These documents contain structured information waiting…
191d ago
AI for Food Allergies
AI for Food Allergies So, what can we do about it? In recent years, biomedical research has made several remarkable advances: from experimental vaccines and desensitization-based immunotherapies to improved diagnostic tools capable of identifying specific allergen sensitivities with unprecedented precision. These developments are pointing us in the right direction toward building long-term immune tolerance, but we’re not quite there yet. In the meantime, we’ve also witnessed groundbreaking progress in artificial intelligence applied to biology and medicine. Models like AlphaFold and Boltz-1 have revolutionized protein structure prediction, while AI-driven approaches in genomics, drug discovery, and molecular modeling are accelerating the pace of biomedical innovation. The convergence of these worlds is opening up new possibilities for understanding, predicting, and ultimately treating complex immune conditions such as food allergies. Four among the major allergenic proteins folded by AlphaFold. Up left to bottom right:…
191dResearch
191d ago
Google Cloud C4 Brings a 70% TCO improvement on GPT OSS with Intel and Hugging Face
Google Cloud C4 Brings a 70% TCO improvement on GPT OSS with Intel and Hugging Face C4 Virtual Machine (VM) running on Intel® Xeon® 6 processors (codenamed Granite Rapids (GNR)). We specifically wanted to benchmark improvements in the text generation performance of OpenAI GPT OSS Large Language Model(LLM). The results are in, and they are impressive, demonstrating a 1.7x improvement in Total Cost of Ownership(TCO) over the previous-generation Google C3 VM instances. The Google Cloud C4 VM instance further resulted in: - 1.4x to 1.7x TPOT throughput/vCPU/dollar - Lower price per hour over C3 VM Introduction GPT OSS is a common name for an open-source Mixture of Experts (MoE) model released by OpenAI. An MoE model is a deep neural network architecture that uses specialized “expert” sub-networks and a “gating network” to decide which experts to use for a given…
192d ago
Get your VLM running in 3 simple steps on Intel CPUs
Get your VLM running in 3 simple steps on Intel CPUs While running AI models on your own device can be difficult as these models are often computationally demanding, it also offers significant benefits: including improved privacy since your data stays on your machine, and enhanced speed and reliability because you're not dependent on an internet connection or external servers. This is where tools like Optimum Intel and OpenVINO come in, along with a small, efficient model like SmolVLM. In this blog post, we'll walk you through three easy steps to get a VLM running locally, with no expensive hardware or GPUs required (though you can run all the code samples from this blog post on Intel GPUs). Deploy your model with Optimum Small models like SmolVLM are built for low-resource consumption, but they can be further optimized. In this…
192dHardware#coding#local
194d ago
Nemotron-Personas-India: Synthesized Data for Sovereign AI
Nemotron-Personas-India: Synthesized Data for Sovereign AI Open Data for India's AI Future India represents one of the world's largest AI opportunities — with over 700 million internet users, a multitude of languages, and a rapidly growing developer ecosystem. Yet, most open datasets reflect Western norms and English-only contexts, creating a data gap that limits AI adoption in India's multilingual, multi-script environment. Today, we're releasing Nemotron-Personas-India, the first open synthetic dataset of Indic personas aligned to India's real-world demographic, geographic, and cultural distributions. Licensed under CC BY 4.0, this dataset offers a privacy-preserving, regulation-ready foundation for scaling AI systems that reflect Indian society—without relying on sensitive personal data. Built with NeMo Data Designer, NVIDIA's enterprise-grade synthetic data generation microservice, Nemotron-Personas-India extends our global collection of Sovereign AI datasets. It builds on the success of our US and Japan persona datasets and…
197d ago
Arm will be @ PyTorch Conference, Join Us!
Arm will be @ PyTorch Conference, Join Us! Co-Authored by Michelle Yung @ Arm Join us on site October 22-23 to see how Arm empowers developers to build and deploy AI applications with ease using PyTorch and ExecuTorch. Learn about the latest AI technologies from Arm and our ecosystem while expanding your professional network alongside like-minded AI engineers. Connect, Chat, Chill Fuel up ahead of the conference with an evening of food, drinks, and good conversation . Whether you’re looking to relax or network, you’ll be in good company with fellow AI engineers. This pre-conference gathering, offers a relaxed setting where participants can meet Arm experts and fellow professionals in artificial intelligence, share experiences, and make valuable connections before the formal sessions begin. The event will include a variety of delicious refreshments to enjoy, creating a welcoming atmosphere for both…
197dTutorial#coding
200d ago
BigCodeArena: Judging code generations end to end with code executions
BigCodeArena: Judging code generations end to end with code executions Inspired by LMArena for LLMs, we've built a platform that allows anyone to compare code generation models side-by-side, but with a crucial difference: you can actually run the code and see what it produces. Just submit a coding task, watch two different models generate solutions, execute both programs, and vote on which model produced better results. The outcomes are organized into a leaderboard that displays the community's highest-rated models. Motivation The field of code generation has long struggled with reliable evaluation methods. Traditional benchmarks like HumanEval test code against predefined test cases, but these represent only a tiny fraction of real-world programming tasks. Human evaluation platforms exist for general chatbots, but they fall short for code: reading raw source code and mentally simulating its execution is cognitively demanding and error-prone,…
200dResearch#coding#benchmark
205d ago
SOTA OCR with Core ML and dots.ocr
SOTA OCR with Core ML and dots.ocr Enter the Neural Engine, Apple's custom AI accelerator that has shipped with every Apple device since 2017. This accelerator is designed for high performance whilst sipping battery power. Some of our testing has found the Neural Engine to be 12x more power efficient than CPU, and 4x more power efficient than GPU. Whilst this all sounds very appealing, unfortunately the Neural Engine is only accessible through Core ML, Apple's closed source ML framework. Furthermore, even just converting a model from PyTorch to Core ML can present some challenges, and without a preconverted model or some knowledge of the sharp edges it can be arduous for developers. Luckily, Apple also offers MLX, a more modern and flexible ML framework that targets the GPU (not the Neural Engine), and can be used in conjunction with…
205dHardware#coding
206d ago
Introducing RTEB: A New Standard for Retrieval Evaluation
TL;DR – We’re excited to introduce the beta version of the Retrieval Embedding Benchmark (RTEB), a new benchmark designed to reliably evaluate the retrieval accuracy of embedding models for real-world applications. Existing benchmarks struggle to measure true generalization, while RTEB addresses this with a hybrid strategy of open and private datasets. Its goal is simple: to create a fair, transparent, and application-focused standard for measuring how models perform on data they haven’t seen before. The performance of many AI applications, from RAG and agents to recommendation systems, is fundamentally limited by the quality of search and retrieval. As such, accurately measuring the retrieval quality of embedding models is a common pain point for developers. How do you really know how well a model will perform in the wild? This is where things get tricky. The current standard for evaluation often…
206dResearch