$ timeahead_
★ TOP STORY · fast.ai Blog · 67d ago

I Don’t Want a Learning Dashboard for My Child

Debates about education are often framed as non-tech versus AI approaches, but too often, AI ed tech just magnifies the same failures of traditional school. Against quantification: As part of a select group of parents whose children homeschool in part with online resources, we met with an EdTech entrepreneur. Our children were already spending time daily on math apps, reading apps, and Zoom calls. “I want a dashboard to track what my child is learning, their proficiencies in different areas,” one of the parents brainstormed. “I actively do not want that,” my husband Jeremy countered. Despite embracing the use of screens, we did not want to quantify our child. Our goals for her are more holistic. Outsiders may have expected that we would care more about quantification. Jeremy and I have both built our careers in data and machine learning.…

fast.ai Blog
fast.ai Blog · 19 articles
87d ago
Breaking the Spell of Vibe Coding
Vibe coding is the creation of large quantities of highly complex AI-generated code, often with the intention that the code will not be read by humans. It has cast quite a spell on the tech industry. Executives push lay-offs claiming AI can handle the work. Managers pressure employees to meet quotas of how much of their code must be AI-generated or risk poor performance reviews. Software developers worry that everyone around them is a “10x developer” and that they’ve fallen behind. College students wonder if it is worth studying computer science now that AI has automated coding. People of all career stages hesitate to invest in their own career development. Won’t AI be able to do their jobs for them anyway a year from now? What is the point? I work at an AI company, and we use AI every…
87d · Research · #coding · by Rachel Thomas
94d ago
How To Use AI for the Ancient Art of Close Reading
The Ancient Art of Close Reading: Close reading is a technique for careful analysis of a piece of writing, paying close attention to the exact language, structure, and content of the text. As Eric Ries described it, “close reading is one of our civilization’s oldest and most powerful technologies for trying to communicate the gestalt of a thing, the overall holistic understanding of it more than just what can be communicated in language because language is so limited.” It was (and in some cases still is) practiced by many ancient cultures and major religions. Some scholars describe close reading as “‘reading out of’ a text rather than ‘reading into’ it”, referring to the importance of making outward connections to broader context. LLMs can provide a useful tool for identifying these outward connections. It might come as a surprise that a technique…
94d · Tutorial · by Rachel Thomas
143d ago
Stop Saying Boredom is Good for Kids
Chronic boredom is harmful to adults, causing stress, disengagement, and poor well-being. Academic researchers have shown that boredom in the workplace can be just as damaging as burnout. But search for information about childhood boredom and you’ll find the opposite message: articles describing boredom for kids as “fantastic”, “important”, and full of benefits. News articles, social media graphics, parenting discussions, and even policy proposals are full of praise for how great boredom is for children. Why do so many adults disregard children’s autonomy, capabilities, and intellectual needs? Unstructured play is different from boredom: Play is a keystone of childhood. It is crucial for children to have unstructured free time in which they figure out what they want to do. Left alone, my daughter has come up with all sorts of fun ideas – building homes for her stuffed animals out of…
143d · Research · #safety · by Rachel Thomas
169d ago
A Guide to Solveit Features
Introduction: Large language models make it remarkably easy to generate code. Ask ChatGPT or Claude to build an application, and you’ll receive hundreds of lines of working code in seconds. But this creates a problem: you get code you don’t understand, and when you need to modify it, fix a bug, or add a feature, you’re stuck. Solveit (solve.it.com) takes a different approach. Rather than generating large blocks of code, it works with you interactively to build solutions piece by piece. You write one or two lines at a time, understand what they do, then move to the next step. The AI sees your full working context and suggests what comes next based on what you’re actually building. This method may sometimes be slower initially, but produces something more valuable: working code that you understand. Code you can modify, extend,…
169d · Tutorial · #gpt #claude #coding · by Solveit and Kerem Turgutlu
177d ago
Build to Last
Note from Jeremy: We’re teaching a course starting Nov 3rd on how to build towards software mastery and craftsmanship whilst leveraging AI effectively. Have a look at solve.it.com if you’re interested. I’ve spent decades teaching people to code, building tools that help developers work more effectively, and championing the idea that programming should be accessible to everyone. Through fast.ai, I’ve helped millions learn not just to use AI, but to understand it deeply enough to build things that matter. But lately, I’ve been deeply concerned. The AI agent revolution promises to make everyone more productive, yet what I’m seeing is something different: developers abandoning the very practices that lead to understanding, mastery, and software that lasts. When CEOs brag about their teams generating 10,000 lines of AI-written code per day, when junior engineers tell me they’re “vibe-coding” their way through…
177d · Tutorial · #rag · by Jeremy Howard
191d ago
Let’s Build the GPT Tokenizer: A Complete Guide to Tokenization in LLMs
[Diagram: GPT-2 architecture. Raw text (“Hello world”) → tokenizer → token sequence (15496, 995) → embedding table (50,257 rows × n_embed cols) → token vectors → transformer attention layer (context window: 1,024 tokens; each token attends to previous tokens in the sequence) → next-token prediction. Vocabulary size: 50,257 tokens. Tokens are the fundamental unit of LLMs.]
18 months ago, Andrej Karpathy set a challenge: “Can you take my 2h13m tokenizer video and translate the video into the format of a book chapter”. We’ve done it, and the chapter is below, including key pieces of code inlined, and images from the video at key points (hyperlinked to the video timestamp). It’s a…
191d · Tutorial · #multimodal #coding · by Andrej Karpathy, via Solveit and Kerem Turgutlu
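The tokenization this chapter builds up can be sketched with a toy byte-pair-encoding (BPE) loop. This is a minimal illustration only: the merge table below is made up for the example, and GPT-2’s real tokenizer has 50,257 tokens and additional rules (regex pre-splitting, special tokens) not shown here.

```python
# Toy BPE encoder: start from raw UTF-8 bytes, then repeatedly apply the
# earliest-learned merge rule until no learned pair remains.

def get_pairs(ids):
    """All adjacent pairs in a token sequence."""
    return {(ids[i], ids[i + 1]) for i in range(len(ids) - 1)}

def bpe_encode(text, merges):
    """merges maps (id_a, id_b) -> (rank, new_id); lower rank merges first."""
    ids = list(text.encode("utf-8"))          # byte-level start: ids in 0..255
    while True:
        candidates = [p for p in get_pairs(ids) if p in merges]
        if not candidates:
            return ids
        pair = min(candidates, key=lambda p: merges[p][0])
        new_id = merges[pair][1]
        out, i = [], 0
        while i < len(ids):                   # replace every occurrence of pair
            if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
                out.append(new_id)
                i += 2
            else:
                out.append(ids[i])
                i += 1
        ids = out

# Hypothetical merge table for illustration (NOT GPT-2's real merges).
merges = {
    (104, 101): (0, 256),   # 'h' + 'e'   -> 256 ("he")
    (108, 108): (1, 257),   # 'l' + 'l'   -> 257 ("ll")
    (256, 257): (2, 258),   # "he" + "ll" -> 258 ("hell")
}
print(bpe_encode("hello", merges))   # -> [258, 111]
```

Decoding is the mirror image: expand each merged id back into its pair until only bytes remain, then decode the bytes as UTF-8.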
192d ago
How to Solve it With Code course now available
tl;dr: This is a copy of a one-off email I sent to all fast.ai forum users, with a long-overdue update. I had planned to send this email a year ago to let you know you could sign up for our new fast.ai course, “How to Solve it With Code”, but by the time I woke up the day after launching it, it was full. :O The good news is that we’ve spent the last year making it much better, and a brand new version is ready to go now. You can jump straight to it here: solve.it.com . It’s designed mainly for experienced coders, AI practitioners, and data scientists, although folks from nearly any technical field should find it helpful. When Rachel & I started fast.ai, our mission was to democratize deep learning model training; to make it so you…
192d · Tutorial · #coding · by Jeremy Howard
429d ago
fasttransform: Reversible Pipelines Made Simple
Introducing fasttransform, a Python library that makes data transformations reversible and extensible through the power of multiple dispatch. By Rens Dimmendaal, Hamel Husain, & Jeremy Howard. Published February 20, 2025. “How did this image get misclassified?” If you’ve ever trained a machine learning model, you know what comes next: the frustrating journey of trying to understand what your model actually saw. You dig through layers of transformations - normalizations, resizes, augmentations - only to realize you’ll need to write inverse functions just to see your data again. It’s so painful that many of us skip it altogether, debugging our models based on abstract numbers rather than actual data. Or as OpenAI’s Greg Brockman puts it: Let’s…
429d · Release · by Rens Dimmendaal, Hamel Husain, & Jeremy Howard
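The reversibility idea the article describes can be sketched in a few lines. This is not the fasttransform API, just a minimal illustration of the pattern: each transform pairs an encode step with a decode inverse, and the pipeline undoes its steps in reverse order so you can see your original data again. The class and method names here are invented for the sketch.

```python
# Minimal reversible-pipeline sketch (illustrative; not the fasttransform API).

class Transform:
    """Base: identity in both directions."""
    def encodes(self, x): return x
    def decodes(self, x): return x

class Normalize(Transform):
    """Example transform with an exact inverse."""
    def __init__(self, mean, std): self.mean, self.std = mean, std
    def encodes(self, x): return (x - self.mean) / self.std
    def decodes(self, x): return x * self.std + self.mean

class Pipeline:
    def __init__(self, tfms): self.tfms = tfms
    def __call__(self, x):
        for t in self.tfms:            # apply transforms in order
            x = t.encodes(x)
        return x
    def decode(self, x):
        for t in reversed(self.tfms):  # invert them in reverse order
            x = t.decodes(x)
        return x

pipe = Pipeline([Normalize(mean=5.0, std=2.0)])
y = pipe(9.0)                 # -> 2.0
assert pipe.decode(y) == 9.0  # round-trips back to the original value
```

The real library goes further, using multiple dispatch so that one pipeline can route different input types (images, labels, tensors) to different encode/decode implementations.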
446d ago
What AI can tell us about microscope slides
This article was originally posted on rachel.fast.ai, where Rachel has been writing about her journey as an AI researcher returning to school for immunology. The lavender images below show breast tissue. There are many questions doctors could want to answer using these images: They could want to know whether there are tumors present or not. If there is a tumor, doctors would want to classify its stage, make predictions about how likely the patient is to respond to treatment, and detect whether the tumor has spread from another organ. All of these are questions that people are now tackling with machine learning. They fall within the area of computational pathology, often abbreviated CPath. In the past year, two CPath AI models were released which achieved state-of-the-art results. Here I will discuss an introduction to this field, what these models…
446d · Research · by Rachel Thomas
534d ago
A New Chapter for fast.ai: How To Solve It With Code
Update from Jeremy 11 months later: This was so popular it sold out within 24 hours of me posting this. We’ve got a page with hundreds of comments from the alums. Since it went so well, we’ve spent this year building on it to create a full scalable platform, and we’re doing a new course with it starting Nov 3 2025. To avoid missing out, please sign up as soon as you can here: solve.it.com. Eight years ago, Rachel Thomas and I launched fast.ai with a mission to democratize artificial intelligence. We believed AI would become one of the most significant technologies in history, and felt that, if widely distributed, it could empower people all around the world to create anything they could imagine. But if only a small elite understood it, we worried it could lead to inequality. Today,…
534d · Tutorial · #coding · by Jeremy Howard
543d ago
In defense of screen time
My daughter is constantly creating – her passions include making art, writing fiction, coding interactive games, and composing music. Yet, I regularly see news articles and media pundits suggesting that my husband and I are doing things all wrong. The reason for these claims? My daughter uses screen-based tools, at least partially, and in some cases, entirely, to pursue the interests I listed in my first sentence. To give more detail on how she pursues her passions: - Art: She creates digital art in Sketchbook Pro, both on her own and in lessons. She also takes in-person art courses that involve a mix of acrylic painting, watercolors, and sculpture. - Coding: She loves coding in Scratch and P5.js to design and build interactive games. - Creative writing: While her handwriting is on target for her age, she prefers typing, as…
543d · #coding · by Rachel Thomas
865d ago
A new old kind of R&D lab
tl;dr Jeremy Howard (founding CEO, previously co-founder of Kaggle and fast.ai) and Eric Ries (founding director, previously creator of Lean Startup and the Long-Term Stock Exchange) today launched Answer.AI, a new kind of AI R&D lab which creates practical end-user products based on foundational research breakthroughs. The creation of Answer.AI is supported by an investment of USD10m from Decibel VC. Answer.AI will be a fully-remote team of deep-tech generalists—the world’s very best, regardless of where they live, what school they went to, or any other meaningless surface feature. A new R&D lab In 1831 Michael Faraday showed the world how to harness electricity. Suddenly there was, quite literally, a new source of power in the world. He later found the basis of the unification of light and magnetism, and knew he was onto something big: “I happen to have discovered…
865d · Research · by Jeremy Howard
964d ago
Can LLMs learn from a single example?
Summary: recently while fine-tuning a large language model (LLM) on multiple-choice science exam questions, we observed some highly unusual training loss curves. In particular, it appeared the model was able to rapidly memorize examples from the dataset after seeing them just once. This astonishing feat contradicts most prior wisdom about neural network sample efficiency. Intrigued by this result, we conducted a series of experiments to validate and better understand this phenomenon. It’s early days, but the experiments support the hypothesis that the models are able to rapidly remember inputs. This might mean we have to re-think how we train and use LLMs. How neural networks learn We train neural network classifiers by showing them examples of inputs and outputs, and they learn to predict outputs based on inputs. For example, we show examples of pictures of dogs and cats, along…
964d · Tutorial · #fine-tuning #training · by Jeremy Howard and Jonathan Whitaker
1001d ago
The real risk of AI is how it concentrates power
Friends with no previous interest in AI ethics have begun asking me questions in the wake of the release of GPT-4, Bard, and Bing Chat. This new generation of large language models has made headlines and sparked widespread debate. To consider the risks posed by new AI applications, it is useful to first understand several underlying concepts. I spent years researching the mechanisms by which algorithmic systems can cause harm, and in late 2021, I gave a 20-minute talk on what I consider key ideas at the heart of AI ethics. With the advent of the newest generation of language models, these concepts are more relevant than ever. Over the past decade, topics such as explainability (having computers generate an explanation of why they compute the outputs they do) and fairness/bias (addressing when algorithms have worse accuracy on some groups…
1001d · Research · #gpt · by Rachel Thomas
1020d ago
AI Safety and the Age of Dislightenment
Abstract: Proposals for stringent AI model licensing and surveillance will likely be ineffective or counterproductive, concentrating power in unsustainable ways, and potentially rolling back the societal gains of the Enlightenment. The balance between defending society and empowering society to defend itself is delicate. We should advocate for openness, humility and broad consultation to develop better responses aligned with our principles and values — responses that can evolve as we learn more about this technology with the potential to transform society for good or ill. Executive summary: Artificial Intelligence is moving fast, and we don’t know what might turn out to be possible. OpenAI CEO Sam Altman thinks AI might “capture the light cone of all future value in the universe”. But things might go wrong, with some experts warning of “the risk of extinction from AI”. This has led many…
1020d · Tutorial · #safety · by Jeremy Howard
1061d ago
Is Avoiding Extinction from AI Really an Urgent Priority?
This article is the result of a collaboration between philosopher Seth Lazar, AI impacts researcher Arvind Narayanan, and fast.ai’s Jeremy Howard. At fast.ai we believe that planning for our future with AI is a complex topic and requires bringing together cross-disciplinary expertise. This is the year extinction risk from AI went mainstream. It has featured in leading publications, been invoked by 10 Downing Street, and mentioned in a White House AI Strategy document. But a powerful group of AI technologists thinks it still isn’t being taken seriously enough. They have signed a statement that claims: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” “Global priorities” should be the most important, and urgent, problems that humanity faces. 2023 has seen a leap forward in AI capabilities, which…
1061d · Research · by Seth Lazar, Jeremy Howard, & Arvind Narayanan
1087d ago
Mojo may be the biggest programming language advance in decades
I remember the first time I used the v1.0 of Visual Basic. Back then, it was a program for DOS. Before it, writing programs was extremely complex and I’d never managed to make much progress beyond the most basic toy applications. But with VB, I drew a button on the screen, typed in a single line of code that I wanted to run when that button was clicked, and I had a complete application I could now run. It was such an amazing experience that I’ll never forget that feeling. It felt like coding would never be the same again. Writing code in Mojo, a new programming language from Modular, is the second time in my life I’ve had that feeling. Here’s what it looks like: Why not just use Python? Before I explain why I’m so excited about Mojo,…
1087d · Release · #coding · by Jeremy Howard
1117d ago
From Deep Learning Foundations to Stable Diffusion
Today we’re releasing our new course, From Deep Learning Foundations to Stable Diffusion, which is part 2 of Practical Deep Learning for Coders. Get started now! In this course, containing over 30 hours of video content, we implement the astounding Stable Diffusion algorithm from scratch! That’s the killer app that made the internet freak out, and caused the media to say “you may never believe what you see online again”. We’ve worked closely with experts from Stability.ai and Hugging Face (creators of the Diffusers library) to ensure we have rigorous coverage of the latest techniques. The course includes coverage of papers that were released after Stable Diffusion came out – so it actually goes well beyond even what Stable Diffusion includes! We also explain how to read research papers, and practice this skill by studying and implementing many papers throughout…
1117d · Tutorial · #rag #multimodal · by Jeremy Howard
1132d ago
GPT 4 and the Uncharted Territories of Language
Beyond Wittgenstein’s Walls: “The limits of my language mean the limits of my world.” — Ludwig Wittgenstein. Language is like a map that we use to navigate the world, but it’s also like a prison that keeps us from seeing what’s beyond the walls. But what if there was a way to break out of this prison, to expand our map, to explore new worlds with new words? This is the possibility and the challenge offered by instruction-tuned language models like GPT 4, a cutting-edge technology that uses artificial neural networks to generate natural language texts based on user inputs. GPT 4 can write anything from essays to novels to poems to tweets to code to recipes to jokes to lyrics to whatever you want. It can even write things that don’t exist yet, things that no human has…
1132d · #coding · by Jeremy Howard