The landscape of AI is not merely filled with news. It is filled with teams. You have the doomers, the accelerationists, the skeptics, the it’s-a-bubble oracles, the anti-bubble counter-oracles, and so on. It would be convenient for my sanity—and, perhaps, the sanity of my readers—if I simply joined one team and never removed the jersey. But I don’t think any of the aforementioned tribes has a monopoly on good arguments. I think the doomers are right about the risk of the technology, and the accelerationists are right about the promise of the technology, and the skeptics are right that the doomers and accelerationists can both overstate their cases.
My first session with Claude Code was practically magical. I was speaking to my computer, telling it with natural language what I wanted it to do, and it was able to just do it. It did (and still does) feel like a completely new form of input, a new way to control my machine. I have misgivings about using AI in this way, but I still think this is a great tool for sufficiently low-level tasks. I’m waiting eagerly for the day that I can spin up a local LLM that can perform this function as well as Claude Code does.
I'm as anti-genAI as it gets. And yet, this past month, I have used generative coding to complete a project. It works. I hated making it.
Retrieval-Augmented Generation (RAG) has become the dominant paradigm for grounding Large Language Model (LLM) agents in domain-specific knowledge. The standard approach requires selecting an embedding model, designing a chunking strategy, deploying a vector database, maintaining indexes, and performing approximate nearest neighbor (ANN) search at query time. We argue that for domain-specific knowledge grounding --- where the vocabulary is predictable and the corpus is bounded --- this entire stack is unnecessary. We present Knowledge Search, a two-layer retrieval system composed of (1) grep with contextual line windows and (2) cat of pre-structured fallback files. Deployed in production across 20 specialized LLM agents serving three knowledge domains (Traditional Chinese Medicine, Christian spiritual classics, and U.S. civics), our approach achieves 100% retrieval accuracy with sub-10ms latency, zero preprocessing, zero additional memory footprint, and zero infrastructure dependencies.
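The two-layer design described in the abstract can be sketched in a few lines of shell. This is a minimal illustration, not the paper's implementation: the knowledge-base path, file names, and sample content are all assumptions made for the example.

```shell
# Hypothetical sketch of the two-layer Knowledge Search lookup.
# Layer 1: grep with a contextual line window; Layer 2: cat a fallback file.

# Set up a tiny sample corpus (paths and content are invented for illustration).
mkdir -p kb
cat > kb/tcm.md <<'EOF'
# Herbs
Ginseng: tonifies qi.
Licorice: harmonizes formulas.
EOF

query="Ginseng"

# Layer 1: exact-match retrieval with a line window (-C 1 = one line of
# context before and after each hit, -n = show line numbers).
result=$(grep -n -C 1 "$query" kb/*.md)

# Layer 2: if grep returns nothing, fall back to emitting a pre-structured
# overview file whole, so the agent always receives grounded context.
if [ -z "$result" ]; then
  result=$(cat kb/tcm.md)
fi

echo "$result"
```

Because both layers are plain file reads, there is nothing to index or embed; the corpus on disk is the retrieval system, which is what makes the zero-preprocessing, zero-infrastructure claim possible.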
Scientists and educators are concerned about students using artificial intelligence to shortcut their learning. But there are also opportunities, especially when it comes to teaching neuroscience students how to code.
The open-source AI coding agent
Free models included, or connect any model from any provider, including Claude, GPT, Gemini, and more.
Stop switching contexts. Many AI CLI tools are heavy, Node.js-based, and trap you in their interface. They can round-trip through the model just to run a simple ls, cutting off real access to your command line.
pls is different: lightweight, fast, and built for everyday CLI tasks, keeping you fully in the command line while letting you seamlessly switch between AI and shell commands.
nanochat is the simplest experimental harness for training LLMs. It is designed to run on a single GPU node, the code is minimal/hackable, and it covers all major LLM stages including tokenization, pretraining, finetuning, evaluation, inference, and a chat UI.
Journey through the major events since ChatGPT revolutionized AI accessibility in November 2022.
I do not think it will shock anyone to learn that big tech is aggressively pushing AI products. But the extent to which they have done so might. The sheer ubiquity of AI means that we take for granted the countless ways, many invisible, that these products and features are foisted on us—and how Silicon Valley companies have systematically designed and deployed AI products onto their existing platforms in an effort to accelerate adoption.
The role of the IC (Individual Contributor) is evolving fast—and AI is accelerating the shift. As AI tools become deeply integrated into development workflows, many engineers find themselves stepping into responsibilities once reserved for engineering managers. This isn’t a hypothetical trend—it’s already happening in high-performing teams.
Napkin turns your text into visuals so sharing your ideas is quick and effective.