The landscape of AI is not merely filled with news. It is filled with teams. You have the doomers, the accelerationists, the skeptics, the it’s-a-bubble oracles, the anti-bubble counter oracles, and so on. It would be convenient for my sanity—and, perhaps, the sanity of my readers—if I simply joined one team and never removed the jersey. But I don’t think any aforementioned tribe has a monopoly on good arguments. I think the doomers are right about the risk of the technology, and the accelerationists are right about the promise of the technology, and the skeptics are right that the doomers and accelerationists can both overstate their cases.
In 2022, I made a New Year’s resolution to switch from Chrome to Firefox, and from VS Code to Neovim.
My goal was to reduce my dependence on GAFAM tools, and it has turned out to be a good decision.
It took some time to adjust, but I am now a happy Firefox user on both desktop and mobile.
That said, it still has some issues, such as the tab system on Android. I wish there were an easier way to search through tabs or to select multiple tabs and close them, instead of closing everything at once.
However, my experience with Neovim was very different. I can say I really tried to adopt it, as I used it for four years before deciding to abandon it.
My first session with Claude Code was practically magical. I was speaking to my computer, telling it in natural language what I wanted it to do, and it was able to just do it. It did (and still does) feel like a completely new form of input, a new way to control my machine. I have misgivings about using AI in this way, but I still think this is a great tool for sufficiently low-level tasks. I'm waiting eagerly for the day that I can spin up a local LLM that can perform this function as well as Claude Code does.
I'm as anti-genAI as it gets. And yet, this past month, I have used generative coding to complete a project. It works. I hated making it.
These days, Wandering Thoughts has some hacked-together HTTP request rate limits. They don't exist for strong technical reasons; my blog engine setup here can generally stand up to even fairly extreme traffic floods (through an extensive series of hacks). It's definitely possible to overwhelm Wandering Thoughts with a high enough request volume, and HTTP rate limits will certainly help with that, but that's not really why they exist. My HTTP rate limits exist for ultimately social reasons, and because they let me stop worrying and stop caring about certain sorts of abuse.
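The post doesn't show how its rate limits are implemented. As a sketch of one common approach (not necessarily the author's), here is a per-client token bucket in Python; the class and parameter names are hypothetical:

```python
import time

class TokenBucket:
    """Per-client token bucket: allow `rate` requests/second, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = {}          # client -> available tokens
        self.last = {}            # client -> time of last check

    def allow(self, client: str) -> bool:
        now = time.monotonic()
        if client not in self.last:
            # First request from this client: start with a full bucket.
            self.tokens[client] = self.capacity
            self.last[client] = now
        # Refill in proportion to elapsed time, capped at capacity.
        elapsed = now - self.last[client]
        self.last[client] = now
        self.tokens[client] = min(self.capacity, self.tokens[client] + elapsed * self.rate)
        if self.tokens[client] >= 1.0:
            self.tokens[client] -= 1.0
            return True
        return False
```

A request handler would call `allow(client_ip)` and return HTTP 429 when it comes back `False`; since the goal here is social rather than technical, even generous values of `rate` and `capacity` would serve.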
Retrieval-Augmented Generation (RAG) has become the dominant paradigm for grounding Large Language Model (LLM) agents in domain-specific knowledge. The standard approach requires selecting an embedding model, designing a chunking strategy, deploying a vector database, maintaining indexes, and performing approximate nearest neighbor (ANN) search at query time. We argue that for domain-specific knowledge grounding --- where the vocabulary is predictable and the corpus is bounded --- this entire stack is unnecessary. We present Knowledge Search, a two-layer retrieval system composed of (1) grep with contextual line windows and (2) cat of pre-structured fallback files. Deployed in production across 20 specialized LLM agents serving three knowledge domains (Traditional Chinese Medicine, Christian spiritual classics, and U.S. civics), our approach achieves 100% retrieval accuracy with sub-10ms latency, zero preprocessing, zero additional memory footprint, and zero infrastructure dependencies.
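A minimal sketch of the two-layer design the abstract describes, in Python rather than literal `grep` and `cat`; the function names and the corpus layout (`.txt` files plus a `fallback/` directory) are assumptions, not details from the paper:

```python
import re
from pathlib import Path

def knowledge_search(corpus_dir: str, pattern: str, window: int = 2) -> list[str]:
    """Layer 1: grep-style scan returning each match with `window` lines of context."""
    regex = re.compile(pattern, re.IGNORECASE)
    hits = []
    for path in sorted(Path(corpus_dir).glob("*.txt")):
        lines = path.read_text(encoding="utf-8").splitlines()
        for i, line in enumerate(lines):
            if regex.search(line):
                lo, hi = max(0, i - window), min(len(lines), i + window + 1)
                hits.append(f"{path.name}:{i + 1}\n" + "\n".join(lines[lo:hi]))
    return hits

def knowledge_fallback(corpus_dir: str, topic: str) -> str:
    """Layer 2: 'cat' a pre-structured fallback file when the grep layer finds nothing."""
    return (Path(corpus_dir) / "fallback" / f"{topic}.md").read_text(encoding="utf-8")
```

The appeal is exactly what the abstract claims: no embedding model, no index to maintain, and the whole retrieval path is a linear scan that is trivially fast on a bounded corpus.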
Scientists and educators are concerned about students using artificial intelligence to shortcut their learning. But there are also opportunities, especially when it comes to teaching neuroscience students how to code.
After the source code your Git repository stores, the second most valuable source of information is your commit history, which chronicles the evolution of your codebase. Your commits are a treasure trove of information, when well written, because they allow you to:
- Achieve Second-Order Thinking by retaining the long tail of past thought, in order to make forward-thinking decisions.
- Have well-thought-out Code Reviews. Even better, mentorship is built in by default, because the Git history behind your code reviews gives less experienced engineers a chance to level up and learn from more experienced engineers.
- Automate the generation of release notes and versions based on your curated commit history to produce Milestones for your team, stakeholders, and customers.
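The release-notes point above can be sketched with a small parser over Conventional Commits-style subject lines; the section mapping and function name below are illustrative, not from the article:

```python
import re
from collections import defaultdict

# Hypothetical mapping from Conventional Commits types to release-note sections.
SECTIONS = {"feat": "Features", "fix": "Bug Fixes", "perf": "Performance"}

def release_notes(subjects: list[str]) -> str:
    """Group subjects like 'feat(ui): add dark mode' into markdown sections."""
    pattern = re.compile(r"^(\w+)(?:\([^)]*\))?!?:\s*(.+)$")
    grouped = defaultdict(list)
    for subject in subjects:
        m = pattern.match(subject)
        if m and m.group(1) in SECTIONS:
            grouped[SECTIONS[m.group(1)]].append(m.group(2))
    return "\n\n".join(
        f"## {section}\n" + "\n".join(f"- {item}" for item in items)
        for section, items in grouped.items()
    )
```

Tools like semantic-release and git-cliff automate this same idea end to end, but the core transformation really is this small: a curated commit history is already structured data.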
Package managers are essential tools on Linux systems. They help you install, update, and remove software packages with simple commands. Most distributions come with their own package managers, like apt, dnf, or pacman.
However, many modern tools are distributed as pre-compiled binaries via GitHub releases. Developers using languages like Go, Rust, and Deno often release their software this way, and new projects not yet included in official distro repositories often have no other option.
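One fiddly step in installing from a GitHub release is picking the asset that matches your OS and CPU. A hedged sketch in Python, assuming the common `tool_<os>_<arch>` asset-naming convention (real projects vary):

```python
import platform

# Map common `platform.machine()` values to the arch labels release assets use.
ARCH_ALIASES = {"x86_64": "amd64", "aarch64": "arm64"}

def pick_asset(asset_names, os_name=None, machine=None):
    """Return the asset name matching the given (or current) OS and CPU, or None."""
    os_name = (os_name or platform.system()).lower()   # e.g. "linux", "darwin"
    machine = (machine or platform.machine()).lower()  # e.g. "x86_64"
    arch = ARCH_ALIASES.get(machine, machine)
    for name in asset_names:
        lowered = name.lower()
        if os_name in lowered and arch in lowered:
            return name
    return None
```

In practice you would feed this the asset list from the releases API and then download and verify the chosen file; this is essentially what installers like eget do for you.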
I do not think it will shock anyone to learn that big tech is aggressively pushing AI products. But the extent to which they have done so might. The sheer ubiquity of AI means that we take for granted the countless ways, many invisible, that these products and features are foisted on us, and how Silicon Valley companies have systematically designed and deployed AI products onto their existing platforms in an effort to accelerate adoption.
The role of the IC (Individual Contributor) is evolving fast—and AI is accelerating the shift. As AI tools become deeply integrated into development workflows, many engineers find themselves stepping into responsibilities once reserved for engineering managers. This isn’t a hypothetical trend—it’s already happening in high-performing teams.
REST wasn’t designed for modern APIs. It was a retrospective description of how early web browsers talked to HTTP servers — formalized by Roy Fielding to finish his PhD. It explained how the Web worked in the 90s, not how your API should work in 2025.
What we do today should probably be called JOHUR instead (JSON over HTTP, URL-based Routing).
Open-source software tools continue to grow in popularity because of the multiple advantages they provide, including lower upfront software and hardware costs, lower total cost of ownership, freedom from vendor lock-in, simpler license management, and support from active communities.
In the following slides, as part of the CRN 2024 Year In Review project, we take a look at some of the most popular open-source software products that have caught our attention this year.