The landscape of AI is not merely filled with news. It is filled with teams. You have the doomers, the accelerationists, the skeptics, the it’s-a-bubble oracles, the anti-bubble counter-oracles, and so on. It would be convenient for my sanity—and, perhaps, the sanity of my readers—if I simply joined one team and never removed the jersey. But I don’t think any of these tribes has a monopoly on good arguments. I think the doomers are right about the risk of the technology, and the accelerationists are right about the promise of the technology, and the skeptics are right that the doomers and accelerationists can both overstate their cases.
In 2022, I made a New Year’s resolution to switch from Chrome to Firefox, and from VS Code to Neovim.
My goal was to reduce my dependence on GAFAM tools, and it has turned out to be a good decision.
It took some time to adjust, but I am now a happy Firefox user on both desktop and mobile.
That said, it still has some issues, such as the tab system on Android. I wish there were an easier way to search through tabs, or to select multiple tabs and close them together instead of closing everything at once.
However, my experience with Neovim was very different. I can say I really tried to adopt it, as I used it for four years before deciding to abandon it.
My first session with Claude Code was practically magical. I was speaking to my computer, telling it in natural language what I wanted it to do, and it was able to just do it. It did (and still does) feel like a completely new form of input, a new way to control my machine. I have misgivings about using AI in this way, but I still think this is a great tool for sufficiently low-level tasks. I’m waiting eagerly for the day that I can spin up a local LLM that can perform this function as well as Claude Code does.
I'm as anti-genAI as it gets. And yet, this past month, I have used generative coding to complete a project. It works. I hated making it.
These days, Wandering Thoughts has some hacked together HTTP request rate limits. They don't exist for strong technical reasons; my blog engine setup here can generally stand up to even fairly extreme traffic floods (through an extensive series of hacks). It's definitely possible to overwhelm Wandering Thoughts with a high enough request volume, and HTTP rate limits will certainly help with that, but that's not really why they exist. My HTTP rate limits exist for ultimately social reasons and because they let me stop worrying and stop caring about certain sorts of abuse.
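The post doesn't show its "hacked together" implementation, but the usual shape of a per-client HTTP rate limit is a token bucket. Here is a minimal illustrative sketch in Python (class and parameter names are ours, not the blog's):

```python
import time
from collections import defaultdict


class RateLimiter:
    """Minimal per-client token-bucket rate limiter (illustrative sketch,
    not the blog's actual implementation)."""

    def __init__(self, rate, burst):
        self.rate = rate    # tokens refilled per second
        self.burst = burst  # maximum bucket size (allowed burst)
        self.tokens = defaultdict(lambda: burst)
        self.last = defaultdict(time.monotonic)

    def allow(self, client_ip):
        now = time.monotonic()
        elapsed = now - self.last[client_ip]
        self.last[client_ip] = now
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens[client_ip] = min(
            self.burst, self.tokens[client_ip] + elapsed * self.rate
        )
        if self.tokens[client_ip] >= 1:
            self.tokens[client_ip] -= 1
            return True  # serve the request
        return False     # reject with e.g. HTTP 429
```

A limiter like `RateLimiter(rate=1, burst=5)` lets each client burst five requests, then throttles them to one per second, which matches the stated goal: not surviving floods (the engine already does), but quietly capping abusive clients.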
One of the biggest problems with measuring AI progress is the ambiguity of measuring intelligence itself.
AGI is treated as a milestone we have yet to cross, but there is no central definition of AGI.
Depending on who you ask, AGI is achieved when a system:
- can fool humans into thinking it is one of them, in other words, pass a Turing Test
- demonstrates creativity (Springer)
- can develop new skills (DeepMind)
- solves unfamiliar tasks (DeepMind)
- is generally capable across domains (IBM)
- is superior to humans in intelligence (Scientific American)
- outperforms humans economically (OpenAI Charter)
- can independently solve complex problems without human oversight (DeepMind)
Even with the lack of consensus, I can confidently say we have AGI, because the criteria above have been met.
Retrieval-Augmented Generation (RAG) has become the dominant paradigm for grounding Large Language Model (LLM) agents in domain-specific knowledge. The standard approach requires selecting an embedding model, designing a chunking strategy, deploying a vector database, maintaining indexes, and performing approximate nearest neighbor (ANN) search at query time. We argue that for domain-specific knowledge grounding --- where the vocabulary is predictable and the corpus is bounded --- this entire stack is unnecessary. We present Knowledge Search, a two-layer retrieval system composed of (1) grep with contextual line windows and (2) cat of pre-structured fallback files. Deployed in production across 20 specialized LLM agents serving three knowledge domains (Traditional Chinese Medicine, Christian spiritual classics, and U.S. civics), our approach achieves 100% retrieval accuracy with sub-10ms latency, zero preprocessing, zero additional memory footprint, and zero infrastructure dependencies.
Scientists and educators are concerned about students using artificial intelligence to shortcut their learning. But there are also opportunities, especially when it comes to teaching neuroscience students how to code.
Visualizing machine learning one concept at a time.
The open source AI coding agent
Free models included, or connect any model from any provider, including Claude, GPT, Gemini, and others.
AI agents running research on single-GPU nanochat training automatically.
A service for Docker that can suggest swaps between cryptocurrencies when it predicts a rise, earning arbitrage over the long term, black swans aside.