I’ll try to synthesise LLM research elsewhere. This is where I keep ephemeral notes and links, continuing my habit from 2024.
1 The year I finally install local LLMs
1.1 …via Ollama for autonomous LLM inference
Useful guides:
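Whichever guide you follow, the basic loop is short. A minimal sketch, using llama3 purely as an example model tag (swap in whatever you actually pull):

```bash
# Install Ollama from https://ollama.com first, then:
ollama pull llama3                                     # download a model into the local store
ollama run llama3 "Explain the GIL in two sentences"   # one-off prompt from the shell
# `ollama serve` exposes a local HTTP API on port 11434 if it isn't already running
```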
1.2 …via Simon Willison
He develops LLM: a CLI utility and Python library for interacting with Large Language Models.
If you like command-line wizardry, this is neat.
It has powerful tricks, such as Apple Silicon acceleration, and this kind of stunt: pillow.md
```bash
git clone https://github.com/python-pillow/Pillow
cd Pillow
files-to-prompt -c . -e .py -e c -e h | \
  llm -m gemini-2.0-pro-exp-02-05 \
  -s 'Explain how Pillow interacts with the Python GIL - include example code snippets as part of your explanation'
```
Together, these produce an incredible CLI text-processing pipeline. Watch out: it can get quite expensive.
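For more everyday use, the pattern is roughly as follows (a sketch; the plugin and model names are examples rather than recommendations):

```bash
llm keys set openai                          # store an API key for a hosted backend
llm "Three ways to profile a Python CLI"     # ad-hoc prompt against the default model
cat notes.md | llm -s "Summarise this"       # pipe anything in, with a system prompt
llm install llm-ollama                       # plugin that exposes local Ollama models
llm -m llama3 "Same question, but locally"   # pick a model with -m
```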
See also:
- My LLM codegen workflow atm | Harper Reed’s Blog
- yamadashy/repomix: 📦 Repomix (formerly Repopack) is a powerful tool that packs your entire repository into a single, AI-friendly file. Perfect for when you need to feed your codebase to Large Language Models (LLMs) or other AI tools like Claude, ChatGPT, DeepSeek, Perplexity, Gemini, Gemma, Llama, Grok, and more.
- Aider - AI Pair Programming in Your Terminal (see the sketch after this list)
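Of these, Aider is the one that lives in the terminal like the rest of this section. Getting started looks roughly like this (a sketch; the model flag and file names are illustrative):

```bash
pip install aider-chat
cd my-project
export OPENAI_API_KEY=sk-...                          # or a key for whichever provider --model points at
aider --model gpt-4o-mini app.py tests/test_app.py    # chat with (and let it edit) these files in git
```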
1.3 …via LM Studio
A GUI option:
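It also runs an OpenAI-compatible local server, so scripts can talk to it too. A hedged sketch, assuming the default port (1234) and whichever model is loaded in the GUI:

```bash
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "local-model",
        "messages": [{"role": "user", "content": "Explain the GIL in one paragraph"}]
      }'
```

The model field should match whatever identifier LM Studio reports for the loaded model.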
1.4 Virtually via proxy
2 Moar automation
OpenInterpreter/open-interpreter: A natural language interface for computers.
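That is, it lets a model write and execute code on your machine, step by step. A minimal sketch of getting it going (the model flag is only an example; by default it asks before running anything):

```bash
pip install open-interpreter
interpreter --model gpt-4o-mini   # or `interpreter --local` to point it at a local model
```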
3 What happened previously
4 Economics
5 DeepSeek and the cheapening of LLMs
DeepSeek deep dive:
- The impact of competition and DeepSeek on Nvidia
- deepseek-ai/awesome-deepseek-integration
- GRPO Trainer
- DeepSeek V3 and the cost of frontier AI models
- DeepSeek FAQ – Stratechery by Ben Thompson
- 2. Implement Reward Function for Dataset — veRL documentation
- deepseek-ai/DeepSeek-R1
- huggingface/open-r1: Fully open reproduction of DeepSeek-R1
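The distilled R1 variants are small enough to slot straight into the Ollama setup above. A sketch (the model tag is from memory; check the Ollama library for current names and sizes):

```bash
ollama pull deepseek-r1                                         # pulls one of the distilled sizes by default
ollama run deepseek-r1 "Walk me through GRPO at a high level"
```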
6 Using AI Agents
I seem to be doing a lot of it.
See AI Agents for more.