Vitalik Buterin – "My self-sovereign / local / private / secure LLM setup"
derrida
25 points
1 comment
April 02, 2026
Related Discussions
Found 5 related stories in 59.3ms across 3,471 title embeddings via pgvector HNSW
- Things I Think I Think... Preferring Local OSS LLMs zdw · 43 pts · April 02, 2026 · 54% similar
- How I write software with LLMs indigodaddy · 69 pts · March 16, 2026 · 54% similar
- Reliable Software in the LLM Era mempirate · 102 pts · March 12, 2026 · 49% similar
- I'm Not Consulting an LLM birdculture · 52 pts · March 08, 2026 · 49% similar
- I don't use LLMs for programming ms7892 · 68 pts · March 12, 2026 · 46% similar
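The related-story lookup above (a nearest-neighbor search over title embeddings using a pgvector HNSW index) can be sketched roughly as follows. This is a minimal illustration, not the site's actual schema: the table and column names, the embedding dimension, and the choice of cosine distance are all assumptions.

```sql
-- Hypothetical schema; names and dimension are assumptions, not from the site.
CREATE TABLE stories (
    id        bigserial PRIMARY KEY,
    title     text NOT NULL,
    embedding vector(384)  -- pgvector column; dimension depends on the embedding model
);

-- HNSW index for fast approximate nearest-neighbor search under cosine distance.
CREATE INDEX ON stories USING hnsw (embedding vector_cosine_ops);

-- Top-5 titles most similar to a query embedding :q.
-- pgvector's <=> operator is cosine distance, so 1 - distance
-- gives the "% similar" figures shown above.
SELECT title, 1 - (embedding <=> :q) AS similarity
FROM stories
ORDER BY embedding <=> :q
LIMIT 5;
```

The HNSW index trades exactness for speed, which is how a scan over thousands of embeddings can return in tens of milliseconds.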
Discussion Highlights (1 comment)
jononor
Have been playing with Qwen3.5 35B. It runs nicely on an RTX 5060 Ti, though I would have liked a bit higher throughput (a 5080/5090 would do). It is seemingly close-but-not-quite-there for code generation / agentic coding. So I am actually quite hopeful that in a few years' time, using local LLM models will be quite feasible.