I'm Not Consulting an LLM

birdculture 52 points 71 comments March 08, 2026
lr0.org

Discussion Highlights (18 comments)

mark_l_watson

I understand the author’s sentiment, but I would like to give a counterexample: I like to read philosophy, and after I read a passage and think about it, I find it useful to copy the passage into a decent model and ask for its interpretation, or, if it is something old, ask about word choice or meaning. I realize that I may not be getting perfect information, but LLM output gives me ideas that are a combination of live web searching and whatever innate knowledge the LLM holds in its weights.

Another counterexample: I have never found runtime error traces from languages like Haskell and Common Lisp to be that clear. If the error is not clear to me, sometimes using a model gets me past it quickly.

All that said, I think the author is right that using LLMs should not be an excuse to not think for oneself.

zaphirplane

I was hoping for a deeper article

NitpickLawyer

Yawn. Just a post++ about high-horse attitudes regarding "muh expertise". And yet the top of the top experts in their fields (Terence Tao, Karpathy, hell, even Linus) are finding ways to make them useful. That's the crux, imo. If you can't find a way to make these tools useful for you, you are the problem, not the LLMs. There's something there, even if currently not much, but there's something there for everyone at this point.

simianwords

If people used LLMs more, we would have fewer instances of misinformation. Lots of comments on social media could easily be dispelled by a single LLM search.

tony_codes

Speaking of chance discovery, what a great personal website! I love the daily virtues in the diary section.

jstanley

> “I’m Feeling Lucky” intelligence is optimized for arrival, not for becoming. You get the answer but nothing else (keep in mind we are assuming that it's a good answer). You don’t learn how ideas fight, mutate, or die. You don’t develop a sense for epistemic smell or the ability to feel when something is off before you can formally prove it.

All you're saying is that you can't imagine working on a task that is longer than one Google search. If "I'm Feeling Lucky" works by magic, that doesn't mean your life is free of all searching; it just means you get the answer to each search in fewer steps, which means the overall complexity of tasks you can handle goes up. That's good! It doesn't mean you miss out on the journey of learning and being confused, it just means you're learning and being confused about more complicated things.

test001only

> you had to build a model of the world just to survive the tension?

The world the author is describing currently has LLMs in it. Whether the author likes it or not, they are here to stay. So to build a model of the world, you would still need to consult an LLM, understand how it can give plausible-looking answers, learn how to leverage the tool effectively, and make it part of your toolkit. That doesn't mean you stop reading manuals, books, or blogs. It just means you add LLMs to that list.

perks_12

I google a lot (or rather, Kagi). I loved to explore the web when I was younger. But over time I lost any interest in trying to gather informational bits from increasingly shittier websites designed to have more ads and hide relevant information for as many ad slots as possible. These days I hit the quick answer button inside Kagi more often and just accept that I might have some false information in there. If it is critical to be right, I usually consult primary sources directly anyway.

po1nt

I don't think the argument is correct. A reasoning LLM will check itself and search multiple sources; it's essentially doing the same mental process a human would. Also, consulting multiple LLMs completely breaks this argument.

antonvs

To me this reads like “I don’t want to be able to learn faster.” The downside of the internet is that we get to see people agonizing over their inability to adapt to change.

ChrisMarshallNY

For myself, I’m very much a “results” guy. Have been, for all my career. I’ve been shipping (as opposed to “writing”) software for most of my adult life. People seem to like the stuff I make. I’m currently working on my first major project that incorporates heavy LLM contributions. It’s coming along great.

I started with Machine Code and individual gate ICs, so my knowledge goes way down past the roots. I don’t miss it at all. Occasionally, my understanding of stuff has been helped by that depth of experience, but, for the most part, it’s been irrelevant. It’s a first-stage booster, dropping back into the atmosphere.

I will say that my original training as a bench tech has been very useful, as I’m good at finding and fixing bugs, but a lot of my experience is in the rear-view mirror. I have been routinely googling even the most basic stuff for many years. It hasn’t corroded my intellect (yet), and I’m doing the same kind of thing with an LLM. Not being sneered at by some insecure kid is nice.

cmiles8

I see the SDE pushback on LLMs, but most of it is unfounded. Like any new tool, if used irresponsibly, of course bad things can happen. Most of the backlash from devs seems rooted in:

1. It causes a step change in productivity for those who use it well, and as a result a step change in the expectations on productivity for dev teams. Folks simply expect things to be done faster now, and that’s annoying the folks who have to do the building.

2. It’s removed much of the mystique of dev. When the CEO is vibe coding legit apps on their own, suddenly the SDE team is no longer this mysterious oracle that one can’t challenge or question because nobody else can do what they do. Now everyone can do what they do. Not to the same degree, yes, but it’s completely changed the dynamic, and that’s annoying some devs.

SDEs aren’t going away, but we will likely need fewer moving forward, and the expectations on how long things take have changed forever. Like anything in tech, we’re not going back to the old way, so you either evolve or get cycled out.

I also hear the “LLMs only produce unrecognizable junk that one can’t maintain” angle, but that implies dev teams have been shipping beautiful artwork. Truth is, most dev teams have been shipping undocumented, fragile junk for years. While LLMs occasionally do odd things, in my experience LLM code is actually better documented and structured than what most dev teams produce, and because of that it’s easier to hand off to others than the typical codebase full of hacky workarounds and half-completed documentation.

kolinko

I think the author just doesn't know how to use LLMs well.

> Because what would be missing isn’t information but the experience. And experience is where intellect actually gets trained.

In my experience, LLMs don't cause this effect. You still get to explore a ton of dead ends and whatnot, just at a much higher level.

> You get the answer but nothing else (keep in mind we are assuming that it's a good answer).

On the contrary: you get to ask a ton of follow-up questions easily, something you can't do with books.

> I never so far asked GPT about something that I'm specialized at, and it gave me a sufficient answer that I would expect from someone who is as much as expert as me in that given field.

LLMs are at a junior-to-mid level in any field (and climbing every year), not senior or master. Is that anything new? Their strength is, among other things, in making connections between fields, and in their availability. If you can talk to a specialist in your field who has time 24/7 to discuss ideas with you, that's great, but also highly unusual. If you don't have such a person, a junior-to-mid-level LLM is far better than books alone.

kator

The "calculator ruined the world" argument was actually studied to death once the panic subsided. Large meta-analyses of 50 years of data show it was mostly a non-problem. Students using calculators generally developed better attitudes toward math and attempted more complex problems because the mechanical drudgery was gone. The only real "catch" researchers found was timing. If you introduce them before a kid has "automaticity" (around 4th grade), they never develop a baseline number sense, which makes high-level math harder later on. It's a pretty clean parallel for LLMs. The tool isn't the problem, but using it to bypass the "becoming" phase of a skill usually backfires. If you use an LLM before you know how to structure an argument or a block of code, you're just building on sand.

rsfern

I like this analogy of always choosing “I’m feeling lucky” on Google; I feel like it clarifies a boundary between information retrieval and evaluation that gets blurred by language model summarization. I’ve been frustrated with the LLM summary at the top of Google search results for scientific topics, because often the sources linked don’t actually contain the information the summary is citing them for. Then I have a side quest of finding the right backing literature or deciding the summary was just wrong in the first place.

maplethorpe

> A tool can be efficient and still be intellectually corrosive, not because it lies all the time, but because it lies well enough. Its smoothness hides uncertainty, which is important unless you want intellect-rot.

I keep seeing sentiments like this, but to me they're still very much stuck in the past. We once needed to develop our intellects in order to solve problems. It was a necessary part of the process. Solving a particular problem would flood our brains with dopamine, and we would feel good for achieving our goal, and thus continue to develop our intellect in the hope of achieving a similar rush in the future. Now that we have a machine that can solve our problems for us, intellect just plain isn't necessary anymore. We can solve our problems immediately and skip the roundabout intellect-building process entirely. That's a liberating thing.

carrychains

The same can be said of search engines, encyclopedias, or wikis compared to seeking out books, journals, and other source material. If you don't sit in a library for 8 hours to find the same information on your own, you've missed out on the experience. It's a standard Luddite argument. Tools of any kind that enhance efficiency have always enabled lazy outcomes. It has always been the human's responsibility not only to give their best effort, but to figure out what their best possible effort actually encompasses.

codexetreme

I personally consult LLMs on technical stuff. But for the more nuanced "human" side of things, I feel it gives you answers from both sides. For instance, I wanted to know whether it's better to set up a call with a client in a coffee shop or at their office. Almost all LLMs will give you whatever answer you want based on how much stress you put on one particular answer. I'm not saying that traditional search engines will show you only one-sided results, but you'll see Reddit threads that have a bit of conversation before arriving at some conclusion. You'd read it and go, hmm, this question had XYZ setup and they picked the coffee shop as a great place. My own situation is perhaps (for the sake of argument) not XYZ, so it's better to have the meeting in the client's office. This is a community-driven decision where I have a certain basis if I want to revisit my decision. With an LLM, I find that it will happily just give you one of the two options and keep switching in the middle if you ask it to. That's what makes me a bit skeptical. With technical stuff, there are direct solutions that fit your use case, and you can just use them and move on.
