Why does AI tell you to use Terminal so much?
ingve
35 points
63 comments
March 11, 2026
Related Discussions
- Why developers using AI are working longer hours birdculture · 62 pts · March 07, 2026 · 49% similar
- Hyperlinks in Terminal Emulators nvahalik · 17 pts · March 13, 2026 · 49% similar
- AI Made Writing Code Easier. It Made Being an Engineer Harder saikatsg · 380 pts · March 01, 2026 · 49% similar
- Training students to prove they're not robots is pushing them to use more AI PretzelFisch · 148 pts · March 07, 2026 · 48% similar
- AI overly affirms users asking for personal advice oldfrenchfries · 585 pts · March 28, 2026 · 48% similar
Discussion Highlights (20 comments)
dkdbejwi383
Because it’s text.
theshrike79
Because terminal commands are the only way to automate actions on an OS. Clicking through UIs is not it.
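A hypothetical illustration of the point: the kind of batch task that is one loop in the shell but dozens of clicks in a GUI (file names and the 7-day cutoff are made up for the example):

```shell
#!/bin/sh
# Compress every *.log in the current directory older than 7 days
# and move it into archive/ -- trivially scriptable, painful by mouse.
mkdir -p archive
find . -maxdepth 1 -name '*.log' -mtime +7 -print | while read -r f; do
  gzip "$f" && mv "$f.gz" archive/
done
```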
littlecranky67
Because it was not trained on screenshots or real rendered computer UIs, but on text. That is also why, in my experience, LLMs suck at describing click paths and are less helpful for UI development: they never really "see" the result of the code, as in rendered HTML output.
xnorswap
The main reason I wouldn't tell someone command line options is that I'd worry either I'd make a mistake and mix something up, or the person I'm helping would. UIs have better visual feedback for "Am I about to do the right thing?". But with the AI, there's a good chance it has it correct, and a good chance it'll just be copy/pasted or even run directly. So the risk is reduced.
Markoff
because GUIs can vary over time (or across Linux distros) much more than terminal commands. Edit: ChatGPT recently talked me through a Linux Mint installation on two old laptops I have at home, where Mint didn't detect the existing Windows installation (which I wanted to keep). I don't think anyone on Reddit or elsewhere would have been as fast or patient as ChatGPT. It was mostly done with terminal commands; one computer was easy, the other already had 4 partitions and FAT32, so it took longer.
mgaunard
The real question is why wouldn't you prefer the terminal way over silly GUIs?
kleiba
> Few understand the commands used... By "few" you mean "few Gen-Zs?"
dr_dshiv
These days, command line offers way better usability and accessibility (because Claude Code can do it). Whenever I have to use a GUI I’m like uuughgh… Am I the only one who thinks like this?
magnio
I am not the most ardent supporter of LLM, but the whole article reads like a critique of macOS idiosyncrasies and its aversion to CLI and text format. Why does macOS tell you to use the GUI so much? Sure, GUI is more accessible to the average users, but all the tasks in the article aren't going to be done by the average user. And for the more technical users, having to navigate System Settings to find anything is like Dr. Sattler plunging her arms into a pile of dinosaur dung.
randomtools
Spending time debugging through a UI, especially while AI is growing exponentially, is quite a waste of time.
Hard_Space
This problem is chronic with GPT[N] dealing with a Windows environment. I have to constantly remind it to prefer the GUI option, though nothing really works. I don't know if agents make use of screenshots the way older automation routines have always done, but increasing use of that kind of data would help LLMs progress beyond CLI-addiction.
ZiiS
Why does software with a text interface tell you to use a text interface?
dewey
This seems like one of those "why does my bad prompt give me bad results" kinds of complaints. Just tell the LLM something like "in your reply, prefer the macOS GUI and write the instructions for a non-technical user, similar to the Apple help pages" and the output will look much different. It won't be as fast to work through them as just pasting some commands, but if that's what the user prefers…
fyredge
TFA is short and only shows a single example, but it illuminated something for me. "LLM" is a misnomer. These are Large Text Models, or better yet, Large Token Models. The appearance of language is a result of embedding words or parts of words into tokens, then identifying the relations between tokens via machine learning. This further solidifies my view that LLMs will not achieve AGI, and it refutes the oft-repeated pop-sci argument that human brains predict the next word in a sentence just like LLMs do.
sunaookami
The main point of the article is not what the title claims but the fact that ChatGPT sucks big time for troubleshooting since even the terminal commands are nonsense.
llarsson
Because it's been trained on decades of StackOverflow and forum posts. And because while some command line tools go in and out of fashion, quite a lot are very stable, so their use will show up all the time in the training material. Since it's all statistics under the LLM hood, both of those cause proven CLI tools to have strong signals as being the right answer.
shevy-java
My initial reaction was "because AI is so stupid". That said, I use the terminal all the time. It is the primary user interface for me to get computers to do what I want; in the most basic sense I simply invoke various commands from the command line, often delegating to self-written ruby scripts. For instance "delem" is my command-line alias for delete_empty_files (kept in delete_empty_files.rb). I have tons of similar "actions"; old-school UNIX people may use command-line flags for this. I also support command-line flags, of course, but my brain works best when I keep everything super simple at all times. So I actually don't disagree with AI here; the terminal is efficient. I just don't need AI to tell me that - I knew that already. So AI may still be stupid. It's like a young, over-eager kid, but without a real ability to "learn".
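For readers curious what a delete-empty-files helper amounts to: the ruby script itself isn't shown in the comment, but a rough shell sketch of the same idea would be:

```shell
#!/bin/sh
# Sketch of a delete-empty-files action (the commenter's actual
# delete_empty_files.rb is not shown; this is only an approximation).
# Removes zero-byte regular files under the current directory.
find . -type f -empty -delete
```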
dude250711
It wants us to travel back to the 1980s. The simpler times.
kolinko
The blog author uses the free version of ChatGPT (not logged in on the screenshot), so this is really about the previous generation of models. It would be nice if this were mentioned transparently at the beginning of the article. I mean, new models also tell you to use the terminal, but their quality is incomparable to what the author is using.
ChrisMarshallNY
His guess is as good as mine as to "why," but the results can be terrible. As noted, terminal commands can be ridiculously powerful, and can result in messy states. The last time I asked an LLM for help was when I wanted to move an automounted disk image from the internal disk to an external one. If you do that, when the mount occurs is important. It gave me a bunch of really crazy (and ineffective) instructions to create login items with timed bash commands, etc. To be fair, I did try to give it the benefit of the doubt, but each time its advice pooched, it would give even worse workarounds. One of the insidious things was that it never instructed me to revert the previous attempt, like most online instruction posts do. This resulted in one attempt colliding with the previous ineffective one, when I neglected to do so on my own judgment. Eventually, I decided the fox wasn't worth the chase, and just left the image on the startup disk. It wasn't that big, anyway. I made sure to remove all the litter from the LLM debacle. Taught me a lesson. > "A man who carries a cat by the tail learns something he can learn in no other way." -Mark Twain