Ask HN: Is anyone still programming the old-fashioned way (without LLMs)?

37 points by philbo 2 days ago

There's so much content about AI-assisted programming now that I'm genuinely curious to hear from people who aren't using LLMs in their regular workflow.

I've tried Cursor and Claude Code and have seen them both do some impressive things, but using them really sucks the joy out of programming for me. I like the process of thinking about and implementing stuff without them. I enjoy actually typing the code out myself and feel like that helps me to hold a better mental model of how stuff works in my head. And when I have used LLMs, I've felt uncomfortable about the distance they put between me and the code, like they get in the way of deeper understanding.

So I continue to work on my projects the old-fashioned way, just me and vim, hacking stuff at my own pace. Is anyone else like this? Am I a dinosaur? And is there some trick for the mental model problem with LLMs?

fzwang 2 days ago

We've mostly banned the use of AI coding assistants, with the exception of certain uses, for junior-level devs/engineers. Essentially, they need to demonstrate that their use case fits with what LLMs are good at (i.e. in-distribution, tedious, verifiable tasks).

Anecdotally, what we've found is that those using AI assistants show superficial improvements in productivity early on, but they learn at a much slower rate and their understanding of the systems is fuzzy. It leads to lots of problems down the road. Senior folks are also susceptible to these effects, but at a lower level. We think that's because most of their experience is old-fashioned "natty" coding.

In a way, I think programmers need to do natty coding to train their brains before augmenting/amputating them with AI.

  • credit_guy 2 days ago

    I think you should reconsider.

    LLMs are here to stay. Banning them in your organization is like banning IDEs, because, you know, real programmers use plain text editors and print statements.

    Yes, junior programmers will take a bit longer to learn. But assuming they will always rely on LLMs is a bit dismissive. I grew up in Eastern Europe, without internet, and basically without TV either. All I had was books, and I read lots of them. When I came to America, I saw that nobody around me had read anything close to the number of books I had read, and I felt a bit smug. But I got over it: I realized that people's brains still mature even if their knowledge consumption comes in the form of movies, or the internet, or, these days, TikTok or LLMs. Yes, maybe being able to read Umberto Eco novels will always be beyond the reach of the TikTok generation, but then reading Cervantes or Cicero in the original was always beyond my reach. I'm still living a fulfilling life even without first-hand knowledge of the classics, and it's entirely possible the LLM generation could become decent programmers without internalizing Kernighan and Ritchie.

    • fzwang a day ago

      I agree in part with what you're saying. I do think LLMs are here to stay and will be part of most programmers' toolkits. What my team and I are trying to figure out is where they're helpful, where they break, and what the long-term consequences are. From an accountability perspective, we do allow senior folks to pick the tools they use, including LLMs. But they also need to be responsible for the outcomes. The consensus so far is that the fuzziness of understanding that LLM tools introduce causes harm in the long run, in ways that are hard to trace back. It's like radiation and cancer. So if someone is using them, they'd better have a good rationale.

      Your analogy with book reading is very interesting, although I interpret it differently. It seems like you enjoyed reading longform books, but the environment you moved to (the US) is not reading-heavy (it's much more visual, like TV/phones). The skills you developed were not as valued in this new environment. The issue is the skillset-environment mismatch. If you had moved to a reading-rich culture/community, you'd have appreciated your past reading experiences.

      In software engineering, I think the skillset is more like longform writing, where you have to build the mental model of the story and also be able to dig down to individual words. The more experience you have building these models from scratch and learning from other good builders, the better off you will be. People can certainly get by and "coast" on just using outputs from LLMs, the same way that there will be many LLM storywriters. But I'm concerned it'll put a ceiling on what they can accomplish. They are not developing the skillset needed at a higher level. They're stuck in-distribution, and never venture out. They may not even know what "out" is.

      I guess some programmers are OK with that. And some orgs may be perfectly fine with LLM-based engineering (i.e. think of how many dysfunctional engineering teams there are; is adding LLMs that much worse?). They are willing to risk the tradeoffs. But they may later discover that it's a shrinking pool with a lot of newcomers, and that to advance their craft and profession, they may have to write some code from scratch and read Kernighan and Ritchie.

    • archagon 2 days ago

      Not OP, but I view LLMs in the same broad category as Electron: a cost-cutting measure (based on genuinely cool technology) that leads directly to enshittification, unless very carefully cultivated. An expert may use them to accelerate their work; a novice will pump out unscrutinized PRs riddled with garbage code.

      Makes perfect sense to me to keep juniors far away from that stuff.

      • credit_guy 2 days ago

        > a novice will pump out unscrutinized PRs riddled with garbage code.

        They will. But not forever. Presumably their PRs need to be approved by someone more senior, and they'll be told they've made some mistakes, and learn from that. Either they learn, or at some point, they'll lose their job. But that's how it goes with or without LLMs.

        • fzwang a day ago

          We've experienced this scenario a few times, where someone used AI-generated code that had significant bugs or mistakes.

          - Telling them about the mistake is not as helpful as you think. It's the same as taking an exam, getting something wrong, and looking at the answer key immediately. You feel like you learned something, but it's not as strong, and you're more likely to make the same mistake in the future. Doing things by hand, although painful, is still a strong check on your understanding of the problem.

          - It's very frustrating for the person reviewing the code. Reading someone else's code is not easy, especially when they can't articulate what they were aiming for, or there's a big discrepancy between what they think they wrote and what's actually written. In many cases, the reviewer is thinking "Why did I even bother with this? I should've just vibe-coded this myself. At least I know what should've been done. This isn't even worth it for mentoring purposes, because they didn't actually learn anything."

          - Using threat/punishment-based incentives creates a lot of bad vibes and a bad culture. People become less likely to talk about mistakes and spend more time thinking about how to hide them.

          - Eventually, the consensus converges on the position that the best way to learn is to do things manually, and it's better when junior folks don't rely on these tools from the beginning. It's important to explain to them why, or else it can seem hypocritical when more experienced people are allowed to use LLMs more freely.

        • archagon 2 days ago

          Having worked in corporate, the very last thing I want to be doing is reviewing code that no one actually understands.

          If a PR contributor uses AI, they need to be able to intelligently discuss and justify every line of code. Otherwise, I'm the one having to reason about all the havoc it's going to wreak.

hotsauceror 2 days ago

You are not a dinosaur. I would argue that the great majority of engineers at our org do it the 'old fashioned' way.

My own experience with LLM-based coding has been wasted hours of reading incorrect code for junior-dev-grade tasks, despite multiple rounds of "this is syntactically incorrect, you cannot do this, please re-evaluate based on this information" / "Yes, you are right, I have re-evaluated it based on your feedback", only for it to do the same thing again. My time would have been better spent either 1) doing this largely boilerplate task myself, or 2) assigning and mentoring a junior dev to do it, as they would only have required maybe one round of iteration.

Based on my experience with other abstraction technologies like ORMs, I look forward to my systems being absolutely flooded with nonperformant garbage merged by people who don't understand either what they are doing, or what they are asking to be done.

  • cyanydeez 2 days ago

    20% of the time, it works 99% of the time.

sifuhotman2000 2 days ago

I see new engineers adopting AI much faster than the older ones who have been doing all the coding themselves. I very often see senior engineers turning off their Copilot after a week out of frustration because it doesn't work the way they want it to, but they aren't even trying; they expect it to work 100% on the first try, I guess. They spend months learning new technologies to the best of their ability, but they won't give AI a chance? They think using AI will make them less skilled, but that's not true; it will make them more productive.

  • bluefirebrand 2 days ago

    > They think using AI will make them less skilled, but it is not true, it will make them more productive.

    Less skilled and more productive can both be true

bluefirebrand 2 days ago

My company recently made it mandatory to use Cursor and my motivation has cratered

I'm looking into alternatives because I have zero interest in having LLM tools dictated to me because some MBA exec is sold on the hype

I find it impossible to get into flow with the autocomplete interrupting me constantly, and the code it generates in chat mode sucks

PaulShin a day ago

"Am I a dinosaur?" - I think you're asking the most important question for our craft in 2025. Thank you.

I lead a team building Markhub, an AI-native workspace, and we have this debate internally all the time. Our conclusion is that there are two types of "thinking" in programming:

"Architectural Thinking": This is the joy you're talking about. The deep, satisfying process of designing systems, building mental models, and solving a core problem. This is the creative work, and an AI getting in the way of this feels terrible. We agree that this part should be protected.

"Translational Thinking": This is the boring, repetitive work. Turning a clear idea into boilerplate code, writing repetitive test cases, summarizing a long thread of feedback into a list of tasks, or refactoring code. This is the work we want to delegate.

Our philosophy is that AI should not replace Architectural Thinking; it should eliminate Translational Thinking so that we have more time for the joyful, deep work.

For your mental model problem, our solution has been to use our AI, MAKi, not to write the core logic, but to summarize the context around the logic. For example, after a long discussion about a new feature, I ask MAKi to "summarize this conversation and extract the action items." The AI handles the "what," freeing me up to focus on the "how."

You are not a dinosaur. You are protecting the part of the work that matters most.

toast0 2 days ago

What do you want to hear about? Doing things the same old way continues to work in the same old way. I may be a dinosaur, but I hear La Brea is nice this time of year.

I've tried new things occasionally, and I keep going back to a text editor and a shell window to run something like Make. It's probably not the most efficient process, but it works for everything, and there's value in that. I have no interest in a tool that will generate lots of code for me that may or may not be correct, which I'll then have to go through with a fine-tooth comb to check; I can personally generate lots of code that may or may not be correct. And if that fails, I have run some projects as copy-paste snippets from Stack Overflow until they worked. It's not my idea of a good time, but I think it was better than spending the time to understand the many layers of OSX when all I wanted to do was get a pixel value from a point on the screen into AppleScript, and I didn't want to do any other OSX work ever (and I haven't).

el_magnificus 2 days ago

Agreed that it's frustrating and not as satisfying to work using LLMs. I found myself on a plane recently without internet, and it was great coding with no LLM access. I feel like we will slowly figure out how to use them in a reasonable way, and it will likely involve doing smaller and more modular work. I disabled all tab auto-suggestions because I noticed they throw me off track all the time.

JohnFen 2 days ago

A very large majority of the devs that I know and work with are still doing it the old way, or at least 90% the old way.

  • vouaobrasil 2 days ago

    That is very interesting... I would not have guessed that.

soapdog 2 days ago

I don’t use LLMs either. I find them unethical and cumbersome.

  • salawat 2 days ago

    I won't touch them due to the ethical taint. However much I disagree with IP laws deep down, I cannot condone the actions that went into these models' creation.

  • vouaobrasil 2 days ago

    I agree. There is the ethical component, not just because of the way they were trained, but because the big tech companies that leverage them most efficiently are primarily trying to gain an unfair proportion of resources for themselves, so using them is participating in a losing game.

torham 2 days ago

Most of the engineers I know have played around with LLMs but are still doing their work without one. Myself, I sometimes pop into the Gemini web app to ask a question if search isn't going well, and it helps about 25% of the time.

orionblastar 2 days ago

I have been thinking of writing ebooks on retrocomputing legacy software like PowerBASIC 3.5, etc.: run them in DOSBox/X and create DOS programs. People still use DOS but have no idea how to write programs for it. All of this software was around way before LLMs came out.

jurisjs 2 days ago

Yes, because AI can really ruin your design philosophy for your approach to a problem you've been solving for a decade, when you're trying a different way.

rsynnott 2 days ago

I used Copilot for about a week before turning it off out of frustration; immensely distracting, and about 50% of what it wanted to autocomplete was simply wrong.

jeremy_k 2 days ago

I do, primarily when I'm refactoring something. In those scenarios, I know exactly what I want to change, and the outcome is code in a style that I feel is most understandable. I don't need anything suggesting changes (I actually don't have tab completion enabled by default; I find it too distracting, but that is a different topic) because the changes already exist in my head.

toldyouso2022 2 days ago

I've been without work for over a year now, so I'm still programming the classic way and using AI chats in the browser. When I work again, I'll use them. I think the best thing to do is to separate programming for work and programming for pleasure.

zy5a59 2 days ago

I feel the same way. Vibe coding has taken away the joy of programming for me, but there’s no denying that it has indeed improved my efficiency. So now, it depends on the situation—if it’s just for fun, I’ll code it myself.

  • vouaobrasil 2 days ago

    Even when it comes to a job, sacrificing enjoyment for efficiency can often make life less fun.

sigbottle 2 days ago

Our company forbids AI, although I see my manager frequently popping into ChatGPT for syntax stuff, and I lowkey use the Google Search AI functionality to bypass that requirement (not brazen enough to just use ChatGPT)

yummypaint 2 days ago

Using an LLM to directly generate code makes writing code feel like reviewing code, and thereby kills the joy in solving problems with software. I don't think people trying to learn are doing themselves any favors either.

I work with grad students who write a lot of code to analyze data. There is an obvious divide in comprehension between those who genuinely write their own programs vs those who use LLMs for bulk code generation. Whether that is correlation or causation is of course debatable.

In one sense, blindly copying from an LLM is just the new version of blindly copying from Stack Overflow and forum posts, and it seems to be about the same fraction of people either way. There isn't much harm in reproducing boilerplate that's already searchable online, but in that situation it puts orders of magnitude less carbon in the atmosphere to just search for it traditionally.

krapp 2 days ago

There are dozens of us!

vouaobrasil 2 days ago

I am a part-time coder, in that I get paid for coding and some of my code is actually used in production. I don't use LLMs or any AI in my coding, whatsoever. I've never tried LLM or AI coding, and I never will, guaranteed. I hate AI.

I agree with you, 100%. I like typing out code by hand. I like referring to the Python docs, and I like the feeling of slowly putting code together and figuring out the building blocks, one by one. In my mind, AI is about efficiency for the sake of efficiency, not for the sake of enjoyment, and I enjoy programming.

Furthermore, I think AI embodies a model of the human being as a narrowly-scoped tool, converted from creator into a replaceable component whose only job is to provide conceptual input into design. It sounds good at first ("computers do the boring stuff, humans do the creative stuff"), but, and it's a big but: as an artist too, I think it's absolutely true that the creative stuff can't be separated from the "boring" stuff, and when looked at properly, the "boring" stuff can actually become serene.

I know there's always the counterpoint: what about other automations? Well, I think there is a limit past which automations give diminishing returns and become counterproductive, and therefore we need to be aware of all automations. But AI is the first sort of automation that is categorically always past the point of diminishing returns, because it targets exactly the sort of cognitive work that we should be doing ourselves.

Most people here disagree with me, and frequently downvote me too on the topic of AI. But I'll say this: in a world where efficiency and productivity have become doctrine, most people have also been converted into only thinking about the advancement of the machine, and have lost the soul needed to enjoy that which is beyond mere mental performance.

Sadly, people in the technical domain often find emotional satisfaction in new tools, and that is why anything beyond the technical is often derided by those in tech, much to their disadvantage.

eadwu 2 days ago

I don't use any AI code editor. Not because it isn't useful, but because the user experience of using it is so bad. I typically already have the solution at hand: I don't need an AI to give me an answer, I need it to implement the solution I have.

But not using AI at all is also idiotic right now; at the very least you should be using it for autocomplete. In the _vast_ majority of cases, any current leading LLM will return _far more_ than not using it (within the scope of autocomplete).

kypro 2 days ago

Surely you don't find writing boilerplate fun though?

Coding agents still give you control (at least for now), but are like having really good autocomplete. Instead of using copilot to complete a line or two, using something like Cursor you can generate a whole function or class based on your spec then you can refine and tweak the more nuanced and important bits where necessary.

For example, I was doing some UI stuff the other day. In the past it would have taken a while just to get a basic page layout together when writing it yourself, but with a coding assistant I generated a basic page by asking it to use an image mock-up, a component library, and some other pages as references. Then I could get on and do the fun bits of building the more novel parts of the UI.

I mean if it's code you're working on for fun then work however you like, but I don't know why someone would employ a dev working in such an inefficient way in 2025.

  • bendmorris 2 days ago

    You can generate boilerplate without AI and whenever there's a significant amount of boilerplate needed there should be a (non-AI) generation tool to go with it. Deterministic code generation is a lot easier to have confidence in than LLM output.
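
    To make "generation tool" concrete, a throwaway template script is usually enough. This is just an illustrative sketch with made-up model and function names, not from any real project:

        # gen_stubs.py - deterministic boilerplate generator (illustrative sketch)
        from string import Template

        STUB = Template('''\
        def ${op}_${model}(db, ${arg}):
            """Auto-generated ${op} stub for ${model}."""
            raise NotImplementedError("implement ${op}_${model}")
        ''')

        MODELS = ["user", "invoice"]  # hypothetical domain models
        OPS = {"create": "payload", "get": "item_id", "delete": "item_id"}

        if __name__ == "__main__":
            # Emit one stub per (model, operation) pair; redirect to a file as needed.
            for model in MODELS:
                for op, arg in OPS.items():
                    print(STUB.substitute(op=op, model=model, arg=arg))

    Same output on every run, trivially diffable, and the only thing anyone has to review is the template itself.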

    >I don't know why someone would employ a dev working in such an inefficient way in 2025.

    It amazes me how fast the hype has taken off. There is no credible evidence that, for experienced devs, working with AI coding tools makes you significantly more productive.

    • usersouzana 2 days ago

      Many devs say they are more productive now. That's the "evidence".

      • bendmorris 2 days ago

        Devs (like yourself) might generate scaffolding for a greenfield project very quickly and be amazed and claim they're more productive, but I don't think that is evidence that an experienced developer will actually be more productive.

        Honestly, project scaffolding is such a small part of the job. I spend a lot more time reading, designing, thinking critically about, reviewing changes to, and generally maintaining code than I do creating greenfield projects or writing boilerplate. For all of these tasks having actually written the code myself gives me an advantage. I don't believe today's tools are a net positive.

  • philbo 2 days ago

    > Surely you don't find writing boilerplate fun though?

    Of course not. So if I'm faced with some boilerplate, I try to refactor it away so it's less boilerplatey. Perhaps I'm lucky, but mostly this seems to work; I don't often find myself writing boilerplate.
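
    As a made-up example of the kind of refactor I mean (hypothetical names, nothing from a real codebase): instead of copy-pasting the same try/log/return dance into every handler, pull it into one small wrapper and the boilerplate stops existing.

        import functools
        import logging

        def handles_errors(func):
            """Wrap a handler so failures are logged and reported uniformly."""
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                try:
                    return {"ok": True, "result": func(*args, **kwargs)}
                except Exception:
                    logging.exception("handler %s failed", func.__name__)
                    return {"ok": False, "result": None}
            return wrapper

        @handles_errors
        def get_report(report_id):
            ...  # only the interesting part goes here; the wrapper handles the rest

    Once the repeated pattern lives in one place, there's nothing left for an LLM (or me) to type.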

    > I don't know why someone would employ a dev working in such an inefficient way in 2025

    Am I working inefficiently? I'm not sure. How much time does the typing part of programming actually take up? I guess it varies, but it's definitely less than 50% for me. Thinking/designing/communicating/listening take most of my time. The typing part is not a bottleneck.

  • JohnFen 2 days ago

    > Surely you don't find writing boilerplate fun though?

    The majority of the code I write is not boilerplate, and writing the boilerplate myself is useful to me.

  • vouaobrasil 2 days ago

    > Coding agents still give you control (at least for now), but are like having really good autocomplete.

    And I think that's the problem. I think autocomplete itself is a bad thing. If one has autocomplete, one is more likely to type stuff that is less worth typing.

  • bluefirebrand 2 days ago

    > Surely you don't find writing boilerplate fun though

    No, but I don't find debugging the LLM boilerplate that is at best 50-80% correct very fun either

    I have better ways to automate boilerplate than using LLMs

Joel_Mckay 2 days ago

If you mean "AI" in the sense of reasoning LLMs, then they are generally prohibited given the industrial-scale plagiarism, security leaks, and logical inaccuracies.

For the philosophical insights into ethics... we may turn to fiction =3

https://www.youtube.com/watch?v=X6WHBO_Qc-Q