stevage 3 days ago

I guess we're all trying to figure out where we sit along the continuum from anti-AI Luddite to all-in.

My main issue with vibe coding etc is I simply don't enjoy it. Having a conversation with a computer to generate code that I don't entirely understand and then have to try to review is just not fun. It doesn't give me any of the same kind of intellectual satisfaction that I get out of actually writing code.

I'm happy to use Copilot to auto-complete, and ask a few questions of ChatGPT to solve a pointy TypeScript issue or debug something, but stepping back and letting Claude or something write whole modules for me just feels sloppy and unpleasant.

  • tobr 3 days ago

    I tried Cursor again recently. Starting with an empty folder, I asked it to use very popular technologies that it surely must know a lot about (TypeScript, Vite, Vue, and Tailwind). Should be a home run.

    It went south immediately. It was confused about the differences between Tailwind 3 and 4, leading to a broken setup. It wasn’t able to diagnose the problem but just got more confused even with patient help from me in guiding it. Worse, it was unable to apply basic file diffs or deletes reliably. In trying to diagnose whether this is a known issue with Cursor, it decided to search for bug reports - great idea, except it tried to search the codebase for it, which, I remind you, only contained code that it had written itself over the past half hour or so.

    What am I doing wrong? You read about people hyping up this technology - are they even using it?

    EDIT: I want to add that I did not go into this antagonistically. On the contrary, I was excited to have a use case that I thought must be a really good fit.

    • windows2020 3 days ago

      My recent experience has been similar.

      I'm seeing that the people hyping this up aren't programmers. They believe the reason they can't create software is they don't know the syntax. They whip up a clearly malfunctioning and incomplete app with these new tools and are amazed at what they've created. The deficiencies will sort themselves out soon, they believe. And then programmers won't be needed at all.

      • norir 3 days ago

        Most people do not have the talent and/or discipline to become good programmers and resent those who do. This alone explains a lot of the current argument.

    • cube2222 3 days ago

      Just trying to help explain the issues you've been hitting, not to negate your experience.

      First, you might've been using a model like Sonnet 3.7, whose knowledge cutoff doesn't include Tailwind 4.0. The model should know a lot about the tech stack you mentioned, but it might not know the latest major revisions if they were very recent. If that is the case (you used an older model), then you should have better luck with a model like Sonnet 4 / Opus 4 (or by providing the relevant updated docs in the chat).
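
      To make the Tailwind 3 vs 4 confusion concrete, here's roughly where an older-cutoff model trips up (a from-memory sketch, so verify against the current docs rather than trusting me or the model):

        // vite.config.ts -- the v4-style setup
        import { defineConfig } from 'vite'
        import vue from '@vitejs/plugin-vue'
        import tailwindcss from '@tailwindcss/vite' // v4 ships a first-party Vite plugin

        export default defineConfig({
          plugins: [vue(), tailwindcss()],
        })

        // The CSS entry point then only needs:  @import "tailwindcss";
        // Tailwind 3 instead wanted a tailwind.config.js + PostCSS setup and the
        // @tailwind base/components/utilities directives, so a model trained on
        // v3-era docs will happily mix the two conventions and break the build.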

      Second, Cursor is arguably not the top-tier hotness anymore. Since it's flat-rate subscription based, its default mode has to be pretty thrifty with the tokens it uses. I've heard (I don't use Cursor) that Cursor's Max Mode[0] improves on that (you pay based on tokens used), but I'd recommend just using something like Claude Code[1], ideally with its VS Code or IntelliJ integration.

      But in general, new major versions of SDKs or libraries will give you a worse experience. Stable software fares much better.

      Overall, I find AI extremely useful, but it's hard to know which tools, and even which ways of using these tools, are the current state of the art without being immersed in the ecosystem. And those are changing pretty frequently. There's also a ton of over-the-top overhyped marketing, of course.

      [0]: https://docs.cursor.com/context/max-mode

      [1]: https://www.anthropic.com/claude-code

    • varjag 3 days ago

      I had some success doing two front-end projects: one in 2023 using a local Mixtral 7b model, and one just this month with Codex. I am an experienced programmer (35 years coding, 28 professionally). I hate web design and I never cared to learn JavaScript.

      The first project was a simple touch based control panel that communicates via REST/Websocket and runs a background visual effect to prevent the screen burn-in. It took a couple of days to complete. There were often simple coding errors but trivial enough to fix.

      The second is a 3D wireframe editor for distributed industrial equipment site installations. I started by just chatting with o3 and got the proverbial 80% within a day. It includes orbital controls, manipulation and highlighting of selected elements, and property dialogs. Very soon it became too unwieldy for the laggy OpenAI chat UI, so I switched to Codex to complete most of the remaining features.

      My way with it is mostly:

      - ask for no fancy frameworks: my projects are plain JavaScript, which I don't really know; it makes no sense to pile React and TypeScript, which I know even less, on top of that

      - explain what I want by defining data structures I believe are the best fit for internal representation

      - change and test one thing at a time, implement a test for it

      - split modules/refactor when a subsystem gets over a few hundred LOC, so that the reasoning can remain largely localized and hierarchical

      - make o3 write an LLM-friendly general design document and a description of each module. Codex uses it to check its assumptions.

      As mentioned elsewhere, the code is mediocre at best, and it feels a bit like comparing C compiler output to my hand-written assembly back in the day. It works tho, and it doesn't look to be terribly inefficient.

    • gs17 3 days ago

      > It was confused about the differences between Tailwind 3 and 4

      I have the same issue with Svelte 4 vs 5. Adding some notes to the prompt used for that project sort of helps.

      • tobr 3 days ago

        It didn’t seem like it ever referred to documentation? So, obviously, if it’s only going to draw on its “instinctual” knowledge of Tailwind, it’s more likely to fall back on a version that’s been around for longer, leading to incompatibilities with the version that’s actually installed. A human doing the same task would probably have the setup guide on the website at hand if they realized they were feeling confused.

    • steveklabnik 3 days ago

      Tailwind 4 has been causing Claude a lot of problems for me, especially when upgrading projects.

      I managed to get it to do one just now, but it struggled pretty hard, and still introduced some mistakes I had to fix.

  • pandler 3 days ago

    In addition to not enjoying it, I also don’t learn anything, and I think that makes it difficult to sustain anything in the middle of the spectrum between “I won’t even look at the code; vibes only” and advanced autocomplete.

    My experience has been that it’s difficult to mostly vibe with an agent, but still be an active participant in the codebase. That feels especially true when I’m using tools, frameworks, etc that I’m not already familiar with. The vibing part of the process simultaneously doesn’t provide me with any deeper understanding or experience to be able to help guide or troubleshoot. Same thing for maintaining existing skills.

    • daxfohl 3 days ago

      It's like trying to learn math by reading vs by doing. If all you're doing is reading, it robs you of the depth of understanding you'd gain by solving things yourself. Going down wrong paths, backtracking, finally having that aha moment where things click, is the only way to truly understand something.

      Now, for all the executives who are trying to force-feed their engineering team to use AI for everything, this is the result. Your engineering staff becomes equivalent to a mathematician who has never actually done a math problem, just read a bunch of books and trusted what was there. Or a math tutor for your kid who "teaches" by doing your kid's homework for them. When things break and the shit hits the fan, is that the engineering department you want to have?

      • zdragnar 3 days ago

        I'm fairly certain that I lost a job opportunity because the manager interviewing me kept asking me variations of how I use AI when I code.

        Unless I'm stuck while experimenting with a new language or finding something in a library's documentation, I don't use AI at all. I just don't feel the need for it in my primary skill set because I've been doing it so long that it would take me longer to get AI to an acceptable answer than doing it myself.

        The idea seemed rather offensive to him, and I'm quite glad I didn't go to work there, or anywhere that using AI is an expectation rather than an option.

        I definitely don't see a team that relies on it heavily having fun in the long run. Everyone has time for new features, but nobody wants to dedicate time to rewriting old ones that are an unholy mess of bad assumptions and poorly understood code.

        • bluefirebrand 3 days ago

          My company recently issued an "Use AI in your workflow or else" mandate and it has absolutely destroyed my motivation to work

          Even though there are still private whispers of "just keep doing what you're doing no one is going to be fired for not using AI", just the existence of the top down mandate has made me want to give up and leave

          My fear is that this is every company right now, and I'm basically no longer a fit for this industry at all

          Edit: I'm a long way from retirement unfortunately so I'm really stuck. Not sure what my path forward is. Seems like a waste to turn away from my career that I have years of experience doing, but I struggle like crazy to use AI tools. I can't get into any kind of flow with them. I'm constantly frustrated by how aggressively they try to jump in front of my thought process. I feel like my job changed from "builder" to "reviewer" overnight and reviewing is one of the least enjoyable parts of the job for me

          I remember an anecdote about Ian McKellen crying on a green screen set while filming The Hobbit, because talking to a tennis ball on a stick wasn't what he loved about acting

          I feel similarly with AI coding I think

          • ryandrake 3 days ago

            I just don't understand your company or the company OP interviewed for. This is like mandating everyone use syntax highlighting or autocomplete, or sit in a special type of chair or use a standing desk, and making their use a condition for being hired. Why are companies so insistent that their developers "use AI somehow" in their workflows?

            • bluefirebrand 3 days ago

              Shareholders are salivating at the prospect of doing either the same amount of work with fewer salaries or more work with the same salaries

              There is nothing a VC loves more than the idea of extracting more value from people without investing more into them

              • ptman 3 days ago

                So you promote the most efficient employees and let them use tools like AI if they like

            • namaria 3 days ago

              I disagree with siblings that it's fear or greed.

              I think it's way more basic. Much like recruiters calling me up and asking about 'kubernetes', they are just trying to get a handle on something they don't really understand. And right now all signs point to 'AI' as the handle people should pull on to get traction in software.

              It is incredibly saddening to me that people do pattern matching and memorize vocabulary instead of trying to understand things even at a basic level so they can reason about it. But a big part of growing up was realizing that most people don't really understand or care to understand things.

            • daxfohl 3 days ago

              FOMO. They don't want to risk being the one company left behind because their engineers haven't learned to use AI as efficiently as others.

          • ponector 3 days ago

            There are lots of ways to use AI coding tools.

            Cursor is great for fuzzy search across a legacy project. Requests like "how do you do X here" can help a lot while fixing an old bug.

            Or adding documentation: commit descriptions generated from the diff, or adding Javadoc to your methods.

            For whatever step in your workflow that consists of rewriting existing text rather than creating anything new, use Cursor or a similar AI tool.

          • daxfohl 3 days ago

            The other side of me thinks that maybe the eventual landing point of all this is a merger of engineering and PM. A sizeable chunk of engineering work isn't really anything new. CRUD, jobs, events, caching, synchronization, optimizing for latency, cost, staleness, redundancy. Sometimes it amazes me that we're still building so many ad-hoc ways of doing the same things.

            Like, say there's a catalog of 1000 of the most common enterprise (or embedded, or UI, or whatever) design patterns, and the AI is good at taking your existing system and your new requirements, identifying the couple of design patterns that fit best, giving you a chart with the various tradeoffs, and, once you select one, adding that pattern to your existing system with the details that match your requirements.

            Maybe that'd be cool? The system/AI would then be able to represent the full codebase as an integration of various patterns, and an engineer, or even a technical PM, could understand it without needing to dive into the codebase itself. And hopefully since everything is managed by a single AI, the patterns are fairly consistent across the entire system, and not an amalgamation of hundreds of different individuals' different opinions and ideals.

            Another nice thing would be that huge migrations could be done mostly atomically. Currently, things like, say, adding support in your enterprise for, say, dynamic authorization policies takes years to get every team to update their service's code to handle the new authz policy in their domain, and so the authz team has to support the old way and the new way, and a way to sync between them, roughly forever. With AI, maybe all this could just be done in a single shot, or over the course of a week, with automated deployments, backfill, testing, and cleanup of the old system. And so the authz team doesn't have to deal with all the "bugging other teams" or anything else, and the other teams also don't have to deal with getting bugged or trying to fit the migration into their schedules. To them it's an opaque thing that just happened, no different from a library version update.

            With that, there's fewer things in flight at any one time, so it allows engineers and PMs to focus on their one deliverable without worrying how it's affecting everyone else's schedules etc. Greater speed begets greater serializability begets better architecture begets greater speed.

            So, IDK, maybe the end game of AI will make the job more interesting rather than less. We'll see.

        • nyarlathotep_ 3 days ago

          The one place it really shines for me personally is bash scripts.

          I've probably written 50 over the last two years for relatively routine stuff that I'd either not do (wasn't that important) or have done via other means (schlepping through aws cli docs comes to mind) at 2x the time. I get little things done that I'd otherwise have put off. Same goes for IaC stuff for cloud resources. If I never have to write Terraform or CloudFormation again, I'd be fine with that.

          Autocomplete is hit or miss for me--VS Code is pretty good with Copilot, JetBrains IDEs are absolutely laughably bad with Copilot (typically making obvious syntax errors on any completion for a function signature, constructor, etc.) to the point that I disabled it.

          I've no interest in any "agent" thingys for the time being. Just doesn't interest me, even if it's "far better than everyone" or whatever.

  • timr 3 days ago

    > My main issue with vibe coding etc is I simply don't enjoy it. Having a conversation with a computer to generate code that I don't entirely understand and then have to try to review is just not fun. It doesn't give me any of the same kind of intellectual satisfaction that I get out of actually writing code.

    I am the opposite. After a few decades of writing code, it wasn't "fun" to write yet another file parser or hook widget A to API B -- which is >99% of coding today. I moved into product management because while I still enjoy building things, it's much more satisfying/challenging to focus on the higher-level issues of making a product that solves a need. My professional life became writing specs, and reviewing code. It's therefore actually kind of fun to work with AI, because I can think technically, but I don't have to do the tedious parts that make me want to descend into a coma.

    I couldn't care less whether I'm writing a spec for a robot or a spec for a junior front-end engineer. They're both going to screw up, and I'm going to have to spend time explaining the problem again and again...at least the robot never complains and tries really hard to do exactly what I ask, instead of slacking off, doing something more intellectually appealing, getting mired in technical complexity, etc.

    • prmph 3 days ago

      After like the 20th time explaining the same (simple) problem to the AI that it is unable to fix, you just might change your mind [1]. At that point you just have to jump in and get dirty.

      Do this a few times and you start to realize it is kind of worse than just being in the driver's seat for the coding right from the start. For one thing, when you jump in, you are working with code that is probably architected quite differently from the way you normally do it, and you have not developed the deep mental model that is needed to work with the code effectively.

      Not to say the LLMs are not useful, especially in agent mode. But the temptation is always to trust and task them with more than they can handle. Maybe we need an agent that limits the scope of what you can ask it to do, to keep you involved at the necessary level.

      People keep thinking we are at the level where we can forget about the nitty gritty of the code and rise up the abstraction level, when this is nothing close to the truth.

      [1] Source: me last week trying really hard to work like you are talking about with Claude Code.

      • timr 3 days ago

        > After like the 20th time explaining the same (simple) problem to the AI that it is unable to fix, you just might change your mind [1]. At that point you just have to jump in and get dirty.

        You're assuming that I haven't. Yes, sometimes you have to do it yourself, and the people who are claiming that you can replace experienced engineers with these are wrong (at least for now, and for non-trivial problems).

        > Do this a few times and you start to realize it is kind of worse than just being in the driver's seat for the coding right from the start. For one thing, when you jump in, you are working with code that is probably architected quite differently from the way you normally do it, and you have not developed the deep mental model that is needed to work with the code effectively.

        Disagree. There's not a single piece of code I've written using these that I haven't carefully curated myself. Usually the result (after rounds of prompting) is smaller, significantly better, and closer to my original intended design than what I got out of the machine on first prompt.

        I still find them to be a significant net enhancement to my productivity. For me, it's very much like working with a tireless junior engineer who is available at all hours, willing to work through piles of thankless drudgery without complaint, and also codes about 100x faster than I do.

        But again, I know what I'm doing. For an inexperienced coder, I'm more inclined to agree with your comment. The first drafts that these things emit are often pretty bad.

    • dlisboa 3 days ago

      You touched on the significant thing that separates most of the AI code discourse in the two extremes: some people just don't like programming and see it as a simple means to an end, while others love the process of actually crafting code.

      Similar to the differences between an art collector and a painter. One wants the ends, the other desires the means.

      • timr 3 days ago

        That's not fair, and not what I am saying at all.

        I enjoy writing code. I just don't enjoy writing code that I've written a thousand times before. It's like saying that Picasso should have enjoyed painting houses for a living. They're both painting, right?

        (to be painfully clear, I'm not comparing myself to Picasso; I'm extending on your metaphor.)

        • bluefirebrand 3 days ago

          You would rather debug the low quality LLM code that you know you could write better, a thousand times?

          • timr 3 days ago

            Well, I don't write bugs in my code, of course, but let's just say that you were the type of person who does: having a bot that writes code 100x faster than you, that also occasionally makes mistakes (but can also fix them!), is still a huge win.

            • bluefirebrand 3 days ago

              > occasionally makes mistakes

              Well. Maybe we have to agree to disagree but I think it makes mistakes far more frequently than I do

              Even if it makes mistakes exactly as often as I do, making 100x as many mistakes in the same amount of time seems like it would be absolutely impossible to keep up with

      • tptacek 3 days ago

        I love coding, do it for fun outside of my job, and find coding with an LLM very enjoyable.

        • icedchai 3 days ago

          I've been experimenting with LLM coding for the past few months on some personal projects. I find it makes coding those projects more enjoyable since it eliminates much of the tedium that was causing me to delay the project in the first place.

          • timr 3 days ago

            Exactly the same for me...now whenever I hit something like "oh god, I want to change the purpose of this function/variable, but I need to go through 500 files, and see where it's used, then make local changes, then re-test everything...", I can just tell the bot to do it.

            I know a lot of folks would say that's what search & replace is for, but it's far easier to ask the bot to do it, and then check the work.

            • cesarb 3 days ago

              > "oh god, I want to change the name of this function/variable, but I need to go through 500 files, and see where it's used, then make local changes, then re-test everything..."

              Forgive me for being dense, but isn't it just clicking the "rename" button on your IDE, and letting it propagate the change to all definitions and uses? This already existed and worked fine well before LLMs were invented.

              • timr 3 days ago

                Yeah, sorry...I re-read the comment and realized I wasn't being clear. It's bigger than just search/replace. Already updated what I wrote.

                The far more common situation is that I'm refactoring something, and I realize that I want to make some change to the semantics or signature of a method (say, the return value), and now I can't just use search w/o also validating the context of every change. That's annoying, and today's bots do a great job of just handling it.
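
                A made-up example of the kind of change I mean (names invented, nothing from a real codebase), where the edit itself is trivial but every call site's surrounding context has to be re-checked -- exactly the tedium the bot is good at:

                  interface User { id: string; name: string }

                  const users = new Map<string, User>()

                  // Before: this returned User, and callers assumed one always came back.
                  // After: every call site has to handle the null case -- a semantic
                  // change that plain search & replace can't validate for you.
                  function findUser(id: string): User | null {
                    return users.get(id) ?? null
                  }

                  const u = findUser("42")
                  console.log(u?.name ?? "not found")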

                Another one, I just did a second ago: "I think this method X is now redundant, but there's a minor difference between it, and method Y. Can I remove it?"

                Bot went out, did the obvious scan for all references to X, but then evaluated each call context to see if I could use Y instead.

                (But even in the case of search & replace, I've had my butt saved a few times by agent when it caught something I wasn't considering....)

                • tptacek 3 days ago

                  I really like working with LLMs but one thing I've noticed is that the obvious transformation of "extract this functionality into a helper function and then apply that throughout the codebase" is one I really actually enjoy doing myself; replacing 15 lines of boilerplate-y code in a couple dozen places with a single helper call is _really_ satisfying; it's like my ASMR.
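
                  A toy version of the transformation (invented names, not from a real codebase), just to be concrete about what I mean: a couple dozen call sites each doing the fetch/check/parse dance by hand collapse into one helper call.

                    interface Settings { theme: string }

                    // the helper that replaces the repeated boilerplate
                    async function getJson<T>(url: string): Promise<T> {
                      const res = await fetch(url)
                      if (!res.ok) throw new Error(`request failed: ${res.status}`)
                      return (await res.json()) as T
                    }

                    // each call site shrinks from several lines down to one:
                    const settings = await getJson<Settings>('/api/settings')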

                  • timr 3 days ago

                    Hah, well, to each their own. That's exactly the kind of thing that makes me want to go outside and take a walk.

                    Regardless of what your definition of horrible and boring happens to be, just being able to tell the bot to do a horrible boring thing, and have it done with something like junior-level intelligence, is such an enhancement to the experience that it makes coding more fun.

                    • tptacek 3 days ago

                      I find elimination of inertia and preservation of momentum to be the biggest wins; it's just that my momentum isn't depleted by extracting something out into a helper.

                      People should try this kind of coding a couple times just because it's an interesting exercise in figuring out what parts of coding are important to you.

              • tptacek 3 days ago

                Yes, that particular example modern editors do just fine. Now imagine having that for almost any rote transformation you wanted regardless of complexity (so long as the change was rote and describable).

      • d0100 3 days ago

        I love programming, I just don't like CRUDing, or API'ing...

        I also love programming behaviours and interactions, just not creating endless C# classes and looking at how to implement 3D math

        After a long day at the CRUD factory, being able to vibe code as a hobby is fun. Not super productive, but it's better than the alternative (scrolling reels or playing games)

      • nyarlathotep_ 3 days ago

        > You touched on the significant thing that separates most of the AI code discourse in the two extremes: some people just don't like programming and see it as a simple means to an end, while others love the process of actually crafting code.

        Yeah this is for sure true, but it's probably true in degrees.

        I think there was even a study or something (from GitHub maybe) about the frequency of languages and how there were far more commits in say Rust on weekends than weekdays (don't quote me on this).

        Plenty of people like programming but really don't find yet-another-enterprise-CRUD-with-React-front-end thing to be thrilling, so they will LLM-pasta it to completion but otherwise would have fun hacking away in langs/stuff they like.

        I identify with that (hypothetical) crowd.

      • morkalork 3 days ago

        I think I could be happy switching between the two modes. There are tasks that are completely repetitive slop that I've fully offloaded to AI with great satisfaction. There are others I enjoy, where I prefer to use AI for consultation only. Regardless, few people liked doing code review with their peers before, and somehow we've increased one of the least fun parts of the job.

    • icedchai 3 days ago

      Same. After doing this for decades, so much programming work is tedious. Maybe 5% to 20% of the work is interesting. If I can get a good chunk of that other 80%+ built out quickly with a reasonable level of quality, then we're good.

    • getnormality 3 days ago

      Your use case seems relatively well-suited to AI. Even an unreliable technology like LLMs could be useful for automating a task that is mundane, well-defined, and easy to review for accuracy.

    • kiitos 3 days ago

      > After a few decades of writing code, it wasn't "fun" to write yet another file parser or hook widget A to API B -- which is >99% of coding today.

      If this is your experience of programming, then I feel for you, my dude, because that sucks. But it is definitely not my experience of programming. And so I absolutely reject your claim that this experience represents "99% of programming" -- that stuff is rote and annoying and automate-able and all that, no argument, but it's not what any senior-level engineer worth their salt is spending any of their time on!

      • NewsaHackO 3 days ago

        People who don’t do 1) API connecting, 2) web design using popular frameworks, or 3) requirements wrangling with business analysts have jobs that will not be taken over by AI anytime soon. I think 99% of jobs is pushing it, but I definitely think the vast majority of IT jobs fit into the above categories. Another benchmark would be how much of your job is closer to research work.

  • xg15 3 days ago

    > that I don't entirely understand

    That's the bigger issue in the whole LLM hype that irks me. The tacit assumption that actually understanding things is now obsolete, as long as the LLM delivers results. And if it doesn't we can always do yet another finetuning or try yet another magic prompt incantation to try and get it back on track. And that this is somehow progress.

    It feels like going back to pre-enlightenment times and collecting half-rationalized magic spells instead of having a solid theoretical framework that lets you reason about your systems.

    • AnimalMuppet 3 days ago

      Well... I'm torn here.

      There is a magic in understanding.

      There is a different magic in being able to use something that you don't understand. Libraries are an instance of this. (For that matter, so is driving a car.)

      The problem with LLMs is that you don't understand, and the stuff that it gives you that you don't understand isn't solid. (Yeah, not all libraries are solid, either. LLMs give you stuff that is less solid than that.) So LLMs give you a taste of the magic, but not much of the substance.

  • Kiro 3 days ago

    I'm the opposite. I haven't had this much fun programming in years. I can quickly iterate, focus on the creative parts and it really helps with procrastination.

  • 9d 3 days ago

    Considering the actual Vatican literally linked AI to the apocalypse, and did so in the most official capacity[1], I don't think avoiding AI has to be Luddism.

    [1] Antiqua et Nova p. 105, cf. Rev. 13:15

    • 9d 3 days ago

      Full link and relevant quote:

      https://www.vatican.va/roman_curia/congregations/cfaith/docu...

      > Moreover, AI may prove even more seductive than traditional idols for, unlike idols that “have mouths but do not speak; eyes, but do not see; ears, but do not hear” (Ps. 115:5-6), AI can “speak,” or at least gives the illusion of doing so (cf. Rev. 13:15).

      It quotes Rev. 13:15 which says (RSVCE):

      > and it was allowed to give breath to the image of the beast so that the image of the beast should even speak, and to cause those who would not worship the image of the beast to be slain.

      • ultimafan 3 days ago

        That was a very interesting read, thanks for linking it!

        I think the unfortunate reality of human innovation is that too many people assume technological progress is always good for its own sake. Too many people create new tools, tech, etc. without really stopping to think, or have a discussion, about what the absolute worst-case applications of their creation will be and how difficult it'd be to curtail that kind of behavior. Instead any potential (before creation) and actual (after release) human suffering is hand-waved away as growing pains necessary for science to progress. Like those websites that search for people's online profiles based on image inputs, sold by their creators as a way to find long-lost friends or relatives, when everyone really knows they're going to be swamped by people using them to doxx or stalk their victims; or AI photo-generation models for "personal use" being used to deep-fake nudes to embarrass and put down others. In many such cases the creators sleep easy at night with the justification that it's not THEIR fault people are misusing their platforms; they provided a neutral tool and are absolved of all responsibility. All the while they are making money or raking in clout fed by the real pain of real people.

        If everyone took the time to weigh the impact of what they're doing even half as diligently as that above article (doesn't even have to be from a religious perspective) the world would be a lot brighter for it.

        • 9d 3 days ago

          > Too many people create new tools, tech, etc. without really stopping to take a moment and think or have a discussion on what the absolute worst case applications of their creation will be

          "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." -Jeffrey L. Goldblum when ILM showed him an early screening of Jurassic Park.

          > In many such cases the creators sleep easy at night with the justification that it's not THEIR fault people are misusing their platforms, they provided a neutral tool and are absolved of all responsibility.

          The age old question of gun control.

          • ultimafan 3 days ago

            Guns (and really most forms of progress in warfare and violence) undoubtedly fall under a similar conundrum.

            Funny to note that at least one inventor who contributed greatly to modern warfare (the creator of the Gatling gun) did seem to reflect on his future impact, but figured it'd go in the opposite direction: that a weapon that could replace a hundred soldiers with one would make wars smaller and less devastating, not more!

    • 9d 3 days ago

      I emphasize that it's the Vatican because they are the most theologically careful of all. This isn't some church with a superstitious pastor who jumps to conclusions about the rapture at a dime drop. This is the Church which is hesitant to say literally anything about the book of Revelation at all, which is run by tired men who just want to keep the status quo so they can hopefully hit retirement without any trouble.

  • throwawayk7h 3 days ago

    Perhaps we're feeling too much pressure to pick an extreme stance. Can we firmly establish a middle-ground party? I feel like a lot of people here fit into a norm of "AI is very useful, but may upend many people's lives, and not currently suitable for every task" category. (There may be variations of course depending on how worried you are about FOOM.)

    We need a catchy name.

  • rowanseymour 3 days ago

    This was my experience until recently... now I'm quite enjoying assigning small PRs to Copilot and working through them via the GitHub PR interface. It's basically like managing a junior programmer, but cheaper and faster. Yes, that's not as much fun as writing code, but there isn't time for me to write all the code myself.

    • cloverich 3 days ago

      Can you elaborate on the "assign PRs" bit?

      I use Cursor / ChatGPT extensively and am ready to dip into more of an issue / PR flow, but I'm not sure what people are doing here exactly. Specifically for side projects, I tend to think through high-level features, then break them down into sub-items much like a PM. But I can easily take it a step further and give each sub-issue technical direction, e.g. "Allow font customization: Refactor tailwind font configuration to use CSS variables. Expose those CSS variables via settings module, and add a section to the Preferences UI to let the user pick fonts for Y categories via dropdown; default to X Y Z font for A B C types of text".
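
      For concreteness, the kind of change I'd expect a prompt like that to produce looks roughly like this (a sketch with invented names, not actual agent output):

        // tailwind.config.ts -- font families wired to CSS variables so a
        // settings module (and the Preferences UI) can swap them at runtime
        import type { Config } from 'tailwindcss'

        export default {
          content: ['./src/**/*.{ts,vue,html}'],
          theme: {
            extend: {
              fontFamily: {
                body: ['var(--font-body)', 'sans-serif'],
                heading: ['var(--font-heading)', 'serif'],
              },
            },
          },
        } satisfies Config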

      Usually I spend a few minutes discussing w/ ChatGPT first, e.g. "What are some typical idioms for font configuration in a typical web / desktop application". Once I get that idea solidified I'd normally start coding, but could just as easily hand this part off for simple-ish stuff and start ironing out the next feature. In the time I'd usually have planned the next 1-2 months of side project work (which happens, say, in 90 minute increments 2x a week), the agent could knock out maybe half of them. For a project I'm familiar with, I expect I can comfortably review and comment on a PR with much less mental energy than it would take to re-open my code editor for my side project, after an entire day of coding for work + caring for my kids. Personally I'm pretty excited about this.

      • rowanseymour 3 days ago

        I have not had great experiences interacting directly with LLMs, except when asking for a snippet of code that is generic and commonly done. Now with GitHub Copilot (you need Pro Plus, I think) I'm creating an issue, assigning it to Copilot, and then having a back and forth on the PR with Copilot until it's right. Exactly as I would with a junior dev, and honestly it's the first time I've felt like AI could make a noticeable difference to my productivity.

      • steveklabnik 3 days ago

        I'm not your parent, but Claude at least has the ability to integrate with GitHub such that you can say "@claude please try to fix this bug" on an issue and it'll just go do it.

  • _aavaa_ 3 days ago

    > Luddite

    The luddites were not against progress or the technology itself. They were opposed to how it was used, for whose benefit, and for whose loss [0].

    The AI-Luddite position isn’t anti-AI; it’s (among other things) against the mass copyright theft from creators, used to train something with the explicit goal of putting them out of a job, without compensation. All while producing an objectively inferior product but passing it off as a higher-quality one.

    [0]: https://www.hachettebookgroup.com/titles/brian-merchant/bloo...

  • Garlef 3 days ago

    > Having a conversation with a computer to generate code that I don't entirely understand and then have to try to review is just not fun.

    Same for me. But maybe that's ultimately a UX issue? And maybe things will straighten out once we figure out how to REALLY do AI-assisted software development.

    As an analogy: Most people wouldn't want to dig through machine code/compiler output. At least not without proper tooling.

    So: Maybe once we have good tools to understand the output it might be fun again.

    (I guess this would include advances in structuring/architecting the output)

    • username223 3 days ago

      > As an analogy: Most people wouldn't want to dig through machine code/compiler output. At least not without proper tooling.

      My analogy is GUI builders from the late 90s that let you drag elements around, then generated a pile of code. They worked sometimes, but God help you if you wanted to do something the builder couldn't do, and had to edit the generated code.

      Looking at compiler output is actually more pleasant. You profile your code, find the hot spots, and see that something isn't getting inlined, vectorized, etc. At that point you can either convince the compiler to do what you want or rewrite it by hand, and the task is self-contained.

    • cratermoon 3 days ago

      Tim doesn't address this in his essay, so I'm going to harp on it: "AI will soon be able to...". That phrase is far too load-bearing. The part of AI hype that says, "sure, it's kinda janky now, but this is just the beginning" has been repeated for 3 years now, and everything has been just around the corner the entire time. It's the first step fallacy, saying that if we can build a really tall ladder now, surely we'll soon be able to build a ladder tall enough to reach the moon.

      The reality is that we've seen incremental and diminishing returns, and the promises haven't been met.

      • tptacek 3 days ago

        Diminishing returns? Am I reading right that you believe the last 6 months has been marked by a decrease in the capability of these systems?

        • cratermoon 3 days ago

          That's not what diminishing returns means.

          • tptacek 3 days ago

            That's true, but it's the nearest bit of evidence at hand for how the "returns" could be "diminishing". I'm fine if someone wants to provide any other coherent claim as to how we're in a "diminishing returns" state with coding LLMs right now.

    • layer8 3 days ago

      The compiler analogy doesn’t quite fit, because the essential difference is that source code is (mostly) deterministic and thus can be reasoned about (you can largely predict in detail what behavior code will exhibit even before writing it), which isn’t the case for LLM instructions. That’s a major factor why many developers don’t like AI coding, because every prompt becomes a non-reproducible, literally un-reasonable experiment.

      • steveklabnik 3 days ago

        I think the "largely" in there is interesting and load-bearing: a lot of people find compiler output quite surprising!

        But that doesn't mean that it's not a gradient, and LLM output may be meaningfully harder to reason about than compiler output, and that may matter.

        • layer8 3 days ago

          Assembly output may sometimes be surprising, but maintains the language semantics. The surprise comes from either misunderstanding the language semantics, or from performance aspects. Nevertheless, if you understand the language semantics correctly, the program behavior resulting from the output is deterministic and predictable. This is not true for LLMs.

          • steveklabnik 3 days ago

            I don't disagree on a factual level, I am just describing some people's subjective experiences: some language semantics can be very subtle, and miscompilation bugs are real. Determining if it is just an aggressive optimization or a real codegen bug can be difficult sometimes, that's all.

            • layer8 3 days ago

              To some extent yes, but I don’t think that changes much about the distinction to AI coding I was making. The thing is that language misconceptions are fixable; the LLM unpredictability isn’t.

    • bitwize 3 days ago

      I think that AI assistance in coding will become enjoyable for me once the technology exists for AI to translate my brainwaves into text. Then I could think my code into the computer, greatly speeding up the OODA loop of programming.

      As it is, giving high-level directives to an LLM and debugging the output seems like a waste of my time and a hindrance to my learning process. But that's how professional coding will be done in the near future. 100% human written code will become like hand-writing a business letter in cursive: something people used to be taught in school, but no one actually does in the real world because it's too time-consuming.

      Ultimately, the business world only cares about productivity and what the stopwatch says is faster, not whether you enjoy or learn from the process.

  • 9d 3 days ago

    > It doesn't give me any of the same kind of intellectual satisfaction that I get out of actually writing code.

    Writing code is a really fun creative process:

    1. Conceive an exciting and useful idea

    2. Comprehend the idea fully from its top to its bottom

    3. Translate the idea into specific instructions utilizing known mechanics

    4. Find the beautiful middleground between instruction and abstraction

    5. Write lots and lots of code!

    6. Find where your conception was flawed and fix it as necessary.

    7. Repeat steps 2-6 until the thing works just as you dreamed or you give up.

    It's maybe the most fun and exciting mixture of art and technology ever.

    • 9d 3 days ago

      I forgot to say the second part:

      Using AI is the same as code-review or being a PM:

      1. Have an ideal abstraction

      2. Reverse engineer an actual abstraction from code

      3. Compare the two and see if they match up

      4. If they don't, ask the author to change or fix it until it does

      5. Repeat steps 2-4 until it does

      This is incredibly not fun, because it's not a creative process.

      You're essentially just an accountant or calculator at this point.

  • tsumnia 3 days ago

    Sounds like me 20 years ago learning Java

    It's new tech. We're all "giraffes on roller skates" whenever we start something new. Find out where you can use it in your life and use it. Where you can't or don't want to, don't. Try not to get deterred by analysis paralysis when there's something that doesn't make sense. In time, you'll get it.

  • gs17 3 days ago

    > My main issue with vibe coding etc is I simply don't enjoy it.

    I almost enjoy it. It's kind of nice getting to feel like management for a second. But the moment it hits a bug it can't fix and you have to figure out its horrible mess of code any enjoyment is gone. It's really nice for "dumb" changes like renumbering things or very basic refactors.

    • tptacek 3 days ago

      When the agent spins out, why don't you just take the wheel and land the feature yourself? That's what I do. I'm having trouble integrating these two skeptical positions of "LLMs suck all the joy out of actually typing code into an editor" and "LLMs are bad because they sometimes force you to type code into an editor".

  • potatolicious 3 days ago

    Yeah, I will say now that I've played with the AI coding tools more, it seems like there are two distinct use cases:

    1 - Using coding tools in a context/language/framework you're already familiar with.

    This one I have been having a lot of fun with. I am in a good position to review the AI-generated code, and also examine its implementation plan to see if it's reasonable. I am also able to decompose tasks in a way that the AI is better at handling vs. giving it vague instructions that it then does poorly on.

    I feel more in control, and it feels like the AI is stripping away drudgery. For example, for a side project I've been using Claude Code with an iOS app, a domain I've spent many years in. It's a treat - it's able to compose a lot of boilerplate and do light integrations that I can easily write myself, but find annoying.

    2 - Using coding tools in a context/language/framework you don't actually know.

    I know next to nothing about web frontend frameworks, but for various side projects wanted to stand up some simple web frontends, and this is where AI code tools have been a frustration.

    I don't know what exactly I want from the AI, because I don't know these frameworks. I am poorly equipped to review the code that it writes. When it fails (and it fails a lot) I have trouble diagnosing the underlying issues and fixing it myself - so I have to re-prompt the LLM with symptoms, leading to frustrating loops that feel like two cave-dwellers trying to figure out a crashed spaceship.

    I've been able to stand up a lot of stuff that I otherwise would never have been able to. I'm 99% sure the code is utter shit, but I'm also not in a position to really quantify or understand the shit in any way.

    I suppose if I were properly "vibe coding" I shouldn't care about the fact that the AI produced a katamari ball of code held together by bubble gum. But I do care.

    Anyway, for use case #1 I'm a big fan of these tools, but it's really not the "get out of learning your shit" card that it's sometimes hyped up to be.

    • saratogacx 3 days ago

      For case 2, I've had a lot of luck starting by asking the LLM "I have experience in X, Y, and Z technologies; help me translate this project into those terms, and list anything this code does that doesn't align with the typical use of the technologies chosen". This has given me a great "intro" that moves me closer to being able to understand.

      Once I've done that and asked a few follow-up questions, I feel much better about diving into the generated code.

      • blks 3 days ago

        So you basically read some articles about how frontend works, and that helped you understand frontend code better.

        • saratogacx 2 days ago

          Not quite that reductive. It's more that I thought about what would be the exact page/article that, if it existed, would get me started, and used a description of that as a prompt to the LLM: "Learning X for people who know Y". This is especially useful because X is now curated to what you're actually working on and Y is curated to what you already know.

thadt 3 days ago

On Learning:

My wife, a high school teacher, remarked to me the other day “you know, it’s sad that my new students aren’t going to be able to do any of the fun online exercises that I used to run.”

She’s all but entirely removed computers from her daily class workflow. Almost to a student, “research” has become “type it into Google and write down whatever the AI spits out at the top of the page” - no matter how much she admonishes them not to do it. We don’t even need to address what genAI does to their writing assignments. She says this is prevalent across the board, both in middle and high school. If educators don’t adapt rapidly, this is going to hit us hard and fast.

  • MarkusQ 2 days ago

    That's because research had already become "look up the answer somebody else found" years ago. If you want to force them to do real research, ask them things no AI knows because no one knows. Ask them to find the exact center of the classroom. Or how many peas the cafeteria throws away each year, on average. Or any of a thousand other questions that no one knows the answer to.

bgwalter 3 days ago

I notice a couple of things in the pro-AI [1] posts: All start writing in a lengthy style like Steve Yegge at his peak. All are written by ex-programmers who are now on the management/founder side. All of them cite programmer friends who claim that AI is useful.

It is very strange that no real open source project uses "AI" in any way. Perhaps these friends work on closed source and say what their manager wants them to say? Or they no longer care? Or they work in "AI" companies?

[1] He does mention return on investment doubts and waste of energy, but claims that the agent nonsense works (without public evidence).

  • orangecat 3 days ago

    I'm a programmer, not a manager. I don't have a blog. AI is useful.

    > It is very strange that no real open source project uses "AI" in any way.

    How do you know? Given the strong opposition that lots of people have I wouldn't expect its use to be actively publicized. But yes, I would expect that plenty of open source contributors are at the very least using Cursor-style tab completion or having AIs generate boilerplate code.

    > Perhaps these friends work on closed source and say what their manager wants them to say?

    "Everyone who disagrees with me is paid to lie" is a really tiresome refrain.

  • bwfan123 3 days ago

    There is a large number of wannabe hands-on coders who have moved on to become management - and they all either have coder-envy or coder-hatred.

    To them, gen-ai is a savior - Earlier, they felt out of the game - now, they feel like they can compete. Earlier they were wannabe coders. Now they are legit.

    But, this will last only until they accept a chunk of code put out by co-pilot and then spend the next 2 days wrangling with it. At that point, it dawns on them what these tools can actually do.

  • senko 3 days ago

    What’s “real open source” to you? I have a niche project useful to a small audience (grocery prices crawler for Croatian stores), 70ish stars, 10 or so forks, a few contributors, AGPL licensed.

    I used AI a lot (vibe coding for spikes and throwaway tools, AI-assisted coding for prod code, chatgpt sessions to optimize db schema and queries, etc). I’d say some 80% or more of the code was written by Claude and reviewed by me.

    It has not only sped up the development, but as a side project, I would never even have finished it (deployed to prod with enough features to be useful) without AI.

    Now you can say that doesn’t count because it’s a side project, or because I’m bullish on AI (I am, without jumping on the hype train), or because it’s too small, or because I haven’t blogged about it, or because anecdotes are not data, and I will readily admit I’m not a true Scotsman.

  • cesarb 3 days ago

    > It is very strange that no real open source project uses "AI" in any way.

    Using genAI is particularly hard on open source projects due to worries about licensing: if your project is under license X, you don't want to risk including any code with a license incompatible with X, or even under a license compatible with X but without the correct attribution.

    It's still not settled whether genAI can really "launder" the license of the code in its training set, or whether legal theories like "subconscious copying" would apply. In the latter case, using genAI could be very risky.

  • rjsw 3 days ago

    At least in my main open source project, use of AI is prohibited due to potentially tainting the codebase with stuff derived from other GPL projects.

strict9 3 days ago

Angst is the best way to put it.

I use AI every day, I feel like it makes me more productive, and I'm generally supportive of it.

But the angst is something else. When nearly every tech-related startup seems to be about making FTEs redundant via AI, it leaves me with a bad feeling for the future. Same with the impact on students and learning.

Not sure where we go from here. But this feels spot on:

>I think that the best we can hope for is the eventual financial meltdown leaving a few useful islands of things that are actually useful at prices that make sense.

  • fellowniusmonk 3 days ago

    All the angst is 100% manufactured by policy. LLMs wouldn't be hated if they didn't dovetail with the end of ZIRP, with Section 174 specifically targeting engineering roles to be tax losers so that others could be tax winners, and with macroeconomic uncertainty (which compounds the problems of 174).

    If our roles hadn't been specifically targeted by government policy for reduction, as a way to buoy government revenues and prop up the budgetary bottom line in the face of decreasing taxes for favored parties, there would be far less resentment aimed at the tools.

    This is simply policy induced multifactorial collapse.

    And LLMs get to take the blame from engineers because that is the excuse being used. Pretty much every old school hacker who has played around with them recognizes that LLMs are impressive and sci-fi, it's like my childhood dream come true for interface design.

    I cannot begin to say how fucking stupid the people in charge of these policies are. I'm an old head; I know exactly the type of 80s executive that actively likes to see the nerds suffer, because we're all irritating poindexters to them.

    The pattern of actively attacking the freedoms and sabotaging the incomes of knowledge workers is not remotely rare, and it's often done this stupidly, at the expense of a country's economic footing and ability to innovate.

  • bob1029 3 days ago

    I agree that some kind of meltdown/crash would be the best possible thing to happen. There are too many players not adding any value to the ecosystem at this point. MCP is a great example of this: complexity merchants inventing new markets to operate in. We need something severe to scare off the bullshit artists for a while.

    How many civil engineering projects could we have completed ahead of schedule and under budget if we applied the same amount of wild-eyed VC and genius tier attention to the problems at hand?

    • pzo 3 days ago

      MCP is currently only used by real power users, and mostly only in software dev settings, but I see it being used by ordinary users in the future. There is no decent MCP client for non-tech-savvy users yet. But I think if browsers build in better implementations of it, it will get used. Think of what Perplexity Comet or The Browser Company's Dia are trying to do. It's still very early for MCP.

perplex 3 days ago

> I really don’t think there’s a coherent pro-genAI case to be made in the education context

My own personal experience is that Gen AI is an amazing tool to support learning, when used properly.

Seems likely there will be changes in higher education to work with gen AI instead of against it, and it could be a positive change for both teachers and students.

  • jplusequalt 3 days ago

    >Seems likely there will be changes in higher education to work with gen AI instead of against it, and it could be a positive change for both teachers and students.

    Since we're using anecdotes, let me leave one as well--it's been my experience that humans choose the path of least resistance. In the context of education, I saw a large percentage of my peers during K-12 do the bare minimum to get by in the classes, and in college I saw many resorting to Chegg to cheat on their assignments/tests. In both cases I believe it was the same motivation--half-assing work/cheating takes less effort and time.

    Now, what happens when you give those same children access to an LLM that can do essentially ALL their work for them? If I'm right, those children will increasingly lean on those LLMs to do as much of their schoolwork/homework as possible, because the alternative means they have less time to scroll on TikTok.

    But wait, this isn't an anecdote, it's already happening! Here's an excellent article that details the damage these tools are already causing to our students https://www.404media.co/teachers-are-not-ok-ai-chatgpt/.

    >[blank] is an amazing tool ... when used properly

    You could say the same thing about a myriad of controversial things that currently exist. But we don't live in a perfect world--we live in a world where money is king, and often times what makes money is in direct conflict with utilitarianism.

    • ryandrake 3 days ago

      > Now, what happens when you give those same children access to an LLM that can do essentially ALL their work for them? If I'm right, those children will increasingly lean on those LLMs to do as much of their schoolwork/homework as possible, because the alternative means they have less time to scroll on Tik Tok.

      I think schools are going to have to very quickly re-evaluate their reliance on "having done homework" and using essays as evidence that a student has mastered a subject. If an LLM can easily do something, then that thing is no longer measuring anything meaningful.

      A school's curriculum should be created assuming LLMs exist and that students will always use them to bypass make-work.

      • jplusequalt 3 days ago

        >A school's curriculum should be created assuming LLMs exist and that students will always use them to bypass make-work

        Okay, how do they go about this?

        Schools are already understaffed as is, how are the teachers suddenly going to have time to revamp the entire educational blueprint? Where is the funding for this revolution in education going to come from when we've just slashed the Education fund?

        • ryandrake 3 days ago

          I'm not an educator, so I honestly have no idea. The world has permanently changed though... we can't put the toothpaste back into the tube. Any student, with a few bucks and a few keystrokes, can instantly solve written homework assignments and generate an any-number-of-words essay about any topic. Something needs to change in the education process, but who knows what it will end up looking like?

        • usefulcat 3 days ago

          I would think that at least part of the solution would have to involve having students do more work at school instead of as homework.

          • jplusequalt 3 days ago

            Okay, and how do you make room for that when there's barely enough time to teach the curriculum as is?

            • usefulcat 3 days ago

              Obviously something has to give.

              • jplusequalt 2 days ago

                This is what I meant in my other comment. Proponents of AI (not necessarily you) haven't seriously considered how these tools will impact the population.

                Until they come up with a semblance of a plan, teachers will bear an undue burden: slogging through automated schoolwork assignments, policing cheating, and handling children who lack the critical faculties to be well-functioning members of society.

                It's all very depressing.

      • ThrowawayR2 3 days ago

        > "If an LLM can easily do something, then that thing is no longer measuring anything meaningful."

        An automobile can go quite far and fast but that doesn't mean the flabbiness and poor fitness of its occupants isn't a problem.

  • dowager_dan99 3 days ago

    >> an amazing tool to support learning, when used properly.

    How can kids, think K-12, who don't even know how to "use" the internet properly - or even their phones - learn how to learn with AI? The same way social media and mobile apps reduced the internet to easy, mindless clicking, LLMs make school a mechanical task. It feels like your argument is similar to LLMs helping experienced, senior developers code more effectively while eliminating many chances to grow the skills needed to join that group. Sounds like you already know how to learn and can use AI to enhance that. My 12-yr-old is not there yet and may never get there.

    • lonelyasacloud 3 days ago

      >> how can kids, think K-12, who don't even know how to "use" the internet properly - or even their phones - learn how to learn with AI?

      For every person/child that just wants the answer there will be at least some that will want to know why. And these endlessly patient machines are very good at feeding that curiosity.

      • jplusequalt 3 days ago

        >For every person/child that just wants the answer there will be at least some that will want to know why

        You're correct, but let's be honest here: the majority will use it as a means to get their homework over and done with so they can return to TikTok. Is that the society we want to cultivate?

        >And these endlessly patient machines are very good at feeding that curiosity

        They're also very good at feeding you factually incorrect information. In comparison, a textbook was crafted by experts in their field and is often fact-checked by many more experts before it is published.

        • MarkusQ 2 days ago

          And the carefully checked textbooks are just as full of factually incorrect information. If you doubt this, look at any textbook from 50+ years ago; they were also carefully checked--more so than today's--and yet contained many things we now know to be incorrect. In fifty years, our present textbooks will look just as bad, if not worse (seriously; look at a modern K-12 textbook).

          So the key thing to get across to kids is that argument by authority is an untrustworthy heuristic at best. AI slop can even help with this.

    • rightbyte 3 days ago

      > My 12-yr-old is not there yet and may never get there.

      Wouldn't classroom exams enforce that, though? Think of LLMs like an older sibling or parent who would help pupils cheat on essays.

  • SkyBelow 3 days ago

    The issue with education in particular is a much deeper one. GenAI has ripped the bandages off and exposed the wound to the world, while also greatly accelerating its decay, but it was not responsible for creating it.

    What is the purpose of education? Is it to learn, or to gain credentials that you have learned? Too much of education has become the latter, to the point we have sacrificed the former. Eventually this brings down both, as a degree gains a reputation of no longer signifying the former ever happened.

    Our existing systems that check for learning before granting the degree that shows an individual has learned were largely not ready for the impact of genAI, and teachers and professors have adapted poorly. Sometimes due to a lack of understanding of the technology, often due to their hands being tied.

    GenAI used to cheat is a great detriment to education, but a student using genAI to learn can benefit greatly, as long as they have matured enough in their education process to have critical thinking to handle mishaps by the AI and to properly differentiate when they are learning and when they are having the AI do the work for them (I don't say cheat here because some students will accidentally cross the line and 'cheat' often carries a hint of mens rea). To the mature enough student interested in learning more, genAI is a worthwhile tool.

    How do we handle those who use it to cheat? How do we handle students who are too immature in their education journey to use the tool effectively? Are we ready to have a discussion about learners who only care about the degree, for whom the education needed to earn it is just a means to an end? How do teachers (and increasingly professors) fight back against the pressure of systems that optimize on granting credentials and which just assume the education will be behind those credentials (Goodhart's Law, anyone)? Those questions don't exist because of genAI, but genAI has greatly increased our need to answer them.

  • murrayb 3 days ago

    I think he is talking about education as in school/college/university, rather than learning?

    I too am finding AI incredibly useful for learning. I use it for high-level overviews and to help guide me to resources (online formats and books) for deeper dives. Claude has so far proven to be an excellent learning partner; no doubt other models are similarly good.

    • strict9 3 days ago

      That is my take. Continuing education via prompt is great, I try to do it every day. Despite years of use I still get that magic feeling when asking about some obscure topic I want to know more about.

      But that doesn't mean I think my kids should primarily get K-12 and college education this way.

  • Aperocky 3 days ago

    Computers and the internet have been around for 20 years, and yet the evaluation systems of our education have largely remained the same.

    I don't hold my breath on this.

    • icedchai 3 days ago

      Where are you located? The Internet boom in the US happened in the mid-90's. My first part-time ISP job was in 1994.

      • dowager_dan99 3 days ago

        Dial-up penetration in the mid-90's was still very thin, and high-speed access was limited to universities and the biggest companies. Here are the numbers ChatGPT found for me:

        * 1990s: Internet access was rare. By 1995, only 14% of Americans were online.

        * 2000: Approximately 43% of U.S. households had internet access.

        * 2005: The number increased to 68%.

        * 2010: Around 72% of households were connected.

        * 2015: The figure rose to 75%.

        * 2020: Approximately 93% of U.S. adults used the internet, indicating widespread household access.

        • icedchai 3 days ago

          Yes, it was thin, but 1995-96 was when "Internet" went mainstream. Depending on your area, you could have several dial-up ISP options. Major metros like Boston had dozens. I remember hearing ISP ads on the radio!

          1995 was when Windows 95 launched, and its built-in dial-up networking support allowed a "normal" person to easily get online. 1995 was the Netscape IPO, which kicked off the dot-com bubble. 1995 was when Amazon first launched their site.

schmichael 3 days ago

> I really don’t think there’s a coherent pro-genAI case to be made in the education context.

I think it’s simple: the reign of the essay is over. Educators must find a new way to judge a student’s understanding.

Presentations, artwork, in class writing, media, discussions and debates, skits, even good old fashioned quizzes all still work fine for getting students to demonstrate understanding.

As the son of two teachers I remember my parents spending hours in the evenings grading essays. While writing is a critical skill, and essays contain a good bit of information, I’m not sure education wasn’t overindexing on them already. They’re easy to assign and grade, but there’s so much toil on both ends unrelated to the core subject matter.

  • thadt 3 days ago

    I posit that of the various uses of student writing, the most important isn't communication or even assessment, but synthesis. Writing forces you to grapple with a subject in a way that clarifies your thinking. It's easy to think you understand something until you have to explain or apply it.

    Skipping that entirely, or using a LLM to do most of it for you, skips something rather important.

    • schmichael 3 days ago

      > Writing forces you

      I agree entirely with you except for the word "forces." Writing can cause synthesis. It should. It should be graded to encourage that...

      ...but all of that is a whole lot of work for everyone involved: student and teacher alike.

      And that kind of synthesis is in no way unique to essays! All of the other mediums I mention can make synthesis more readily apparent than paragraphs of (often very low quality) prose. A clever meme lampooning the "mere merchant" status of the Medici family could demonstrate a level of understanding that would take paragraphs of prose to convey.

  • ryandrake 3 days ago

    I'd also say that the era of graded homework in general is over, and using "proof of toil" assignments as a meaningful measurement of a student's progress/mastery.

jplusequalt 3 days ago

Wholeheartedly agree. I can't help but think that proponents of LLMs are not seriously considering the impact it will have on our ability to communicate with each other, or to reason on our own accord without the assistance of an LLM.

It confounds me how these people would trust the same companies who fueled the decay of social discourse via the internet with the creation of AI models which aim to encroach on every aspect of our lives.

  • Workaccount2 3 days ago

    For me it threatens to be like a spell check. Back 20 years ago when I was still in school and still hand writing for many assignments, my spelling was very good.

    Nowadays it's been a long time since my brain totally checked out on spelling. Everything I write in every case has spell check, so why waste neurons on spelling?

    I fear the same will happen on a much broader level with AI.

    • kiitos 3 days ago

      What? Who is spending any brain cycles on spelling? When you write a word, you just write the word, the spelling is... intrinsic? automatic? certainly not something that you have to, like, actively think about?

      • steveklabnik 3 days ago

        I both agree and disagree, I don't regularly think about spelling, but there are certain words I know my brain always gets wrong, so when I run into one of those, things come crashing to a halt for a second while I try to remember if I'm still spelling them wrong or if I've finally trained myself to do it correctly.

      • code_biologist 3 days ago

        I experience the same but I've always been an extremely strong speller. I think it's a biased viewpoint. I remember the kids in grade school who really struggled, and I've always wondered how that same group fares these days with autocorrect: if they pick up the correct spelling through repeat exposure or the opposite is true and they end up relying on autocorrect.

        I don't know of any of the research, but I suspect that teaching reading via "sight reading" over phonics is heavily detrimental to developing an intrinsic automatic sense of spelling.

      • username223 3 days ago

        ... until spellcheck gets "AI," and starts turning correctly-spelled words into different words that it thinks are more likely. (Don't get me started on "its" vs. "it's," which autocorrect frequently randomly incorrects.)

  • soulofmischief 3 days ago

    Some of us realize this technology was inevitable and are more focused on figuring out how society evolves from here instead of complaining and trying to legislate away math and prevent honest people from using these tools while criminals freely make use of them.

    • dowager_dan99 3 days ago

      This is a really negative and insulting comment towards people who are struggling with a very real, very emotional response to AI, and super-concerned about both the real and potential negatives that the rabid boosters won't even acknowledge. You don't have to "play the game" to make an impact, it's valid to try and challenge the math and change the rules too.

      • soulofmischief 3 days ago

        > This is a really negative and insulting comment towards people who are struggling with a very real, very emotional response to AI

        I disagree that my comment was negative at all. Many of those same people (not all) spend a lot of time making negative comments towards my work in AI, and tossing around authoritarian ideas of restriction in domains they understand, like art and literature, while failing to properly engage with the real issues such as intelligent mass surveillance and increased access to harmful information. They would sooner take these new freedom weapons out of the hands of the people while companies like Palantir and NSO Group continue to use them at scale.

        > super-concerned about both the real and potential negatives that the rabid boosters won't even acknowledge

        So am I, the difference is I am having a rational and not an emotional response, and I have spent a lot of time deeply understanding machine learning for the last decade in order to be able to have a measured, informed response.

        > You don't have to "play the game" to make an impact, it's valid to try and challenge the math and change the rules too

        I firmly believe you cannot ethically outlaw math, and this is part of why I have trouble empathizing with those who feel otherwise. People are so quick to support authoritarian power structures the moment it supposedly benefits them or their world view. Meanwhile, the informed are doing what they can to prevent this stuff from being used to surveil and classify humanity, and to find a balance that allows humans to coexist with artificial intelligence.

        We are not falling prey to reactionary politics and disinformation, and we are not willing to needlessly expand government overreach and legislate away critical individual freedom in order to achieve our goals.

        • spencerflem 3 days ago

          It's not outlawing math; it's outlawing what companies can sell as a product.

          That's like saying you can't outlaw selling bombs in a store because it's "chemistry".

          Or even for usage: can we not outlaw shooting someone with a gun because it is "projectile physics"?

          I'm glad you do oppose Palantir - we're on the same side and I support what you're doing! - but I also think you're leaving the most effective solution on the table by ignoring regulatory options.

          • soulofmischief 3 days ago

            We can definitely regulate people's and especially organizations' actions. But a lot of the emotional responses to AI that I encounter are having a different conversation, and many just blindly hate "AI" without even understanding what it is, and want to infringe on the freedoms of individuals to use this groundbreaking technology. They're like the antivaxxers of the digital world, and I encountered many of the same people whenever I worked in the decentralized web space, using the same vague arguments about electricity usage and such.

            • spencerflem 3 days ago

              I feel like it's less antivaxx and more like the anti-nuclear style movement. The antivaxxers are Just Wrong.

              But for nuclear - there are certainly good uses for nuclear power, but it's scary! and powers evil world-ending bombs! and if it goes wrong people end up secretly mutated and irradiated and it's all so awful and we should shut it down now !!

              And to be honest I don't know my own feelings on nuclear power or "good" AI either, but I do get it when people want to Shut it All Down Right Now !! Even if there is a legitimate case for being genuinely useful to real people.

    • jplusequalt 3 days ago

      >Some of us realize this technology was inevitable

      How was any of this inevitable? Point me to which law of physics demanded we reach this state of the universe. These companies actively choose to train these models, and by framing their development as "inevitable" you are helping absolve them of any of the negative shit they have/will cause.

      >figuring out how society evolves from here instead of complaining and trying to legislate away math

      Could you not apply this exact logic to the creation of nuclear weaponry--perhaps the greatest example of tragedy of the commons?

      >prevent honest people from using these tools while criminals freely make use of them

      What is your argument here? Should we suggest that everyone learn how to money launder to even the playing field against criminals?

      • soulofmischief 3 days ago

        > Point me to which law of physics demanded we reach this state of the universe

        Gestures vaguely around at everything

        Intelligence is intelligence, and we are beginning to really get down to the fundamentals of self-organization and how order naturally emerges from chaos.

        > Could you not apply this exact logic to the creation of nuclear weaponry--perhaps the greatest example of tragedy of the commons?

        Yes, I can. Access to information is one thing: it must be carefully handled, but information wants to be free, and there should be no law determining what one person can say to another, barring NDAs and government classification of national secrets (which doesn't include math and physics). But we absolutely have international treaties to limit nuclear proliferation, and we also have countries who do not participate in these treaties, or violate them, which illustrates my point that criminals will do whatever they want.

        > Should we suggest that everyone learn how to money launder to even the playing field against criminals?

        I have no interest in entertaining your straw men. You're intelligent enough to understand context.

      • MarkusQ 2 days ago

        > Should we suggest that everyone learn how to money launder to even the playing field against criminals?

        Certainly. We should also teach them how phishing scams work, along with confirmation bias, high-pressure sales tactics, phantom limbs, vote splitting, inflation, optical illusions, demagoguery, peer pressure, lotteries, both insurance and insurance fraud, and lots of other things.

    • bgwalter 3 days ago

      DDT was a very successful insecticide that was outlawed due to its adverse effects on humans.

      • soulofmischief 3 days ago

        I shouldn't have to tell you that producing, distributing and using a toxic chemical that negatively affects the earth and its biosphere are much, much different than allowing people to train and use models for personal use. This is a massive strawman and doesn't even deserve as much engagement as I've given it here.

      • absurdo 3 days ago

        It didn’t have a trillion dollar marketing campaign behind it.

    • harimau777 3 days ago

      Have they come up with anything? So far I haven't seen any solutions presented that are both politically viable and don't result in people being even more under the thumb of late stage capitalism.

      • soulofmischief 3 days ago

        This is one of the most complicated issues humanity has ever dealt with. Don't hold your breath, it's gonna be a while. Society at large doesn't even have a healthy relationship with the internet and mobile phones, these advancements in artificial intelligence came at both a good and awful time.

    • collingreen 3 days ago

      If only there were more nuances and options between those two extremes! Oh well, back to the anti math legislation pits I guess.

      • soulofmischief 3 days ago

        There are many nuances to this argument, but I am not trying to write a novel in a hacker news comment. Certain broad strokes absolutely apply, and when you get down to brass tacks it's about respecting personal freedom.

throwawaybob420 3 days ago

It’s not angst to see the people who run the companies we work for “encourage” us to use Claude to write our code knowing full well it’s their attempt to see if they really can fire us without a hit in “productivity”.

It’s not angst to see students across the entire spectrum end up using ChatGPT to write their papers, summarize three paragraphs, and bypass any learning.

It’s not angst to see people ask a question of an LLM and take what it says as gospel.

It’s not angst to understand the environmental impact of all this stupid fucking shit.

It’s not angst to see the danger in generative AI not only just creating slop, but further blurring the lines of real and fake.

It’s not angst to see the vast amount of non-consensual porn being generated of people without their knowledge.

Feel like I’m going fucking crazy here, just day after day of people bowing down at the altar and legit not giving a single fuck about what happens after rofl

  • bluefirebrand 3 days ago

    Hey for what it's worth, you aren't alone

    This is a really wild and unpredictable time, and it's ok to see the problems looming and feel unsettled at how easily people are ignoring the potential oncoming train

    I would suggest taking some time for yourself to distance yourself from this as much as you can for your own mental health

    Ride this out as best you can until things settle down a bit. You aren't alone

swyx 3 days ago

> Just to be clear, I note an absence of concern for cost and carbon in these conversations. Which is unacceptable. But let’s move on.

hold on, it's very simple. here's a one-liner even degrowthers would love: extra humans cost a lot more in money and carbon than it costs to have an llm spin up and down to do this work that would otherwise not get done.

nicbou 3 days ago

One aspect is missing: content creation.

AI is completely destroying the economics of putting out free information. LLMs still rely on human beings to experience and document the real world, but they strip those humans of the reward. Creators lose the income, credit and community that come with having an audience. In the long term, I fear that a lot of the quality information will disappear because it's no longer worth creating.

I wrote a bit about this earlier in a very relevant thread: https://news.ycombinator.com/item?id=44099570

enknee1 3 days ago

The real value in vibe coding does not come to developers who are already out at the bleeding edge of technology. Vibe coding's true value is for people who know very little about programming, who know just enough to be able to debug a type issue, or who have the time to read and research the issues outside of the general structure provided by LLMs. I've never created an Android app before. But I can do that in 24 hours now.

These tools are two years old, and they're vastly superior to their initial versions. As people continue to use them and provide feedback, these tools will continue to improve and become better and better at providing customers (non-programmers) access to features, tools, and technologies that they would otherwise have to rely on a team of developers for.

Personally I cannot afford the thousands of dollars per hour required to retain a team of top-shelf developers for some crazy hare-brained Bluetooth automation for my house lighting scheme. I can, however, spend a weekend playing around with Claude (and ChatGPT and...). And I can get close enough. I don't need a production tool. I just need the software to do the little thing, the two seconds of work, that I don't want to do every single day.

Who's created a RAG pipeline? Not me! But I can walk through the BS necessary to get Postgres, FastAPI, and Llama 3 set up so that I can start automating email management.

And that's the beauty: I don't have to know everything anymore! Nor spend months trying to parse all the specialized language surrounding the tools I'll need to implement. I just need to ask the questions I don't have answers for, making sure that I ask enough that the answers tie back into what I do know.

And LLMs and vibe coding do that just fine.
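
For a concrete sense of that stack, a minimal sketch of a Postgres + FastAPI + Llama 3 retrieval setup might look roughly like the following (the table, model names, and local Ollama endpoints are illustrative assumptions, not a recommended design):

    import psycopg2
    import requests
    from fastapi import FastAPI

    app = FastAPI()
    OLLAMA = "http://localhost:11434"  # assumes a local Ollama server hosting Llama 3

    def embed(text: str) -> list[float]:
        # Turn text into a vector with a local embedding model (hypothetical choice).
        r = requests.post(f"{OLLAMA}/api/embeddings",
                          json={"model": "nomic-embed-text", "prompt": text})
        return r.json()["embedding"]

    @app.get("/ask")
    def ask(question: str) -> dict:
        # Fetch the five most similar stored emails via pgvector's distance operator.
        with psycopg2.connect("dbname=mail") as conn, conn.cursor() as cur:
            cur.execute(
                "SELECT body FROM emails ORDER BY embedding <-> %s::vector LIMIT 5",
                (str(embed(question)),),
            )
            context = "\n---\n".join(row[0] for row in cur.fetchall())
        # Hand the retrieved context plus the question to Llama 3 and return its answer.
        r = requests.post(f"{OLLAMA}/api/generate",
                          json={"model": "llama3", "stream": False,
                                "prompt": f"Context:\n{context}\n\nQuestion: {question}"})
        return {"answer": r.json()["response"]}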

piker 3 days ago

> I really don’t think there’s a coherent pro-genAI case to be made in the education context

I use ChatGPT as an RNG of math problems to work through with my kid sometimes.

  • Herring 3 days ago

    I used it to generate SQL questions set in real-world scenarios. I needed to pick up joins intuitively, and the websites I could find were pretty dull.

spacephysics 3 days ago

I disagree with genAI not having an education use case.

I think a useful LLM for education would be one with heavy guardrails, which is “forced” to provide step-by-step back and forth tutoring instead of just giving out answers.

Right now hallucinations would be problematic, but assuming it's in a domain like math (and maybe combined with something like Wolfram to verify outputs), I could see this theoretical tool being very helpful for learning mathematics, or even other sciences.
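
As a rough sketch of that idea, with sympy standing in for the Wolfram-style verification (the tutoring prompt, model name, and OpenAI client usage are illustrative assumptions only):

    from openai import OpenAI
    import sympy as sp

    client = OpenAI()

    TUTOR_PROMPT = ("You are a math tutor. Never state the final answer. "
                    "Ask one guiding question per turn and wait for the student.")

    def tutor_turn(history: list[dict]) -> str:
        # One step of the back-and-forth; `history` is the usual list of chat messages.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "system", "content": TUTOR_PROMPT}] + history,
        )
        return resp.choices[0].message.content

    def verify_root(equation: str, variable: str, claimed: str) -> bool:
        # Check a claimed solution symbolically instead of trusting the model's arithmetic.
        lhs, rhs = equation.split("=")
        x = sp.Symbol(variable)
        residual = sp.sympify(lhs) - sp.sympify(rhs)
        return sp.simplify(residual.subs(x, sp.sympify(claimed))) == 0

    print(verify_root("2*x + 3 = 11", "x", "4"))  # True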

For more open-ended subjects like English, history, etc., it may be less useful.

Perhaps only as a demonstration: maybe an LLM is prompted to pretend to be a peasant from medieval Europe, and with text-to-voice we could have students as a group interact with and ask questions of the LLM. In this case, maybe the LLM is only trained on historical text from specific time periods, with settings to be more deterministic and reduce hallucinations.

prmph 3 days ago

I finally tried Claude Code for most of last week on a toy Typescript project of moderate complexity. It's supposedly the pinnacle of agentic coding assistants, and I tend to agree, finding it far ahead of Copilot et al. Seeing it working was like a bit of magic, and it was very addictive. It successfully distracted me from my main projects that I code mostly by hand.

That said, and it's kind of hard to express this well, not only is the actual productivity still far from what the hype suggests, but I regard agentic coding as being like a bad addictive drug right now. The promise of magic from the agent always seems just around the corner: just one more prompt to finally fix the rough edges of what it has spat out, just one more helpful hint to put it on the right path/approach, just one more reminder for it to actually apply everything in CLAUDE.md each time...

Believe it or not, I spent several days with it, crafting very clear and specific prompts, prodding with all kinds of hints, even supplying it with legacy code that mostly works (although written in C#), and at the end it had written a lot of code that almost works, except a lot of simple things just wouldn't work, no matter how much time I spent with it.

In the end, after a couple of hours of writing the code myself, I had a high-quality type design and basic logic, and a clear path to implementing all the basic features.

So, I don't know, for now even Claude seems mostly useful only as a sporadic helper within small contexts (drafting specific functions, code review of moderate amounts of code, relatively simple refactoring, etc). I believe knowing when AI would help vs slow you down is becoming key.

For this tech to improve, maybe a genetic/evolutionary approach would be needed. Given a task, the agent should launch several models to work on the problem, with each model also launching several randomized approaches to working on the problem. Then the agent should evaluate all the responses and pick the "best" one to return.
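
In code, that fan-out-and-judge loop might look roughly like this (a sketch assuming an OpenAI-compatible client; the model pool, temperatures, and judging prompt are placeholders, and a real coding agent would also run tests or linters rather than rely on a judge alone):

    import itertools
    from openai import OpenAI

    client = OpenAI()

    MODELS = ["gpt-4o", "gpt-4o-mini"]   # hypothetical candidate pool
    TEMPERATURES = [0.2, 0.8]            # the "randomized approaches" per model

    def candidates(task: str) -> list[str]:
        # Fan the task out across every model/temperature combination.
        out = []
        for model, temp in itertools.product(MODELS, TEMPERATURES):
            resp = client.chat.completions.create(
                model=model, temperature=temp,
                messages=[{"role": "user", "content": task}],
            )
            out.append(resp.choices[0].message.content)
        return out

    def pick_best(task: str, answers: list[str]) -> str:
        # Evaluate all the responses with a judge model and pick the "best" one to return.
        numbered = "\n\n".join(f"[{i}] {a}" for i, a in enumerate(answers))
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content":
                       f"Task:\n{task}\n\nCandidates:\n{numbered}\n\n"
                       "Reply with only the index of the best candidate."}],
        )
        return answers[int(resp.choices[0].message.content.strip())]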

nikolayasdf123 3 days ago

> Go programming language is especially well-suited to LLM-driven automation. It’s small, has a large standard library, and a culture that has strong shared idioms for doing almost anything

+1 to this. thank you `go fmt` for uniform code. (even culture of uniform test style!). thank you culture of minimal dependencies. and of course go standard library and static/runtime tooling. thank you simple code that is easy to write for humans..

and as it turns out for AIs too.

  • zenlikethat 3 days ago

    I found that bit slightly ironic because it always seems to produce slightly cringy Go code for me that might get the job done but skips over some of the usual design philosophies like use of interfaces, channels, and context. But for many parts, yeah, I’ve been very satisfied with Go code gen.

    • nikolayasdf123 2 days ago

      of course. it is not there yet. same happens for me. AIs do not get full project view nor dynamics of classes, behavior, domains... probably that is soon coming

      for me it works well for small scope, isolated sub system or trivial code. unit tests, "given this example: A -> B, complete C -> ?" style transformation of classes (e.g. repositories, caches, etc.)

  • icedchai 3 days ago

    I have found LLMs (mainly using Claude) are, indeed, excellent at spitting out Go boilerplate.

lowsong 3 days ago

> at the moment I’m mostly in tune with Thomas Ptacek’s My AI Skeptic Friends Are All Nuts. It’s long and (fortunately) well-written and I (mostly) find it hard to disagree with.

Ptacek has spent the past week getting dunked on in public for that article. I don't think aligning with it lends you a lot of credibility.

> If you’re interested in that thinking, here’s a sample; a slide deck by a Keith Riegert for the book-publishing business which, granted, is a bit stagnant and a whole lot overconcentrated these days. I suspect scrolling through it will produce a strong emotional reaction for quite a few readers here. It’s also useful in that it talks specifically about costs.

You're not wrong here. I read the deck and the word that comes to mind is "disgusting". Then again, the morally bankrupt have always done horrible things to make a quick buck — AI is no different.

  • icedchai 3 days ago

    Getting "dunked" only means it's controversial, not necessarily wrong. Developers who don't embrace AI tools are going to get left behind.

    • lowsong 3 days ago

      > Getting "dunked" only means it's controversial, not necessarily wrong.

      It undermines the author's position of being "moderate" if they align with perhaps the most divisive and aggressively written pro-AI puff piece doing the rounds.

      > Developers who don't embrace AI tools are going to get left behind.

      I'm not sure how to respond to this. I am doubtful a comment on Hacker News will change your mind, but I'd ask you to think about two questions.

      If AI is going to be as revolutionary in our industry as other changes of the past, like web or mobile, then how would a similar statement sound around those? Is saying "Developers who don't embrace mobile development are going to get left behind" a sensible statement? I don't think so, even with how huge mobile has been. Same with other big shifts. "Developers who don't embrace microservice architecture are going to get left behind"? Maybe more comparable, but equally silly. So, why would it be different than those? Do you think LLM tools are more impactful than any other change in history?

      Second, if AI truly is as groundbreakingly revolutionary as you suggest, what happens to us? Maybe you'll call me a luddite, raging against the loss of jobs when confronted with automated looms, but you'll have to forgive me for not welcoming my own destruction with open arms.

      • icedchai 3 days ago

        I understand your skepticism. I think, in 20 years, when we look back, we'll see this time was the beginning of a fundamental paradigm shift in software development. This will be similar in magnitude to the move from desktop to web development in the 90's. If I told you, in 1996, that "developers who don't embrace web development will be left behind", it would be an accurate statement.

      • cloverich 3 days ago

        You have to compare it at the right level. A _developer_ who did not embrace mobile is fine, because the market _grew_ as a result of mobile. For developers, there were strictly more opportunities to branch out and find work. For _companies_ however, yes, if they failed to embrace mobile many of them absolutely were hard-passed (or lost substantial market share) compared against those who did. Just like those who failed to embrace the internet were hard passed before that.

        A more apt comparison might be comparing it to the arrival of IDE's and quality source control? Do you think developers (outside of niche cases) working out of text editors and rsyncing code to production are able to find jobs as easily as those who are well versed in using e.g. a modern language tooling + Github in a team environment? Because I've directly seen many such developers being turned down by screening and interviews; I've seen companies shed talent when they refused to embrace git while clinging to SVN and slow deployment processes; said talent would go on to join companies that were later IPOing in the same space for a billion+ while their former colleagues were laid off. To me it feels quite similar to those moments.

    • kiitos 3 days ago

      Then maybe replace "getting dunked on" with "getting ratio'd" -- underlying point is the same, the post was a bad take.

      • tptacek 3 days ago

        To be fair, you had the same response to Kenton Varda's post about using Claude Code to build an OAuth component for Cloudflare, to the point of calling his work just a tiny step away from "vibe coding".

        • kiitos 3 days ago

          I called that project one step away from vibe coding, which I stand behind -- 'tiny' is your editorializing. But his thing wasn't as summarily dunked-on, or ratio'd, or however you want to call it, as your thing was, I don't think! ;)

          • tptacek 3 days ago

            I don't feel like I got "ratio'd" at all? I'd say the response broke down roughly 50/50, as I expected it to. I got "dunked on" here yesterday for suggesting that userland TCP/IP stacks were a good idea; I'm not all that sensitive to "dunking".

      • icedchai 3 days ago

        What was bad about it? Everything he wrote sounded very pragmatic to me.

    • bgwalter 3 days ago

      Sure, tptacek will outprogram all of us. With his two GitHub repositories, one of which is a POC.

      • icedchai 3 days ago

        Have you tried any of the tools, like Cursor or Zed? They increase productivity if you use them correctly. If you give them quality inputs like well written, spec-like prompts, instruct them to work in phases, provide feedback on testing, the results can be very, very good. Unsurprisingly, this is similar to what you need to give to a human to also get positive results.

greybox 3 days ago

> horrifying survey of genAI’s impact on secondary and tertiary education.

I agree with this. It's probably terrible for structured education for our children.

The one and only caveat: self-driven language learning.

The one and only actual use (outside of generating funny memes) I've had from any LLM so far is language learning. That I would pay for. Not $30/pcm mind you... but something. I ask the model to break down a target-language sentence for me, explaining each and every grammar point, and it does so very well, sometimes even going on to explain the cultural relevance of certain phrases. This is great.

I've not found any other use for it yet though. As a game engine programmer (C++), the code I write nowadays is quite deliberate and relatively little compared to a web developer's (I used to be one, I'm not pooping on web devs). So if we're talking about the time/cost of having me as a developer work on the game engine, I'm not saving any time or money by first asking Claude to type what I was going to type anyway. And it's not advanced enough yet to hold the context of our entire codebases spanning multiple components.

Edit, Migaku [https://migaku.com/] is a great language learning application that uses this

Like OP, I'm not sure it's worth all that CO2 we're pumping into our atmosphere.

  • Alex-Programs 3 days ago

    AI progress has also made high quality language translation a lot cheaper. When I started https://nuenki.app last year, the options were exorbitantly priced DeepL for decent quality low latency translation or Sonnet for slightly cheaper, much slower, but higher quality translation.

    Now, just a year later, DeepL is beaten by open models served by https://groq.com for most languages, and Claude 4 / GPT-4.1 / my hybrid LLM translator (https://nuenki.app/translator) produce practically perfect translations.

    LLMs are also better at critiquing translations than producing them, but pre-thinking doesn't help at all, which is just fascinating. Anyway, it's a really cool topic that I'll happily talk at length about! They've made so much possible. There's a blog on the website, if anyone's curious.

Havoc 3 days ago

> I think about the carbon that’s poisoning the planet my children have to live on.

Tbh I think we’re going to need a big breakthrough to fix that anyway. Like fusion etc.

A bit less proompting isn't going to save the day.

That’s not to say one shouldn’t be mindful. Just think it’s no longer enough

absurdo 3 days ago

Poor HN.

Is there a glimpse of the next hype train we can prepare to board once AI gets dulled down? This has basically made the site unusable.

  • ManlyBread 3 days ago

    My sentiments exactly, lately browsing HN feels like a sales pitch for LLMs, complete with the same snark about "luddites" and promises of future glory I remember back when NFTs were the hot new thing in tech. Two more weeks I guess.

    • Kiro 3 days ago

      NFTs had zero utility but even the most anti AI posts are now "ok, AI can be useful but what are the costs?". It's clearly something different.

    • whynotminot 3 days ago

      Really? I feel like hackernews is so anti-AI I go to other places for the latest. Anything posted here gets destroyed by cranky programmers desperately hoping this is just a fad.

    • tptacek 3 days ago

      I share this complaint, for what it's worth.

  • yoz-y 3 days ago

    At the moment 6 out of 30 front page articles are about AI. That's honestly quite okay.

    • lagniappe 3 days ago

      I use something called the Rust Index, where I compare a term or topic to the number of posts with "written in Rust" in the title.

      • steveklabnik 3 days ago

        HN old-timers would call this the Erlang Index.

        • lagniappe 3 days ago

          I was just thinking about you.

      • absurdo 3 days ago

        C-can we get an open source of this?

        Is it written in Rust?

  • layer8 3 days ago

    Anti-aging is an evergreen.

  • acedTrex 3 days ago

    It has made large parts of the internet and frankly previously solid tools and products unusable.

    Just look at the GitHub product being transformed into absolute slop central, it's wild. GitHub Universe was exclusively focused on useless LLM additions.

    • gh0stcat 3 days ago

      I'm interested to see what the landscape of public code will look like in the next few years, with sites like StackOverflow dropping off, discussions moving to Discord, and code generation flooding GitHub; writing your own high-quality code in the open might become very valuable.

      • acedTrex 3 days ago

        I am very bearish on that idea to be honest, I think the field will stagnate.

        • rightbyte 3 days ago

          Giving away secret sauce for free is not the way of the new guilded era.

greybox 3 days ago

This is probably the best opinion piece I've read so far on GenAI

  • flufluflufluffy 3 days ago

    Yep, basically sums up all of my thoughts about ai perfectly, especially the environmental impact.

keybored 3 days ago

> My input stream is full of it: Fear and loathing and cheerleading and prognosticating on what generative AI means and whether it’s Good or Bad and what we should be doing. All the channels: Blogs and peer-reviewed papers and social-media posts and business-news stories. So there’s lots of AI angst out there, but this is mine. I think the following is a bit unique because it focuses on cost, working backward from there. As for the genAI tech itself, I guess I’m a moderate; there is a there there, it’s not all slop.

Let’s see.

> But, while I have a lot of sympathy for the contras and am sickened by some of the promoters, at the moment I’m mostly in tune with Thomas Ptacek’s My AI Skeptic Friends Are All Nuts. It’s long and (fortunately) well-written and I (mostly) find it hard to disagree with.

So the Moderate is a Believer. But it’s offset by being concerned about The Climate and The Education and The Investments.

You can try to write a self-aware/moment-aware intro. It’s the same fodder for the front page.

  • CaptainFever 2 days ago

    As someone who is pretty strongly pro-AI, I felt that the article was leaning a bit on the anti side. So given our two opinions, I think "moderate" is a pretty good descriptor (people on both sides think you lean towards the other side).

    • keybored 2 days ago

      The primary purpose[1] of flooding the front page with AI thinkpieces is, more narrowly, to champion the application of AI to software development. Being skeptical about its electricity use and its uses in education is not as important.

      [1] Which is an emergent phenomenon

jillesvangurp 3 days ago

I think the concerns about climate and CO2 emissions are valid but not a show stopper. The big picture here is that we are living through two amazing revolutions at the same time:

1) The emergence of LLMs and AIs that have turned the Turing test from science fiction into an irrelevance. AI is improving at an absolutely mind-boggling rate.

2) The transition from a fossil-fuel-powered world to a world that will be net zero in a few decades. The pace in the last five years has been amazing. China is basically rolling out amounts of solar and batteries that were unthinkable in even the most optimistic predictions a few years ago. The rest of the world is struggling to keep up, and that's causing some issues, with some countries running backward (mainly the US).

It's true that a lot of AI is powered by mix of old coal plants, cheap Texan gas and a few other things that aren't sustainable (or cheap if you consider the cleanup cost). However, I live in the EU and we just got cut off from cheap Russian gas, are now running on imported expensive gas (e.g. from Texas) and have some pet peeves about data sovereignty that are causing companies like OpenAI, Meta, and Google to have to use local data centers for serving their European users. Which means that stuff is being powered with electricity that is locally supplied with a mix of old dirty legacy infrastructure and new more or less clean infrastructure. That mix is shifting rapidly towards renewables.

The thing is that old dirty infrastructure has been on a downward trajectory for years. There are not a lot of new gas plants being built (LNG is not cheap) and coal plants are going extinct in a hurry because they are dirty and expensive to operate. And the few gas plants that are still being built are in stand by mode much of the time and losing money. Because renewables are cheaper. Power is expensive here but relatively clean. The way to get prices down is not to import more LNG and burn it but to do the opposite.

What I like about things that increase demand for electricity is that they generate investments in clean-energy solutions and actually accelerate the transition. The big picture here is that the transition to net zero is going to vastly increase demands on power grids. If you add up everything needed for industry, transport, domestic and industrial heating, aviation, etc., it's a lot. But the payoffs are also huge. People think of this as cost. That's short-term thinking. The big picture here is long term. And the payoff is net zero and cheap power, making energy-intensive things both affordable and sustainable. We're not there yet, but we're on a path towards that.

For AI that means, yes, we need a lot of TW of power and some of the uses of AI seem frivolous and not that useful. But the big picture is that this is changing a lot of things as well. I see power needs as a challenge rather than a problem or reason to sit on our hands. It would be nice if that power was cheap. It so happens that currently the cheapest way to generate power happens to be through renewables. I don't think dirty power is long term smart, profitable, or necessary. And we could definitely do more to speed up its demise. But at the same time, this increased pressure on our grids is driving the very changes we need to make that happen.

timr 3 days ago

> On the money side? I don’t see how the math and the capex work. And all the time, I think about the carbon that’s poisoning the planet my children have to live on.

The "math and capex" are inextricably intertwined with "the carbon". If these tools have some value, then we can finally invest in forms of energy (i.e. nuclear) that will solve the underlying problem, and we'll all be better off. If the tools have no net value at a market-clearing price for energy (as purported), then it won't be a problem.

I mean, maybe the productive way to say this is that we should more formally link the environmental cost of energy production to the market cost of energy. But as phrased (and I suspect, implied), it sounds like "people who use LLMs are just profligate consumers who don't care about the environment the way that I do," and that any societal advancement that consumes energy (as most do) is subject to this kind of generalized luddite criticism.

  • lyu07282 3 days ago

    > If these tools have some value, then we can finally invest in forms of energy (i.e. nuclear) that will solve the underlying problem

    I'm confused what you are saying, do you suggest "the market" will somehow do something to address climate change? By what mechanism? And what do LLMs have to do with that?

    The problem with LLMs is that they require exorbitant amounts of energy and fresh water to operate, driving a global increase in ecological destruction and carbon emissions. [ https://www.greenmemag.com/science-technology/googles-contro... ]

    That's not exactly a new thing, just making the problem worse. What is now different with LLMs as opposed to for example crypto mining?

    • timr 3 days ago

      > I'm confused what you are saying, do you suggest "the market" will somehow do something to address climate change? By what mechanism? And what do LLMs have to do with that?

      No, I'm suggesting that the market will take care of the cost/benefit equation, and that the externalities are part of the costs. We could always do a better job of making sure that costs capture these externalities, but that's not the same thing as what the author seems to be saying.

      (Also I'm saying that we need to get on with nuclear already, but that's a secondary point.)

      > The problem with LLMs is that they require exorbitant amounts of energy and fresh water to operate, driving a global increase in ecological destruction and carbon emissions.

      They no more "require" this than operating an electric car "requires" the same thing. While there may be environmental extremists who advocate for a wholesale elimination of cars, most sane people would be happy with the balance between cost and benefit represented by electric cars. Ergo, a similar balance must exist for LLMs.

      • lyu07282 3 days ago

        > I'm suggesting that the market will take care of the cost/benefit equation, and that the externalities are part of the costs.

        You believe that climate change is an externality that the market is capable of factoring in the cost/benefit equation. Then I don't understand why you disagreed with the statement "the market will somehow do something to address climate change". There is a more fundamental disagreement here.

        You said:

        > If these tools [LLMs/ai] have some value, then we can finally invest in forms of energy (i.e. nuclear) that will solve the underlying problem

        And again, why? By what mechanism? Let's say Microsoft 10x's its profit through AI; then it will "finally invest in forms of energy (i.e. nuclear) that will solve the underlying problem". But why? Why would it? Why do you say "we" if we're talking about the market?

sovietmudkipz 3 days ago

Minor off-topic quibble about streams: I’ve been learning about network programming for realtime multiplayer games, specifically about input and output streams. I just want to voice that the names are a bit confusing due to the perspective I adopt when I think about them.

Input stream = output from the perspective of the consumer. Things come out of this stream that I can programmatically react to. Output stream = input from the perspective of the producer. This is a stream you put stuff into.
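
For instance, with Python's asyncio streams (just an illustration of the naming), the "input" side is the one data comes out of for the consumer, while the "output" side is the one the producer puts data into:

    import asyncio

    async def consumer() -> None:
        reader, writer = await asyncio.open_connection("example.com", 80)
        # "Output" side: we, the producer, put bytes into the stream.
        writer.write(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        await writer.drain()
        # "Input" side: bytes come out of the stream for us, the consumer, to react to.
        data = await reader.read(1024)
        print(data[:80])
        writer.close()
        await writer.wait_closed()

    asyncio.run(consumer())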

…so when this article starts “My input stream is full of it…” the author is saying they’re seeing output of fear and angst in their feeds.

Am I alone in thinking this is a bit unintuitive?

  • nemomarx 3 days ago

    I think an input stream is input from the perspective of the consumer? Like it's things you are consuming or taking as inputs. Output is things you emit.

    Your input is ofc someone else's output, and vice versa, but you want to keep your description and thoughts to one perspective, and in a first-person blog that's clearly the author's POV, right?