benterix a day ago

> When we only half-understand something, we close the loop from observation to belief by relying on the judgement of our peers and authority figures, but these groups in tech are currently almost certain to be wrong or substantially biased about generative models.

Yeah, but the same could be said about basically any hype, like XML 25 years ago or object-oriented programming. There is huge hype, and then at some point the dust settles and society and businesses use the bits that are actually useful.

  • JohnFen a day ago

    History has plenty of examples of extremely hyped things that ultimately turned out to be worthless or of negative value, too.

    • verdverm a day ago

      AI will not be one of them; it was useful before LLMs and has only become more so since. Models will continue to improve in capability and to require fewer resources.

      • JohnFen a day ago

        Maybe, maybe not. It's far too soon to tell with LLMs. We'll have to check back in a decade or so.

        (Just a note -- a thing can be useful and still be a net negative.)

        • benterix 10 hours ago

          Your argument is too broad. You could say that we know today that social media are a net negative (because of their impact on mental health, among many other reasons). Yet how would you measure that? The same could be said about any technology; there will always be some negative aspects.

  • mwcampbell a day ago

    So why not wait and find out what those useful and not-so-harmful bits actually are, if there are any?

    • verdverm a day ago

      How can you know without trying to use them...?

lolc a day ago

I read the other piece and now this piece, and it still reads like scaremongering to me. Sure, there is hype, and a lot of stuff will turn out to be bad. A lot of people (me included) will say "told you so". On the flip side, I do use code generation, and we review that code like we review all code. The formal scientific approach demanded in the article runs counter to my preference for just trying it out.

So the question becomes whether my colleagues will change their review practices to humor me and my dumb bot. If they do, then yes, we've become hype-infected, and that will earn us some failures down the line. But I don't see why we should be helpless here, or why we should defer to whatever scientists say.

benterix a day ago

> It’s next to impossible for individuals to assess the benefit or harm of chatbots and agents through self-experimentation. These tools trigger a number of biases and effects that cloud our judgement. Generative models also have a volatility of results and uneven distribution of harms, similar to pharmaceuticals, that mean it’s impossible to discover for yourself what their societal or even organisational impact will be.

Yeah, but so do people.

  • mwcampbell a day ago

    It should be uncontroversial to say that we understand the failure modes of working with other people much better, because we have millennia of experience with that.

    • benterix 10 hours ago

      And yet we still have communication problems, misunderstandings, people consciously or unconsciously abusing others, people falling for all possible traps, educated people falling for populist propaganda, and so on.

      So while your point is valid, I'd say that (1) humanity's collective experience doesn't extrapolate to individual experience, and (2) since LLMs are trained on the same texts that humans produced, the actual experience is not that much different.