I want to let people know why I’m strictly against using AI in everything I do, without sounding like an ‘AI vegan’, especially in front of those who are genuinely ready to listen and follow suit.

Any sources I try to cite for my viewpoint are either so mild they could pass for AI-generated themselves or filled with the author’s extremist views. I want to explain the situation in a way that is objective, simple to understand, and alarming enough for them to take action.

  • mirshafie@europe.pub · 13 hours ago

    This really is a problem of expectations and hype, though. And it will probably become a problem of cost as well.

    I think that LLMs are really cool. They’re way faster and more concise than traditional search engines at answering most questions these days. That’s partly because search engines have degraded over the last 10 years, but LLMs still blow them out of the water in my opinion.

    And beyond that, I think you can generate some pretty cool things with them to use as a template. I’m not a programmer, but I’m building a fairly large and relatively complicated application, and that wouldn’t be possible without an LLM. Sure, I still have to check every line and clean up a ton of code, and of course I realize it will all need a substantial review and cleanup by real programmers if I’m ever going to ship it, but the thing I’m making is genuinely already better (in terms of performance and functionality) than a lot of what’s on the market. That has to count for something.

    Despite all that, I think we’re in the same kind of bubble now as we were in the early 2000s, except bigger. The oversell of AI comes from CEOs claiming (and, as far as I can tell, genuinely believing) that LLMs will somehow magically transcend into AGI if they’re given enough compute. I think part of that belief stems from the massive (and unexpected) improvements between GPT-2 and GPT-3.

    And lots of smart people (like Linus Torvalds, for example) point out that really, when you think about it, what is intelligence other than a glorified auto-correct? Our brains essentially function as lossy compression. So for some people it is incredibly alluring to believe that if we just throw more chips on the fire, a true consciousness will arise. And so we’re pouring all of our spare money and our pension funds into this thing.

    And the irony is that I and millions of others can therefore use LLMs at a steep discount. So lots of people are quickly getting accustomed to LLMs, thinking they’re always going to be free or cheap, when in reality it’s all paid for with bubble money, and it’s not very likely that the models will get much more efficient in the near future.