I want to let people know why I’m strictly against using AI in everything I do without sounding like an ‘AI vegan’, especially in front of those who are genuinely ready to listen and follow the same.

Any sources I try to find to cite regarding my viewpoint are either mild enough to be considered AI generated themselves or filled with extremist views of the author. I want to explain the situation in an objective manner that is simple to understand and also alarming enough for them to take action.

  • solomonschuler@lemmy.zip · 9 points · 23 hours ago

    I just explained to a friend of mine why I don’t use AI. My hatred of AI stems from people making it seem sentient, from the companies’ business models, and of course from privacy.

    First off, to clear up any misconception: AI is not a sentient being. It does not know how to think critically, and it’s incapable of creating thoughts outside of the data it was trained on. Technically speaking, an LLM behaves like a lossy compression model: it takes what is effectively petabytes of information and compresses it down to a mere ~40 GB of weights. When it “decompresses,” it doesn’t reconstruct the original petabytes; it generates a response that approximates the data it was trained on.
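    For a sense of scale, the compression ratio implied by those figures can be checked with back-of-envelope arithmetic (1 PB and 40 GB are round numbers taken from the paragraph above, not measured values):

    ```python
    # Rough compression ratio implied by the comment's own figures:
    # training data on the order of petabytes, model weights around 40 GB.
    training_bytes = 1 * 10**15   # 1 PB (assumed round figure)
    model_bytes = 40 * 10**9      # 40 GB of weights

    ratio = training_bytes / model_bytes
    print(f"~{ratio:,.0f}:1")     # ~25,000:1 — far beyond lossless territory
    ```

    At roughly 25,000-to-1, there is no way to store the training data faithfully; the model can only keep a statistical summary of it.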

    There are several issues that make an LLM do poorly at this job. Remember, LLMs are trained almost exclusively on the internet, and as large as the internet is, it doesn’t have everything: your skip-list implementation is probably not identical to any skip list on the internet. Assuming you have a logic error in your skip list and you ask ChatGPT “what’s the issue with my codebase,” it will notice that the code you provided isn’t what it was trained on and will actively try to “fix” it, digging you into a deeper rabbit hole than the one you started in.

    On the other hand, if you ask ChatGPT to derive a truth table from a given sum of minterms, it will almost never be correct unless that exact function is heavily documented (e.g. the truth table of an adder/subtractor). This is the simplest example I can give of how these LLMs cannot think critically, cannot recognize patterns, and only regurgitate the information they were trained on. It will try to produce a solution, but it will consistently fail.
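    For contrast, deriving a truth table from a sum of minterms is a purely mechanical procedure that a few lines of ordinary code get right every time. A minimal sketch (the function Σm(1, 2, 4, 7), three-input odd parity, is just an illustrative choice):

    ```python
    def truth_table(n_vars, minterms):
        """Derive the truth table of a sum-of-minterms function.

        Returns one row per input combination, as a tuple of the input
        bits (MSB first) followed by the output bit: 1 if the row index
        is a listed minterm, else 0.
        """
        minterms = set(minterms)
        rows = []
        for i in range(2 ** n_vars):
            # Bit j of i is the value of input variable j, MSB first.
            bits = tuple((i >> (n_vars - 1 - j)) & 1 for j in range(n_vars))
            rows.append(bits + (1 if i in minterms else 0,))
        return rows

    # f(a, b, c) = Σm(1, 2, 4, 7): output is 1 exactly on those rows.
    for row in truth_table(3, {1, 2, 4, 7}):
        print(row)
    ```

    The point is that this is deterministic table lookup, not pattern completion, which is exactly why a model that only predicts likely text struggles with it.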

    This leads me to my first reason for refusing to use LLMs: they unintentionally fabricate a lot of information and present it as if it were true. When I started using ChatGPT to fix my codebases or solve problems like this, it induced a lot of doubt in the knowledge and intelligence I’ve built up over these past years in college.

    The second reason I don’t like LLMs is the business models of these companies. These tech billionaires build a bubble of delusion and fearmongering to keep their user base engaged. Headlines like “ChatGPT-5 is terrifying” or “OpenAI has fired 70,000 employees over AI improvements” work because people see the title and reinvest more money into the company, and because so many employees have their heads up these tech giants’ asses, they of course keep working with OpenAI. It is a fucking money-making loop for these giants. If I ever end up accepting a job at OpenAI, I want my family to put me in a goddamn psych ward; that’s how much I frown on these unethical practices.

    I often joke about this with people who don’t believe it to be the case, but it’s becoming more and more of a valid point in this fucked-up mess: if AI companies say they’ve fired X employees for “AI improvements,” why hasn’t this been adopted by defense companies/contractors or other professions in industry? It’s a rhetorical question, but it leads them to a better conclusion than “those X employees were fired because of AI improvements.”

    • mirshafie@europe.pub · +1/−2 · 13 hours ago
      13 hours ago

      This really is a problem with expectations and hype though. And it will probably be a problem with cost as well.

      I think that LLMs are really cool. They’re way faster and more concise than traditional search engines at answering most questions nowadays. That’s partly because search engines have degraded over the last 10 years, but LLMs still blow them out of the water in my opinion.

      And beyond that, I think you can generate some pretty cool things with them to use as a template. I’m not a programmer, but I’m building a fairly massive and relatively complicated application that wouldn’t be possible without an LLM. Sure, I still have to check every line and clean up a ton of code, and of course I realize it will all need a substantial code review and cleanup by real programmers if I’m ever going to ship it, but the thing I’m making is genuinely already better (in terms of performance and functionality) than a lot of what’s on the market. That has to count for something.

      Despite all that, I think we’re in the same kind of bubble now as we were in the early 2000s, except bigger. The oversell of AI comes from CEOs claiming (and, to the best of my judgement, apparently genuinely believing) that LLMs will somehow magically transcend into AGI if given enough compute. I think part of that stems from the massive (and unexpected) improvements that happened from GPT-2 to GPT-3.

      And lots of smart people (like Linus Torvalds, for example) point out that really, when you think about it, what is intelligence other than a glorified auto-correct? Our brains essentially function as lossy compression too. So for some people it is incredibly alluring to believe that if we just throw more chips on the fire, a true consciousness will arise. And so we’re investing all of our extra money and our pension funds into this thing.

      And the irony is that I and millions of others can therefore use LLMs at a steep discount. Lots of people are quickly getting accustomed to LLMs thinking they’re always going to be free or cheap, whereas they’re actually paid for by bubble money, and it’s not likely they’ll get much more efficient in the near future.