• spit_evil_olive_tips@beehaw.org
    15 hours ago

    the author’s Substack bio says “Director of EA DC”

    his website explains the acronym - it’s “Effective Altruism DC”

    at this point, your alarm bells should start ringing.

    but if you are blissfully unaware, “effective altruism” is a goddamn scam. it is an attempt by Silicon Valley oligarchs and techbros to wrap “I shouldn’t have to pay taxes” in a philosophical cloak. no more, no less.

    take all of his claims about “no bro AI datacenters are totally fine don’t listen to the naysayers” with a Lot’s-wife-sized pillar of salt.

    • coyotino [he/him]@beehaw.orgOP
      12 hours ago

      Damn that’s some next level detective work. Thx for the details

      Edit: God it just gets worse and worse. I just peeked at the website for his “EA DC” org and their board of directors literally includes an exec from Anthropic. This guy is literally taking money from the AI industry. This article is a complete joke.

      • Hirom@beehaw.org
        11 hours ago

        Consider mentioning this in the title for those who skip comments.

        Maybe “[Likely sponsored by Anthropic]”

    • Fluffy Kitty Cat@slrpnk.net
      17 hours ago

      I think a lot of the volume of GenAI queries are unsolicited, like Google running one for every search. Ban that and the datacenter volume drops a lot.

    • coyotino [he/him]@beehaw.orgOP
      17 hours ago

      I found the arguments about the environment convincing - he really does a great breakdown and comparison of other individual-level carbon emission sources and clearly explains why one person’s heavy ChatGPT usage is nothing compared to, say, using a laptop for an hour. I still hate ChatGPT and the rest for all the OTHER reasons that we all know by now, but on the environmental point, I felt this article was persuasive. Overly long, but persuasive.

  • poVoq@slrpnk.net
    17 hours ago

    Well, it is true that in the bigger picture there are larger fish to fry, but LLMs are so useless* that their relative environmental impact feels like an especially stupid and pointless waste.

    *I have tried them quite a bit and even ran open models myself, and I am still extremely underwhelmed by their actual usefulness. Yes, the first impression is cool, and they can write corporate emails well, but beyond that? I can’t help but think people like the author of the text (although probably most of that was written by AI) are deluding themselves about the usefulness.

    There are of course some narrow use cases like automatic transcription or text translation where modern machine learning is useful, but that stuff can run on a phone now 🤷

    • Hirom@beehaw.org
      4 hours ago

      More than doubting their usefulness, I wonder whether they’re hurting/slowing people more than they’re helping.

      There are studies as well as anecdotes of people giving up on vibe coding because it takes more effort to review and fix sloppy code than to write it yourself.

      I occasionally come across a well-written online tutorial that seems to answer the exact problem I’m trying to solve, and later realize it’s a complete dead end with made-up API/function names, even though it looks really convincing. Those are likely AI-generated tutorials with severe hallucinations and bad/zero disclosure.

      I’m now blacklisting whole websites using uBlock to avoid wasting more of my time reading them.
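
      For anyone wanting to do the same, here’s a minimal sketch using uBlock Origin’s static filter syntax (the domain name is a made-up placeholder, not a real site). Adding a line like this under “My filters” strict-blocks navigation to every page on that domain:

        ||ai-slop-tutorials.example^$document

      Dropping the $document option (just ||ai-slop-tutorials.example^) instead blocks network requests to that domain from other pages, without blocking direct visits.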