I want to let people know why I’m strictly against using AI in everything I do without sounding like an ‘AI vegan’, especially in front of those who are genuinely ready to listen and do the same.

Any sources I try to find to cite for my viewpoint are either mild enough to be considered AI generated themselves or filled with the author’s extremist views. I want to explain the situation in an objective manner that is simple to understand and also alarming enough for them to take action.

  • Brosplosion@lemmy.zip · 4 hours ago

    My go-to is basically: since I have to strictly verify all the information/data AI gives me, it’s faster for me to just produce this information myself. It’s literally what they pay me for.

  • Rentlar@lemmy.ca · 5 hours ago

    Personally it’s because the harder something is pushed to me by large corporations, the more skeptical I am to begin with.

    It is your stance; you don’t have to compulsively change other people’s minds. Let them live their lives, and you live how you want.

    For people who want to listen to you, you can tell them how you feel about AI (or perhaps specifically AI chatbots) in both subjective and objective terms. If you want to prepare research and talking points, I think the most effective thing is to have a couple of examples, such as the Google AI box putting out objectively wrong info with citation links leading to sites that don’t back up any claim in it. Or how the outputs of comic-style image generation tend to look like knock-off Tintin, uninspiring and unsettling. Or how reading generated paragraphs and looking at images and videos of fluffy slop is simply a waste of time for you. Mix that with all the rest of the shortcomings people have provided and you’ll make for a good discussion.

    Remember, the point is not to change people’s minds or proselytize, but rather to explain why you hold your opinion.

  • MissJinx@lemmy.world · 7 hours ago

    If it’s a decision you make out of conviction and values, it is 100% like veganism, so I would say embrace it.

    Live your truth and people will follow. Or not, and that’s ok too.

  • Nalivai@lemmy.world · 11 hours ago

    “it looks like shit from a butt and sounds like shit from a butt, and if I wanted to look at a shit from a butt, I would do that for free”

  • Twongo [she/her]@lemmy.ml · 11 hours ago

    I don’t necessarily think sources are needed.

    People don’t really care: if an acquaintance asks, you can just tell them it’s not your thing. If an employer asks, you lose either way. The deranged rants are reserved for close friends :) But if you need some evidence, look into the environmental consequences (fire up those coal mines for LLM prompts), the several studies suggesting only 60% of all answers are factual, the MIT study suggesting the brain atrophies from using AI, and the phenomenon called “AI psychosis”.

  • theparadox@lemmy.world · 10 hours ago

    I tried AI a few times over the last few years, and sometimes I don’t ignore the Gemini results from a search when I’m tired or I’m struggling to get good results.

    Almost every time I’ve done either, helpful-looking hallucinations wasted my time and made my attempt to find a solution to a technical problem less efficient. I will give specific examples, often unprompted.

    I also point to a graph of my electric bill.

    I also describe the logon script that a colleague (with no coding experience) asked for help with. He’d used AI to generate what he showed me and wanted help getting it to work: variables declared and never used; relevant information stored in one variable, but a different, similarly named variable used to retrieve it.
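
    The two bug patterns described can be sketched like this. The original was a logon script, so this Python rewrite and every name in it (map_network_drive, fileserver, the drive letters) is invented purely for illustration:

```python
# Hypothetical sketch of the two AI-generated-script bugs described above.
# All names are invented; the real case was a logon script.

def map_network_drive(username):
    drive_letter = "H:"  # Bug 1: declared, then never used again
    # The correct share path is stored in one variable...
    home_share = f"\\\\fileserver\\home\\{username}"
    # ...but a different, similarly named variable is created empty (Bug 2)...
    homeShare = ""
    # ...and the empty look-alike is the one that actually gets read:
    return f"net use X: {homeShare}"

print(map_network_drive("alice"))  # prints "net use X: " - the path is silently missing
```

The script runs without any error, which is exactly why this class of bug is hard for a non-programmer to spot.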

  • ragebutt@lemmy.dbzer0.com · 13 hours ago

    What is an “extremist view” in this context? Kill sam Altman? Lmao

    Welcome to the world of being an activist, buddy. Vegans are doing it for a living being with consciousness. Your cause is just too, imo, but just like the vegan who feels motivated and justified in bringing up their views because, to them, it’s a matter of life and death, you will be belittled and mocked by those who either genuinely disagree or who do recognize the issues you describe but lack the courage or self-control to change.

    Start with speaking when it’s relevant. Note that this will not always win you fans. I recently spoke to my physician on this issue, who asked for consent for LLM transcription of audio session notes and automatic summarization. I am not morally opposed to such a thing for health care providers, but I had many questions:

    • How are records transmitted, stored, and destroyed?
    • Does the model use any data fed into it, or the resultant summaries, for seeding/reinforcement learning/refinement/updating internal embeddings/continual learning? (This point is key because the language I’ve seen about this shifts a lot, but basically: do they feed your data back into the model to refine it further, or do they have separate training and production models that allow one to be “sanitary”?)
    • Does the AI model come from the EMR provider (often Epic) or a third party, and if the latter, is there a BAA? Etc.

    In my case my provider could answer exactly 0 (zero) of these, so I refused consent, and I am actively monitoring to ensure they continue not to use it at subsequent appointments. They are a professional, so they’ve remained professional, but it’s created some tension. I get it; I work in healthcare myself, and I’ve seen these tools demoed and have colleagues that use them. They save a fairly substantial amount of time, and in some cases they even guarantee against insurance clawbacks, which is a tremendous security advantage for a healthcare provider. But you gotta know what you’re doing, and even then you gotta accept that some people simply will be against it on principle. Thems the breaks.

  • tym@lemmy.world · 10 hours ago

    Your question is too vague to give any practical advice on, so I guess my advice is: don’t be so vague. There are hundreds of subjects within the umbrella term of AI (you’re actually talking about tokenized data inferred by LLMs, but I digress). A healthy distrust of the centralization of all the things is an honest conversation between adults. Using these various LLMs to remove tedious blockers to one’s work is perfectly acceptable.

    Now, if you’re coming at this from an environmental angle, then have that conversation with your people just as honestly as the centralization conversation. If you’re in a position where people hang on your advice, being diplomatic for self-preservation reasons is the worst thing you can do.

  • Strider@lemmy.world · 10 hours ago

    No is a full sentence.

    Oh you want to explain. For those that are really interested, there are websites explaining the main points.

  • bstix@feddit.dk · 13 hours ago

    You don’t need artificial intelligence. We already have intelligence at home.

  • solomonschuler@lemmy.zip · 22 hours ago

    I just explained to a friend of mine why I don’t use AI. My hatred towards AI stems from people making it seem sentient, these companies’ business models, and of course, privacy.

    First off, to clear up any misconception: AI is not a sentient being. It does not know how to think critically, and it’s incapable of creating thoughts outside the data it was trained on. Technically speaking, an LLM behaves like a lossy compression model: it takes what is effectively petabytes of information and compresses it down to a mere 40 GB. When it “decompresses”, it doesn’t reproduce the entire petabytes of information; it reconstructs a response resembling the data it was trained on.

    There are several issues I can think of that make an LLM do poorly at its job. Remember, LLMs are trained almost exclusively on the internet, and as large as the internet is, it doesn’t have everything: your skip-list implementation is probably not identical to any of the ones it was trained on. Assuming you have a logic error in your skip-list implementation and you ask ChatGPT “what’s the issue with my codebase”, it will notice that the code you provided isn’t what it was trained on and will actively try to “fix” it, digging you into a deeper rabbit hole than when you began the implementation.
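
    As a hypothetical stand-in for that scenario (a full skip list is long, so this shrinks it to a search along one sorted linked level, which is what a skip list’s bottom layer is), here is the kind of one-character logic error that an LLM tends to answer by rewriting the whole function toward its training data instead of spotting the bug:

```python
# Simplified stand-in for a skip-list search: one sorted linked level.

class Node:
    def __init__(self, key, nxt=None):
        self.key, self.nxt = key, nxt

def contains_buggy(head, key):
    node = head
    while node is not None and node.key <= key:  # bug: '<=' walks past a match
        node = node.nxt
    return node is not None and node.key == key  # so present keys report False

def contains_fixed(head, key):
    node = head
    while node is not None and node.key < key:   # the one-character fix
        node = node.nxt
    return node is not None and node.key == key

head = Node(1, Node(3, Node(5)))
print(contains_buggy(head, 3))  # False, even though 3 is in the list
print(contains_fixed(head, 3))  # True
```

The actual fix is one character, but it only falls out of tracing the loop against a concrete list, which is the kind of stepping-through a pattern-matcher doesn’t do.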

    On the other hand, if you ask ChatGPT to derive a truth table from a given sum of minterms, it will not be correct unless the case is heavily documented (e.g. the truth table of an adder/subtractor). This is the simplest example I can give of how these LLMs cannot think critically, cannot recognize patterns, and only regurgitate the information they were trained on. It will try to produce a solution, but it will always fail.
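
    For contrast, the truth-table task itself is purely mechanical, which is what makes the failure telling. A short sketch (the variable count and minterm indices below are made-up examples):

```python
# Derive a truth table for f = sum of the given minterm indices.
# A minterm index, read as a binary number, names the input row where f = 1.

def truth_table(n_vars, minterms):
    wanted = set(minterms)
    rows = []
    for i in range(2 ** n_vars):
        # Bit j of i (MSB first) is the value of input variable j.
        inputs = tuple((i >> (n_vars - 1 - j)) & 1 for j in range(n_vars))
        rows.append((inputs, 1 if i in wanted else 0))
    return rows

# Example: f(A, B) = Σm(1, 2), which is XOR.
for inputs, out in truth_table(2, [1, 2]):
    print(inputs, "->", out)
```

A dozen lines of deterministic code solve it for any input, which is exactly the kind of symbolic work a statistical text model has no reliable mechanism for.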

    This leads me to my first reason for refusing to use LLMs: they fabricate a lot of information and present it as if it were true. When I started using ChatGPT to fix my codebases or to do problems like these, it induced a lot of doubt in the knowledge and intelligence I’ve gathered these past years in college.

    The second reason I don’t like LLMs is the business models of these companies. To reiterate: these tech billionaires build a bubble of delusion and fearmongering to keep their userbase. Headlines like “chatGPT-5 is terrifying” or “openAI has fired 70,000 employees over AI improvements” work because people see the title and reinvest more money into the company, and because so many employees have their heads up these tech giants’ asses, they will of course keep working with openAI. It is a fucking money-making loophole for these giants. If I ever end up accepting a job at openAI, I want my family to put me into a goddamn psych ward; that’s how much I frown on these unethical practices.

    I often joke about this with people who don’t believe it to be the case, but it’s becoming more and more valid in this fucked-up mess: if AI companies say they’ve fired X employees over “AI improvements”, why hasn’t this been adopted by defense companies/contractors or other professions in industry? It’s a rhetorical question, but it leads them to a better conclusion than “those X employees were fired because of AI improvements”.

    • mirshafie@europe.pub · 12 hours ago

      This really is a problem with expectations and hype though. And it will probably be a problem with cost as well.

      I think that LLMs are really cool. It’s way faster and more concise than traditional search engines at answering most questions nowadays. This is partly because search engines have degraded in the last 10 years, but LLMs blow them out of the water in my opinion.

      And beyond that, I think you can generate some pretty cool things with it to use as a template. I’m not a programmer but I’m making a quite massive and relatively complicated application. That wouldn’t be possible without an LLM. Sure I still have to check every line and clean up a ton of code, and of course I realize that this is all going to have to go to a substantial code review and cleanup by real programmers if I’m ever going to ship it, but the thing I’m making is genuinely already better (in terms of performance and functionality) than a lot of what’s on the market. That has to count for something.

      Despite all that, I think we’re in the same kind of bubble now as we were in the early 2000s, except bigger. The oversell of AI comes from CEOs claiming (and to the best of my judgement they appear to be actually believing) that LLMs somehow magically will transcend into AGI if they’re given enough compute. I think part of that stems from the massive (and unexpected) improvements that happened from GPT-2 to GPT-3.

      And lots of smart people (like Linus Torvalds, for example) point out that really, when you think about it, what is intelligence other than a glorified auto-correct? Our brains essentially function as lossy compression. So I think for some people it is incredibly alluring to believe that if we just throw more chips on the fire, a true consciousness will arise. And so we’re investing all of our extra money and our pension funds into this thing.

      And the irony is that I and millions of others can therefore use LLMs at a steep discount. So lots of people are quickly getting accustomed to LLMs thinking that they’re always going to be free or cheap, whereas it’s paid for by the bubble money and it’s not super likely that it will get much more efficient in the near future.

  • GreenKnight23@lemmy.world · 19 hours ago

    at work mgmt always brings it up. “we need to use it more!”.

    I say nothing. I smile and nod. I ignore AI prompts. I ignore emails written by AI. I ignore requests coming in to integrate AI into the product.

    nobody has asked me about any of the inaction for the last year so I don’t plan on drawing any attention to it by outing myself.

    edit: I suppose if anybody does I can just say the AI agent I used failed to alert me to the thing they wanted. 🤣