I'm new to Lemmy and I want to know your perspective on AI

  • cally [he/they]@pawb.social · 1 minute ago (edited)

    I don't know who that character is, but I don't like AI. It's polluting the environment and polluting the internet, all while disrespecting the work of artists (visual artists, musicians, voice actors, writers, photographers, etc.).

    Opt-out is not consent.

  • mechoman444@lemmy.world · 23 minutes ago

    AI is a tool. A Glock 9mm is a tool. A paintbrush is a tool.

    Tools are only as good, or as harmful, as their users. If AI is being used to flood the internet with slop, that is a human decision. The fact that AI was used to generate the slop does not taint the AI itself in any meaningful way. On this platform, however, it is fashionable to hate AI.

    The people who hate it here are either bad actors or have no real understanding of what AI is, what it does, or what it is for.

  • FaceDeer@fedia.io · 2 hours ago

    I’m a fan of the technology, I’ve been using it for various projects and I see a lot of potential. But there’s widespread anti-AI sentiment on the Fediverse. I notice you’re getting a lot of downvotes for merely asking about it.

  • FukOui@lemmy.zip · 2 hours ago (edited)

    At the moment, AI - at least in the form of LLMs - is just glorified autocomplete, and I think it does more harm than good. Is it a useful tool? Definitely. Should it replace jobs? Hell no. Is it being used as an excuse for the current recession and for layoffs caused by offshoring? Hell yes. Is it killing the internet and propagating fake news? Definitely.

    If we're talking about other applications (computer vision, image processing, etc.), then yes, they're genuinely useful. I think surveillance states (face verification) and the Ukraine-Russia war make heavy use of these applications.

    • theherk@lemmy.world · 34 minutes ago

      glorified autocomplete

      People repeat that like it has some value, but it's really just words. If autocomplete is glorified to the point of outputting something amazing, what is the value of saying it? I'm not saying it is, but if autocomplete spits out Shakespeare, then "glorified autocomplete" is amazing.

      I mean, in a sense, brains are just glorified autocomplete. So…?

      • bluespin@lemmy.world · 14 minutes ago

        It's an apt description of how these models function: they predict the most likely response to the input based on their training data. A brain can grasp concepts and reason about them - an LLM cannot.
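
        To make that concrete, here's a minimal sketch of what "predicting the most likely response" looks like in practice. It assumes the Hugging Face transformers library with small GPT-2 as a stand-in; any causal language model behaves the same way:

        ```python
        # Minimal sketch of next-token prediction, the core operation of an LLM.
        # Assumes: pip install transformers torch; GPT-2 is just a small stand-in.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        inputs = tokenizer("To be, or not to", return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

        # The model's entire output is a probability distribution over the next token.
        probs = torch.softmax(logits[0, -1], dim=-1)
        top = torch.topk(probs, 5)
        for p, idx in zip(top.values, top.indices):
            print(f"{tokenizer.decode(int(idx))!r}: {p:.3f}")
        # Generation just samples from this distribution over and over -
        # "autocomplete" in the literal sense.
        ```

        Everything the model produces comes out of that one repeated step; there is no separate reasoning stage.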

  • Perspectivist@feddit.uk · 8 hours ago

    The average user here thinks AI is synonymous with LLMs, and that it's not only not intelligent but also bad for the environment, immoral to use because it's trained on copyrighted content, a total job-killer that's going to leave everyone unemployed, and soulless slop that can't create real art or writing - basically just a lazy cheat for people who lack actual talent or skills.

    • audaxdreik@pawb.social · 55 minutes ago

      And that’s a good thing.

      It's not just that that's what the average person thinks - it's that LLMs are the only kind of AI they're likely to come into direct contact with, and the kind being applied to systems that are directly undermining their lives.

      ML has been used for over a decade now in things like cybersecurity, for behavioral analysis and EDR (Endpoint Detection and Response) systems. I've helped a friend use SLEAP, which analyzes specially formatted videos of animals to catalog interactions over dozens of hours of footage instead of needing to manually scrub through it. In these ways, the serious scientist or engineer does not care what the average person thinks of AI; it has no bearing on the functioning of these systems or the work they perform. The only people who care about the sentiment of the average person are the people who need to keep the hype train going for their product valuations, to which I have nothing to say but a full-throated "fuck 'em".
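
      For a flavor of what that kind of behavioral analysis looks like under the hood, here's a minimal sketch using scikit-learn's isolation forest to flag anomalous process behavior. The features and numbers are invented stand-ins for illustration, not any real EDR product:

      ```python
      # Toy sketch of ML-based behavioral anomaly detection, EDR-style.
      # The features and data are invented stand-ins, not any real product.
      import numpy as np
      from sklearn.ensemble import IsolationForest

      rng = np.random.default_rng(0)

      # Pretend per-process features: [file writes/min, network conns/min, child processes]
      normal = rng.normal(loc=[5.0, 2.0, 1.0], scale=[2.0, 1.0, 0.5], size=(500, 3))
      detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

      # A ransomware-like burst of file writes stands out from the learned "normal".
      suspicious = np.array([[300.0, 40.0, 12.0]])
      print(detector.predict(suspicious))  # [-1] means flagged as anomalous
      ```

      Real systems use far richer telemetry, but the principle - learn "normal", flag deviations - is the same.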

    • kescusay@lemmy.world · 8 hours ago

      And they’re right about all of that except the AI equals LLMs thing, but that’s forgivable because the LLM hustlers have managed to make the terms synonymous in most people’s minds through a massive marketing effort.

      • Wrufieotnak@feddit.org · 2 hours ago

        I would say they're right in that what companies are currently selling as AI is mostly just LLMs or other machine learning. We don't have true machine intelligence. The real separation is between what AI meant in the past and the current snake oil the hype train is trying to sell.

  • rustyfish@piefed.world · 8 hours ago

    He would be true AI. I would shower him with love.

    Just because some cocksucking finance bros call an LLM an AI doesn't make it an AI.

      • SpikesOtherDog@ani.social · 1 hour ago

        I don't use the term cocksucker myself, but I think the fact that it's a vulgarity already gives it a negative connotation. Like, I didn't pat my wife on the head last night and call her my cute little cocksucker. I can imagine that could be someone else's pillow talk, but it would leave me touch-starved for a while.

        I don't THINK calling a gay man a pussyfucker would have the same weight, but I don't have deep enough conversations with gay men to really know. I have heard that some men pride themselves on never having been with a woman, so maybe it would still hurt.

        On the flip side, just calling someone a fucker can be enough to start a fight.

        I'm not going to pretend that the poster meant to use the word the way they'd use "asshole," because cocksucker definitely hits different to male pride. I don't think I would use the word to hurt someone I was angry with, but who knows what might come out when emotions are high. I don't plan on using the word for fighting, but an insult can be enough to provoke someone into attacking recklessly. If you don't practice what you say, then you might just repeat something you will regret.

        To summarize, I hope the poster isn’t a bigot, but when given the chance they appear to have doubled down. Guess you got your answer.

        • Perspectivist@feddit.uk · 4 hours ago

          The problem isn’t that “everything is AI” - it’s that people think AI means way more than it actually does.

          That superintelligent sci-fi assistant you’re picturing? That’s called Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI). Both are subcategories of AI, but they’re worlds apart from Large Language Models (LLMs). LLMs are intelligent in a narrow sense: they’re good at one thing - churning out natural-sounding language - but they’re not generally intelligent.

          Every AGI is AI, but not every AI is AGI.

  • AbouBenAdhem@lemmy.world · 3 hours ago (edited)

    My opinion of AI/LLMs aside, I think that even the joking use of a made-up slur against non-humans still legitimizes the general use of slurs (and many who use real slurs believe their targets are subhuman).

    • audaxdreik@pawb.social · 1 hour ago

      This is a very good point, and to take it further, it almost legitimizes the AI as well. By using slurs against the AI in an effort to dehumanize it, you place it in a more anthropomorphized position of needing to be dehumanized, if that makes sense.

      And yeah, I just find slurs crude and distasteful in any form.

    • Perspectivist@feddit.uk · 4 hours ago

      Slurs seem to be okay for things considered subhuman or not human at all, which should be viewed as extremely ironic but for some reason isn't.

      • sem@piefed.blahaj.zone · 1 hour ago

        Dehumanization is when influencers convince some humans to put other humans in that subhuman category, along with the cockroaches and varmints.

  • jordanlund@lemmy.world · 8 hours ago

    I’m biased because of the work my kid does in the field. It’s paying his mortgage so… 😉

    PERSONALLY, not a fan, I think it’s a dangerous abrogation of personal responsibility… BUT…

    I do think I found a legitimate creative use for it.

    There’s an AI powered app for a specific brand of guitar amplifier. If you want your guitar to sound like a particular artist or a particular song, you tell it via a natural language input and it does all the adjustments for you.

    You STILL have to have the personal talent to, you know, PLAY the guitar, but it saves you hours of fiddling with dials and figuring out what effects and pedals to apply to get the sound you’re looking for.
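
    To make the idea concrete, here's a minimal sketch of how such a natural-language-to-amp-settings feature could work: prompt an LLM to emit structured settings, then apply them. The parameter names and the local Ollama endpoint are illustrative assumptions, not the actual app:

    ```python
    # Hypothetical sketch: turn a tone request into amp settings via an LLM.
    # The parameter names and the local endpoint are illustrative assumptions.
    import json
    import urllib.request

    def tone_to_settings(request: str) -> dict:
        prompt = (
            "Return only JSON with keys gain, bass, mid, treble, reverb "
            "(each 0-10) for this guitar tone request: " + request
        )
        payload = json.dumps(
            {"model": "llama3", "prompt": prompt, "stream": False}
        ).encode()
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",  # assumes a local Ollama server
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(json.loads(resp.read())["response"])

    # e.g. {"gain": 8, "bass": 6, "mid": 4, "treble": 5, "reverb": 3}
    print(tone_to_settings("warm blues tone like early B.B. King"))
    ```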

    Video, same player, same guitar, same amp, multiple sounds:

    https://youtube.com/shorts/wsGj4zsfOuQ

      • jordanlund@lemmy.world · 6 hours ago

        That’s what I thought, it allows creatives to be creative.

        Kind of like if you had an art program where you could ask, "Give me a paint palette with the colors from Starry Night."

        You still have to have the artistic talent to make use of them; it's not going to help you there. But it saves you hours of research and mixing.
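
        That particular feature doesn't even need an LLM. Here's a minimal sketch that pulls a dominant-color palette from an image with k-means clustering (assuming Pillow and scikit-learn; the image path is a placeholder):

        ```python
        # Minimal sketch: extract a 6-color palette from an image with k-means.
        # Assumes: pip install pillow scikit-learn numpy; the path is a placeholder.
        import numpy as np
        from PIL import Image
        from sklearn.cluster import KMeans

        img = Image.open("starry_night.jpg").convert("RGB").resize((200, 200))
        pixels = np.asarray(img).reshape(-1, 3)

        # Cluster the pixels into 6 groups; each cluster center is one palette color.
        kmeans = KMeans(n_clusters=6, n_init="auto", random_state=0).fit(pixels)

        for r, g, b in kmeans.cluster_centers_.astype(int).tolist():
            print(f"#{r:02x}{g:02x}{b:02x}")
        ```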

  • IngeniousRocks (They/She) @lemmy.dbzer0.com · 7 hours ago (edited)

    I think ML tools are neat. I think LLMs are a neat tool when used intentionally and not as the "think for me" button. I'm really upset about the PC parts market right now thanks to AI, though, right as I'm getting into selfhosting, smdh.

    I think multimedia generation tools are dubiously ethical. With our current energy generation structure and the ongoing political turmoil, these tools are dangerous when used irresponsibly.

    AGI is a crock of malarkey, and Markov chains are never gonna get us there.

    I should add some of the tools I use. I host Nextcloud, and through Nextcloud I host Talk. I have all my conversations in Talk fed into a Whisper model and an LLM, which transcribe the calls to text and summarize them. I also run facial recognition in Immich, which is also AI. And I host a Qwen coder model to digest docs into simple English and generate reference snippets for coding.
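
    For anyone curious, a minimal sketch of that transcribe-then-summarize pipeline might look like the following. It assumes the openai-whisper package and a local Ollama server for the summary step; the file name is a placeholder, and the Nextcloud Talk plumbing is left out:

    ```python
    # Minimal sketch: transcribe a call with Whisper, then summarize with a local LLM.
    # Assumes: pip install openai-whisper; an Ollama server on localhost.
    # "call.wav" is a placeholder; the Nextcloud Talk integration is omitted.
    import json
    import urllib.request

    import whisper

    # 1. Speech-to-text with a small Whisper model.
    model = whisper.load_model("base")
    transcript = model.transcribe("call.wav")["text"]

    # 2. Summarize the transcript with the locally hosted LLM.
    payload = json.dumps({
        "model": "llama3",  # stand-in; any locally hosted model works
        "prompt": "Summarize this call in a few bullet points:\n" + transcript,
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
    ```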

    AI is cool, and we should keep using it, but it's not efficient enough yet, and big AI server farms are killing the planet.

  • Destide@feddit.uk · 8 hours ago

    It's a tool, similar to spell check or search engines. It's currently not worth the environmental impact, IMO, but time has shown that this is a temporary issue, should we actually address it. Spoiler: we probably won't.

    I think it'll follow the dot-com model: we'll see a crash before a decent surge and plateau.

  • tal@lemmy.today · 7 hours ago (edited)

    Very bullish long term. I think I can say with pretty good confidence that it's possible to achieve human-level AI, and that doing so would be quite valuable. I think this will very likely be transformational, on the order of the economic and social change that occurred when we moved from the primary sector of the economy being most of what society did to the secondary sector, or from the secondary sector to the tertiary sector. Each of those shifts changed the fundamental "limiting factor" on production and produced great change in human society.

    Hard to estimate which companies or efforts might do well, and the near term is a lot less certain.

    In the past, we've developed useful and successful technologies with machine learning that we now use every day. Think of optical character recognition (OCR) or the speech recognition that powers computer phone systems. But they've often taken some time to polish (some here may remember "egg freckles").

    There are some companies promising the stars on time and with their particular product, but that’s true of every technology.

    I don't think that we're going to directly get an advanced AI by scaling up or tweaking LLMs, though maybe such a thing could internally make use of LLMs. The thing that made neural nets take off in the past few years and suddenly have a lot of interesting applications wasn't really fundamental research breakthroughs on the software side. It was scaling up, in hardware, what we'd already done in the past.

    I think that generative AI can produce things of real value now, and people will, no doubt, continue R&D on ways to do interesting things with it. I think that the real impact here is not so much technically interesting as it is economic. We got a lot of applications in a short period of time and we are putting the infrastructure in place now to use more-advanced systems in place of them.

    I generally think that the output of pure LLMs or diffusion models is more interesting for producing human-consumed output like images. We're tolerant of a lot of errors there; our brains just need to be cued with approximately the right thing. I'm more skeptical about using LLMs to author computer software. I think the real problems there are going to need AGI, and a deeper understanding of the world and of the thinking process, to automate reasonably. I understand why people want to automate it now - software that can code better software might be a powerful positive feedback loop - but I'm dubious that it's going to be a massive win there, not without more R&D producing more sophisticated forms of AI.

    On “limited AI”, I’m interested to see what will happen with models that can translate to and work with 3D models of the world rather than 2D. I think that that might open a lot of doors, and I don’t think that the technical hump to getting there is likely all that large.

    I think that generative AI speech synthesis is really neat - the quality relative to the level of effort needed to do a voice is already quite good. One thing we're going to need is some kind of annotated markup that includes things like emotional inflection, accent, etc., but we don't have a massive existing training corpus of that the way we do for plain text.

    Some of the big questions I have on generative AI:

    • Will we be able to do sparser, MoE-oriented models that have few interconnections among the experts? If so, that might radically change what hardware is required: instead of needing highly specialized AI-oriented hardware from Nvidia, maybe a set of smaller GPUs would work. (See the toy routing sketch after this list.)

    • Can we radically improve training time? Right now, the models that people use are trained with a lot of time spent running compute-expensive backpropagation, and we get a "snapshot" of the result that doesn't really change. The human brain is in part a neural net, but it is much better at learning new things at low computational cost. Can we radically improve here? My guess is yes.

    • Can we radically improve inference efficiency? My guess is yes, that we probably have very, very inefficient use of computational capacity today relative to a human. Nvidia hardware runs at a gigahertz clock, the human brain at about 90 Hz.

    • Can we radically improve inference efficiency by using functions in the neural net other than a sum-of-products, which I believe is what current hardware is using? CPU-based neural nets used to tend to use a sigmoid activation function. I don't know if the GPU-based ones of today do so; I haven't read up on the details. If not, I assume that they will. But the point is that introducing it was a win for neural net efficiency: having access to that function reduces how many neurons are required to reasonably model a lot of things we'd like to do, like approximating a Boolean function. Maybe we can use a number of different functions and tie those to neurons in the neural net, rather than having to approximate all of them via the same function. For example, a computer already has silicon to do integer arithmetic efficiently. Can we provide direct access to that hardware and, using general techniques, train a neural net to incorporate it where doing so is efficient? Learn to use the arithmetic unit to, say, solve arithmetic problems like "What is 1+1?" Or, more interestingly, do so for all other problems that make use of arithmetic?
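
    On that first question, here's a toy sketch of top-k expert routing, the core trick of a Mixture-of-Experts layer, in plain NumPy. All the sizes, the sigmoid experts, and the random initialization are arbitrary choices for illustration:

    ```python
    # Toy Mixture-of-Experts layer: a router picks the top-k experts per input,
    # so most expert weights are never touched for any given input.
    # All sizes, the sigmoid experts, and the initialization are arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    d, n_experts, k = 16, 8, 2

    router = rng.normal(size=(d, n_experts))      # routing weights
    experts = rng.normal(size=(n_experts, d, d))  # one weight matrix per expert

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def moe_forward(x):
        scores = x @ router                    # one routing score per expert
        top = np.argsort(scores)[-k:]          # indices of the k best experts
        gates = np.exp(scores[top])
        gates /= gates.sum()                   # softmax over just the top-k
        # Only k of the n_experts weight matrices are evaluated for this input.
        return sum(g * sigmoid(x @ experts[i]) for g, i in zip(gates, top))

    x = rng.normal(size=d)
    print(moe_forward(x).shape)  # (16,) - same width out, sparse compute inside
    ```

    The point of the sketch is the sparsity: only k of the n_experts weight matrices are touched per input, which is what makes splitting the experts across smaller, cheaper devices plausible.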