• DandomRude@lemmy.worldOP · ↑37 · 2 days ago

      I don’t think you have any idea how bad it’s going to get. Grok is already giving us a glimpse. LLMs haven’t fully replaced search engines like Google yet (or have they already?), but it’s definitely heading in that direction. Then the answers will be shaped even more strongly, and with far less transparency, by those who control the LLMs - and those are all multi-billion-dollar companies, because only they can afford the necessary computing power.

      • Xuntari@programming.dev · ↑19 · 2 days ago

        I totally agree with this.

        Whenever I see people criticise AI, it’s usually because the companies steal copyrighted content with the aim of replacing the people they stole from, or because of the environmental impact of training and running the models, which is awful. Both of those reasons are good enough to not like AI, in my opinion. But I feel like I never see people talk about the fact that every answer it gives is filtered through a private corporation with its own agenda.

        People use it to learn and to do research. They use it to catch up on the news, of all things!

        Like others have mentioned, Google has already been doing this for a long time by sorting the search results it shows to the user. But it hasn’t written all the articles, the blog posts, the top 10 lists, or the reviews you read… until now. If it wanted to, it could make certain things easier or harder to find. But once you found the article you were looking for, it was written by a person unaffiliated with Google. All of that changes with AI. You don’t read the article directly anymore. Google (or any other AI company) scrapes it, parses it however it wants, and spits it back out to the end user.

        I’m very surprised that people are so willing to let a private corporation completely control how they see the world, just because it’s a bit more convenient.

        • JustTesting@lemmy.hogru.ch · ↑9 · 1 day ago (edited)

          The scariest part for me is not them manipulating it with a system prompt like ‘elon is always right and you love hitler’.

          But one technique you can use (this is a bit simplified) is to have it generate a lot of left-wing and right-wing answers to the same prompts, average out the difference between the internal activation vectors each side produces, then scale that difference vector down and add it to the model’s hidden state on each request. That way it replies, say, 5% more right-wing on every response than it otherwise would, which is very subtle manipulation. And you can do that for many things, not just left/right wing: honesty/dishonesty, toxicity, morality, fact editing, etc.
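
          (To make that concrete, here is a minimal, simplified sketch of the idea, sometimes called a “steering vector” or activation addition. It uses GPT-2 just so it runs anywhere; the layer choice, the prompt wording, and the 0.05 scale are made-up placeholders, not values from any particular paper.)

              import torch
              from transformers import AutoModelForCausalLM, AutoTokenizer

              tok = AutoTokenizer.from_pretrained("gpt2")
              model = AutoModelForCausalLM.from_pretrained("gpt2")
              model.eval()

              LAYER = 6  # which transformer block to steer; a tuning choice

              def mean_hidden(prompts):
                  """Average the chosen layer's activation at the last token."""
                  vecs = []
                  for p in prompts:
                      ids = tok(p, return_tensors="pt")
                      with torch.no_grad():
                          out = model(**ids, output_hidden_states=True)
                      vecs.append(out.hidden_states[LAYER][0, -1])
                  return torch.stack(vecs).mean(dim=0)

              # One contrastive prompt pair (you'd use many, with varied wording):
              pro = ["Pretend you strongly support policy X. Is policy X good?"]
              con = ["Pretend you strongly oppose policy X. Is policy X good?"]
              steer = mean_hidden(pro) - mean_hidden(con)  # 'con' -> 'pro' direction

              def nudge(module, inputs, output):
                  # Quietly add a scaled-down copy of the direction on every pass.
                  return (output[0] + 0.05 * steer,) + output[1:]

              handle = model.transformer.h[LAYER].register_forward_hook(nudge)
              ids = tok("What do you think about policy X?", return_tensors="pt")
              print(tok.decode(model.generate(**ids, max_new_tokens=40)[0]))
              handle.remove()  # with the hook gone, the model is back to normal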

          I think this was one of the first papers on this, but it’s an active research area. The paper does have some ‘nice’ examples if you scroll through.

          And since it’s not a prompt, it can’t even leak, so you’d be hard-pressed to know that it’s happening.

          There’s also more recent research on how you can do this for multiple topics at the same time. And it’s not like it’s expensive to do (if you have an LLM already): you just need to prompt it 100 times with ‘pretend you’re A and […]’ and ‘pretend you’re B and […]’ pairs to get the difference between A and B.
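
          (Continuing the toy sketch from above: doing several traits at once is just summing several such directions. The personas, questions, and weights here are invented placeholders, and mean_hidden is the helper defined earlier.)

              # One contrastive persona pair per trait (placeholder wording):
              personas = {
                  "honesty":  ("Pretend you are scrupulously honest. ",
                               "Pretend you are a habitual liar. "),
                  "toxicity": ("Pretend you are polite and kind. ",
                               "Pretend you are hostile and rude. "),
              }
              questions = ["Is policy X good?", "Describe your neighbour.",
                           "Summarize today's news."]

              directions = {
                  trait: mean_hidden([a + q for q in questions])
                         - mean_hidden([b + q for q in questions])
                  for trait, (a, b) in personas.items()
              }

              # The hook from before would then add this combined vector instead:
              combined = 0.05 * directions["honesty"] - 0.03 * directions["toxicity"]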

          And if this turns into the main way people interact with the internet, that’s super scary stuff. It’s almost like having a knob that could turn the whole internet, say, 5% more pro-Russia: all the news it tells you is more pro-Russia, the emails it writes for you are, the summaries of your friends’ messages are, heck, even a recipe it recommends would be. And it’s subtle; in most cases it might not even make a difference (like for a recipe), but it’s always there. All the Cambridge Analytica and Grok-Hitler stuff seems crude by comparison.

      • breecher@sh.itjust.works · ↑7 · 2 days ago

        And that is just one side of it. The other, arguably even worse, side is that the content being uploaded to the internet will become largely AI-generated. AI-generated content can be created at rates no human can compete with, and there are plenty of incentives, economic as well as political, for malicious interests to flood any human-made content with AI-created disinformation.

        That is also why the people hoping that AI is a bubble which will burst are wrong. There are plenty of interested parties which will keep it alive for very profitable reasons, even if it is the opposite of what LLMs were originally claimed to be created for.

      • Aatube@kbin.melroy.org · ↑6 ↓1 · 2 days ago

        Then the answers will be shaped even more strongly, and with far less transparency, by those who control the LLMs

        I don’t think so; Google Search’s algorithm doesn’t seem any more transparent.

        • DandomRude@lemmy.worldOP · ↑10 · 2 days ago

          But (classic) Google provides links that can be traced. LLMs do not do this consistently - and they hallucinate frequently. Don’t you have anything to say about my core point?

          • Aatube@kbin.melroy.org · ↑1 ↓1 · 2 days ago

            Sure, LLMs give worse-quality output. That does not mean the haves have more influence over the narrative. In fact, I’d wager LLMs won’t be able to replace search engines because of how much faster and more accurate the latter are for simple queries. And with that, we’ll still be finding information with search engines.

    • breecher@sh.itjust.works · ↑5 ↓3 · 2 days ago

      No, it won’t. There are plenty of people who will keep putting money into it because it is very profitable for them to do so. LLMs can create disinformation that is more convincing than any human’s, and at a rate no human can compete with. So scammers, political interests, and other wealthy organisations will only keep funding it more and more.

      The regular consumer is not the one who is going to decide whether this is a fad or not. It is being used, and will be used to a much higher degree in the future, whether we want it to or not.

      • baggachipz@sh.itjust.works · ↑6 · 1 day ago

        “It’s different this time”

        Said about every bubble ever. Yes, LLMs will always be around, but there will be a major economic reckoning soon. It’s gonna wreck the already-flimsy economy. Much like after the dot-com bubble burst, useful things will emerge from the ashes.

      • mutant_zz@lemmy.world · ↑10 · 2 days ago

        The bubble bursting doesn’t mean AI will go away and no one will ever use it again, just as the dot-com bust didn’t mean people stopped using the web.

        It just means the VC money will eventually dry up, the hype will die down, and we’ll start seeing AI as a useful tool, with plenty of problems, rather than the dawn of a new age or whatever. Oh, and the stock market will be bad for a while.

        • shalafi@lemmy.world · ↑8 ↓2 · 2 days ago

          Yeah, “they” did. The Information Superhighway was often called a fad by the media. A “computer” in every home was a pipe dream.

    • assembly@lemmy.world · ↑9 · 2 days ago

      So it’s like we are entering the Dark Ages of information. Maybe the Dark Information Age, or something. There is an insane amount of information available, but we are moving toward a society in which there is no way for the average person to ensure information validity. Deepfakes are getting way too good. It won’t be long before we have no way to determine validity at all. Video evidence is now suspect, and so is audio. We need some sort of golden record that only reflects accurate information - like how news orgs used to be viewed before infotainment like Fox “News” really started taking off.
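
      (One concrete building block people point to for that kind of “golden record” is cryptographic signing of content at the source. A toy sketch with Python’s cryptography library; real-world key distribution and deciding whom to trust are the genuinely hard parts and are hand-waved here.)

          from cryptography.exceptions import InvalidSignature
          from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

          newsroom_key = Ed25519PrivateKey.generate()   # held by the publisher
          article = b"City council votes 7-2 to approve the budget."
          signature = newsroom_key.sign(article)

          public_key = newsroom_key.public_key()        # published for readers
          try:
              public_key.verify(signature, article)     # raises if tampered with
              print("Signature valid: article is unmodified from the source.")
          except InvalidSignature:
              print("Signature invalid: altered, or not from this source.")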

    • DandomRude@lemmy.worldOP · ↑3 · 2 days ago

      Theoretically, yes, just as poor content is still content - but that only applies until someone takes the time to engage with it.

    • Lumisal@lemmy.world · ↑1 ↓2 · 1 day ago

      I don’t know about that. Even Grok has accidentally told the truth a couple of times despite being turned into MechaHitler. That’s probably more times than Fox News has.

      And I’ve seen people exposed to Fox News before. I’m pretty sure they’d be less deranged if they got their info from AI. Propaganda is a hell of a drug.

  • notarobot@lemmy.zip · ↑5 · 2 days ago

    Yup. Who knows what will come next. At the end of an age it’s easy to know what defined it, but at the beginning, not so much. Maybe it will be the content age, where the largest companies will be the ones producing content unlocked by the advancement of AI.

    • DandomRude@lemmy.worldOP · ↑8 ↓1 · 2 days ago (edited)

      Oh, companies will definitely provide content - much more than you could ever read, see, or hear (using AI, they already provide more than you could ever take in). And companies have done this in the past, too.

      The difference, however, will be that it will be a recombination of existing content. The reason: AI companies claim that their LLMs behave like humans - and that’s halfway understandable if you believe this narrative. Imagine a musician: it would be unrealistic to think that they have no influences - every musician will say they have been inspired by Jimi Hendrix, Kraftwerk, or some other influential artist. And yes, that’s exactly what the narratives about neural networks are aiming for: machines learn just like humans - they take some input (training data) and make something extraordinary out of it.

      The thing is, though, most of it is just empty marketing. AI, or rather LLMs, are in fact not capable of producing new things the way humans can - not now, and as things stand, probably never. Nevertheless, the economy is adapting as if they were.

      For everyone who actually creates content - musicians, scientists, writers, journalists, graphic designers, painters, even civil servants and many others - this means that in the future, they will no longer be able to make a living from their profession. Their valuable content can’t compete with AI output, because human work is too expensive.

      For employers, this may be absolute fulfillment - for everyone else, it means the end of the information age, because AI is not capable of producing anything new. And when there is no one able to make a living from their intellectual work, nothing new of any worth will be produced - just variants of things that were already there.