I know this is unpopular as hell, but I believe LLMs have the potential to do more good than bad for learning, as long as you don’t use them for critical things. So no health-related questions, and nothing where being wrong is totally unacceptable.

The ability to learn about most subjects in a really short time from a “private tutor” makes it an effective but flawed tool.

Let’s say it gets historical facts wrong 10% of the time. Is the world better off if people learn a lot more, even with some errors here and there? Most people seem to know almost no history at all.

Currently, people know very little about critical topics that are important to a society. This ignorance is politically and societally very damaging, maybe a lot more damaging than a source that is 10% wrong. If you ask it about social issues, the answers and views are more empathetic than in mainstream political discourse. “Criminals are criminals for societal reasons”, “Human rights are important”, etc.

Yes, I know the truth can be manipulated, so it has to be neutral, which some LLMs probably aren’t or won’t be.

Am I totally crazy for thinking this?

  • zz31da@piefed.social · 3 days ago
    Generally I agree that it can be an incredible tool for learning, but a big problem is one needs a baseline ability to think critically, or to understand when new information may be flawed. That often means having at least a little bit of existing knowledge about a particular subject. For younger people with less education and life experience, that can be really difficult if not impossible.

    The 10% of information that’s incorrect could be really critical or contextually important. Also (anecdotally) it’s often way more than 10%, or that 10% is distributed such that 9 out of 10 prompts are flawless, and the 10th is 100% nonsense.

    And then you have people out there creating AI chat bots with the sole intention of spreading disinformation, or more commonly, with the intention of keeping people engaged or even emotionally dependent on their service — information accuracy often isn’t the priority.

    The rate at which AI-generated content is populating the internet is increasing exponentially, and that’s where most LLM training data comes from currently, so it’s hard to see how the accuracy problem improves going forward.

    All that said, like most things, when AI is used in moderation by responsible people, it’s a fantastic tool. Unfortunately, the people in charge are incentivized to be unscrupulous and irresponsible, and we live in a decadent society that doesn’t exactly promote moderation, to massively understate things…

    (yeah, I used an em-dash, you wanna fight bro? 😘)

    • ComradePenguin@lemmy.ml (OP) · 3 days ago
      Good point. As an adult who grew up long before LLMs and social media, I feel it’s an incredible tool; I just don’t trust it fully. Critical thinking and fact-checking are a reflex at this point, although I must admit I don’t always fact-check unless something seems shocking or unexpected to me. The accuracy problem is something I doubt they can fix short term.