• artyom@piefed.social
    6 hours ago

    This isn’t speculative, it’s real and running, and it doesn’t pose a lot of the ethical dilemmas other AI applications face. Here’s why I think this matters: The consumer doesn’t have to do anything beyond pressing a button to use it.

    1. Whose data is it trained on? Seems like an ethical dilemma to me.

    2. This is even worse than a web-based LLM: people will be even less likely to fact-check the often-incorrect information it feeds them.

    3. Using it will not be the complicated part. Setting it up will be.

    • manualoverride@lemmy.world
      24 minutes ago
      1. Whose data is it trained on? Seems like an ethical dilemma to me.

      Using a standalone LLM for personal use doesn’t seem like an ethical dilemma to me. It’s already been trained on the data, and if the data was accessible on the web or via a library, then I don’t see the harm.

      Getting small amounts of medium-trust information on a subject is a good way to get someone interested enough to read a book, watch a YouTube video, or find a website for more information and validate the AI response.

      • artyom@piefed.social
        11 minutes ago

        Using a standalone LLM for personal use doesn’t seem like an ethical dilemma to me

        What is the ethical dilemma, exactly, and why/how is this different?

        Getting small amounts of medium-trust information on a subject is a good way to get someone interested enough to read a book, watch a YouTube video, or find a website for more information and validate the AI response.

        Again, how is this different? At least the web-based ones actually link to where the info came from…