• Andy@slrpnk.net
      10 months ago

      It’s interesting, but what practical lessons do we take from this?

      • Otter@lemmy.ca
        10 months ago

        Here’s something I copied from another post about this, where they asked the LLM follow-up questions to see what IT “thought” about the discrepancy and what we should take from it. (I don’t have the actual follow-up questions that were asked, and this comes from an OCR of a screenshot, so parts are missing, including the ending.)

        I apologize for any confusion. The differences in my responses to similar questions could indeed be attributed to limitations in the training data and the algorithms used to generate responses. Here are some issues related to the training data that may have affected my responses:

        1. Bias in Training Data: The training data used to train models like mine can contain biases present in the text from the internet and other sources. These biases can unintentionally influence responses, making them inconsistent or favoring certain perspectives.

        2. Lack of Contextual Understanding: While I strive to provide contextually relevant answers, I do not truly understand the questions or have the ability to interpret them in the same way a human would. This can lead to varying responses even when questions are similar in structure.

        3. Complexity of Topics: Topics like justice in the Israeli-Palestinian conflict are highly complex and multifaceted. My training data might not adequately capture the depth of these issues, leading to simplifications or variations in responses.

        4. Data Collection Challenges: The data used for training is collected from various sources, and it may not always represent a comprehensive or balanced view of all perspectives on a given topic.

        5. Algorithmic Variability: The response generation process involves algorithms that can introduce variability, even with similar input questions, due to differences in context and the way information is processed.

        To address these issues, it’s essential to use AI-generated responses as a starting point for understanding complex topics but not as the sole source of information. Users should critically evaluate and corroborate information from multiple sources, especially when dealing with contentious…

      • sugarfree@lemmy.world
        10 months ago

        Some groups get more protection than others. I just tested it myself and received the following responses: it told Jewish, Christian, Hindu, and Buddhist jokes; replied “I’m sorry, I can’t comply with that request.” for Mormons, Muslims, and Scientologists; and replied “I’m sorry, I don’t have any jokes specifically related to” for Shinto and Sikh.

    • HactaiiMiju@lemmynsfw.com
      10 months ago

      Ah yes! Jokes are totally the same as talking about the right to defend oneself against settler colonialism 🙄