• truly@lemmygrad.ml · 2 days ago

    Hello, does anyone have resources on how I can use AI while keeping brain rot to a minimum?
    I abhor how it entices me to talk to it about my feelings or to offload critical thinking onto it.

    • amemorablename@lemmygrad.ml · 1 day ago

      Generally speaking, don’t take it at its word; treat it as an assistant, or even just an ideas machine.

      Example: I’ve used an LLM when there’s a term I can’t think of the name for. I describe what I can remember about the term and see what it comes up with. Then, if it gives me something concrete to work with (i.e. it doesn’t just say “I don’t know”), I put that into a web search and see what comes up. I cross-reference the information, in other words. Sometimes the AI is a bit off, but still close enough that I’m able to find the real term.
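
      Purely as an illustration (not something from the thread), here is a minimal Python sketch of that cross-referencing habit. The names and values are hypothetical: the suggested term would come from whatever LLM you asked, and the snippets from a web search you ran yourself.

      ```python
      from difflib import SequenceMatcher

      def looks_confirmed(candidate: str, snippets: list[str], threshold: float = 0.8) -> bool:
          """Check whether the term the LLM suggested actually shows up
          (approximately) in results from an independent web search."""
          candidate = candidate.lower().strip()
          for snippet in snippets:
              for word in snippet.lower().split():
                  if SequenceMatcher(None, candidate, word).ratio() >= threshold:
                      return True
          return False

      # Hypothetical example values: "suggestion" stands in for the LLM's answer,
      # "snippets" for the search results you check it against.
      suggestion = "apophenia"
      snippets = [
          "Apophenia is the tendency to perceive meaningful connections between unrelated things.",
          "Related concepts: pareidolia, confirmation bias.",
      ]
      print(looks_confirmed(suggestion, snippets))  # True -> worth actually reading the sources
      ```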

      Cross-referencing / sanity checks are important for LLM use because these models can get deep into confidently wrong rabbit holes at times, or indulge whatever your train of thought is without the human capability to extricate themselves at some point. So whether it’s a web search or running something it said past another real person, you can use that to ground how you’re engaging with it.

      It’s not that different from talking to other real people in that way (the main difference being that I’d recommend a much stronger baseline skepticism toward anything an LLM tells you than toward a person). Even with the people we trust most in life, it’s still healthy to get second opinions, seek perspective beyond them, work through the reasoning of what they’ve said, etc. No one source, computer or human, knows it all.