• chirospasm@lemmy.ml · 2 hours ago

    Although this has been heavily downvoted, the author has a point: what do private, safe AI experiences in software mean for the common browser user? How does a company that was founded as an ‘alternative’ to a crummy default browser take the same approach? And for those who do and will use the tech indiscriminately, what’s next for them?

    Just as cookie/site separation eventually became a default setting in FF, or the ability to force more secure private DNS, what could Mozilla consider on its own to prevent abuse, slop, LLM sycophancy / deception, undesired training on user data, tracking, and more? All the stuff we know is bad, but that nobody seems to be addressing all that well. These big AI companies certainly don’t seem to be.

    Rather than advocate for Not AI, how do we address it better for those who’ll simply hit up one of these big AI company websites like they would social media or Amazon?

    Is it an anonymous tokenization system that prevents a big AI company from knowing who a user is, a kind of ‘privacy pass?’ Is it text re-obfuscation at the browser level that garbles user input so that patterns can’t emerge? Is it even a straightforward warning to users about data hygiene?
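
    To make the ‘privacy pass’ idea slightly less hand-wavy, here’s a toy sketch in Python. Real Privacy Pass uses blind signatures so even the issuer can’t link a token it issued to the request that redeems it; this simplified version skips the cryptography and only shows the single-use, account-free flow (the issuer class and token sizes are made up for illustration):

```python
import secrets
import hashlib

class TokenIssuer:
    """Issues anonymous single-use tokens and verifies them on redemption."""
    def __init__(self):
        self.valid = set()   # hashes of unspent tokens
        self.spent = set()   # guards against double-spending

    def issue_batch(self, n: int = 10) -> list[str]:
        tokens = [secrets.token_hex(16) for _ in range(n)]
        for t in tokens:
            self.valid.add(hashlib.sha256(t.encode()).hexdigest())
        return tokens

    def redeem(self, token: str) -> bool:
        h = hashlib.sha256(token.encode()).hexdigest()
        if h in self.valid and h not in self.spent:
            self.spent.add(h)
            return True   # request is authorized, but not tied to an account
        return False

issuer = TokenIssuer()
wallet = issuer.issue_batch()          # e.g. fetched once at browser startup
print(issuer.redeem(wallet.pop()))     # True: one token spent anonymously
print(issuer.redeem("forged-token"))   # False: unknown token rejected
```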

    The above is silly, and speculative, and mostly for conversation. But: maybe there’s something here for your everyday browser user. And maybe we ought to consider how we help them.

    • Manjushri@piefed.social · 9 hours ago

      Because AI is a massive waste of resources that has yet to prove (to me at least) that it can provide any real benefit to humanity that couldn’t be better provided by other, less resource-intensive means. Advocating for ‘common’ AI use is absurd given the amount of energy and other resources that usage consumes, especially in the face of a looming climate crisis being exacerbated by excesses like this.

      LLMs may have valid uses (I doubt it, but they may). Using them to make memes and generate answers of questionable veracity to questions that would be better resolved with a Google search is just dumb.

      • Routhinator@startrek.website · 8 hours ago

        This. It burns too much electricity, wastes too much water, and is wrong 70% of the time. Even if it’s private and offline, the problems with it go waaaaay beyond that.

    • SmokeyDope@piefed.social · 6 hours ago

      Hi, hope you don’t mind me giving my two cents.

      Local models are at their most useful in daily life when they scrape data from a reliable, factual database or from the internet and then present/discuss that data with you through natural-language conversation. A rough sketch of that pattern is below.
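
      For example: pull an article from a local kiwix server and hand it to a local model as context, instead of letting the model answer from memory. The kiwix content path and the llama.cpp-style /completion endpoint are assumptions about a typical homelab setup; adjust for yours.

```python
import json
import urllib.request

KIWIX = "http://localhost:8080"           # local kiwix-serve (assumed port)
LLM = "http://localhost:8081/completion"  # local llama.cpp server (assumed)

def fetch_article(path: str) -> str:
    """Grab raw article text/HTML from the local kiwix server."""
    with urllib.request.urlopen(f"{KIWIX}{path}") as resp:
        return resp.read().decode("utf-8", errors="replace")

def ask_with_context(question: str, context: str) -> str:
    """Force the model to answer from the fetched source, not from memory."""
    prompt = (f"Answer using only this source:\n{context[:4000]}\n\n"
              f"Question: {question}\nAnswer:")
    req = urllib.request.Request(
        LLM,
        data=json.dumps({"prompt": prompt, "n_predict": 256}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]

# The content path below is illustrative; it depends on which ZIM file you serve.
article = fetch_article("/content/wikipedia/A/Mozilla")
print(ask_with_context("When was Mozilla founded?", article))
```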

      Think about searching for things on the internet nowadays. Every search provider stuffs ads into the top results and intentionally obfuscates the links you’re looking for, especially if it’s a no-no term like pirate torrent sites.

      Local LLMs can act as an advanced, generalized RSS reader that automatically fetches articles and sources; sends STEM queries to the Wolfram Alpha LLM API and retrieves answers; fetches the weather directly from the OpenWeather API; retrieves definitions and meanings from a local dictionary; pulls Wikipedia article pages from a local kiwix server; and searches arXiv directly for papers. One of Claude’s big selling points is its research-mode tool call, which scrapes hundreds of sites to collect up-to-date data on the thing you’re researching and presents its findings in a neat, structured way with cited sources. It does in minutes what would traditionally take a human hours or days of manual googling.
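
      A toy version of that routing idea might look like the following. The endpoints are the real public ones for Wolfram|Alpha, OpenWeather, and arXiv, but the API keys are placeholders, and the keyword matching is deliberately dumb (a real setup would let the model pick the tool):

```python
import urllib.parse
import urllib.request

def fetch(url: str) -> str:
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

def route(query: str) -> str:
    """Crude keyword routing; a real setup would let the model pick the tool."""
    q = urllib.parse.quote(query)
    if "weather" in query.lower():
        # OpenWeather current-weather endpoint; API_KEY is a placeholder
        return fetch("https://api.openweathermap.org/data/2.5/weather"
                     f"?q={q}&appid=API_KEY")
    if any(w in query.lower() for w in ("solve", "integrate", "derivative")):
        # Wolfram|Alpha Short Answers API; APPID is a placeholder
        return fetch(f"https://api.wolframalpha.com/v1/result?appid=APPID&i={q}")
    # Fall through to arXiv's public Atom API for paper search
    return fetch(f"http://export.arxiv.org/api/query?search_query=all:{q}&max_results=3")

# Raw tool output; a local model would rephrase/summarize this for the user.
print(route("derivative of x^2"))
```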

      There are genuine uses for LLMs if you’re the nerdy computer-homelab type who’s familiar with databases and data handling and can code up or integrate some basic API pipelines. The main challenge is selling these kinds of functions in an easy-to-understand, easy-to-use way to the tech-illiterate, who already think badly of LLMs and the like because of generative slop. A positive future for LLMs integrated into Firefox would be something trained to fetch from your favorite sources and sift out the crap based on your preferences/keywords; more sites would have APIs for direct scraping, and adding a key would be a one-click process. A sketch of the simplest form of that is below.
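
      Something like the ‘fetch from your favorite sources and sift out the crap’ idea, in its simplest possible form, before any model even sees the data (feed URL and keywords are just examples):

```python
import urllib.request
import xml.etree.ElementTree as ET

FEED = "https://blog.mozilla.org/feed/"   # example RSS feed
KEYWORDS = ("privacy", "firefox")         # example user preferences

def filtered_titles(feed_url: str, keywords: tuple[str, ...]) -> list[str]:
    """Fetch an RSS feed and keep only items matching the user's keywords."""
    with urllib.request.urlopen(feed_url) as resp:
        root = ET.fromstring(resp.read())
    titles = [item.findtext("title") or "" for item in root.iter("item")]
    return [t for t in titles if any(k in t.lower() for k in keywords)]

for title in filtered_titles(FEED, KEYWORDS):
    print(title)
```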