Deliverer of ideas for a living. Believer in internet autonomy, dignity. I maintain instances of FOSS platforms like this for the masses. Previously on Twitter under the same handle. I do software things, but also I don’t.

  • 0 Posts
  • 157 Comments
Joined 2 years ago
Cake day: June 5th, 2023

  • Although this has been heavily downvoted, the author has a point: what do private, safe AI experiences in software mean for the common browser user? How does a company that was founded as an ‘alternative’ to a crummy default browser take the same approach? For those who do and will use the tech indiscriminately, what’s next for them?

    Just as cookie/site separation eventually became a default setting in FF, or the ability to force a more secure private DNS, what could Mozilla consider on its own to prevent abuse, slop, LLM sycophancy / deception, undesired user data training, tracking, and more? All that stuff we know is bad, but that nobody seems to be addressing all that well. These big AI companies certainly don’t seem to be.

    Rather than advocate for ‘Not AI’, how do we address it better for those who’ll simply hit up one of these big AI company websites like they would social media or Amazon?

    Is it anonymous tokenization systems that prevent a big AI company from knowing who a user is, a kind of ‘privacy pass’? Is it text re-obfuscation at the browser level that garbles user input so that patterns can’t emerge? Is it even a straightforward warning to users about data hygiene? (A toy sketch of that middle idea follows at the end of this comment.)

    The above is silly, and speculative, and mostly for conversation. But: maybe there’s something here for your everyday browser user. And maybe we ought to consider how we help them.
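
    Purely to make that middle idea concrete, here’s a minimal sketch, assuming a hypothetical client-side helper (scrubPrompt, REDACTIONS, and everything else here are made-up names, not any real Firefox or Mozilla API) that redacts obvious identifiers before a prompt ever leaves the browser:

    ```typescript
    // Toy sketch only: browser-level "data hygiene" that redacts obvious
    // identifiers locally, before a prompt is handed to any AI endpoint.
    // All names and patterns here are hypothetical, not a real Firefox API.

    const REDACTIONS: Array<[RegExp, string]> = [
      [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[email]"],   // email addresses
      [/\+?\d[\d\s().-]{7,}\d/g, "[phone]"],     // phone-number-like runs
      [/\b\d{1,3}(?:\.\d{1,3}){3}\b/g, "[ip]"],  // IPv4 addresses
    ];

    // Replace each matched pattern with a neutral label and return the result.
    function scrubPrompt(input: string): string {
      return REDACTIONS.reduce(
        (text, [pattern, label]) => text.replace(pattern, label),
        input,
      );
    }

    // Only the sanitized text would ever be forwarded upstream.
    const userPrompt = "Email me at jane.doe@example.com or call +1 555 010 0199";
    console.log(scrubPrompt(userPrompt));
    // -> "Email me at [email] or call [phone]"
    ```

    That obviously does nothing about pattern-level profiling of what a user writes; it’s just the most naive end of the spectrum that line of thinking sits on.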