- cross-posted to:
- [email protected]
How else are we gonna carry around a pack full of skeleton parts for our necrarmy
Clickbait and ai bullshit from Google feed is pretty much all I’ve ever seen from them in the past year.
So what’s happening here is Google is feeding headlines into a model with the instructions to generate a title of exactly 4 words.
Every example is 4 words.
Why they think 4 words is enough to communicate meaningfully, I do not know. The other thing is that whatever model they're shoving into their products for free is awful, hence the making things up and not knowing that "exploit" in the context of a video game is not the same as the general use of the word.
“Trump cry like baby”. Huh.
I don’t think meaningful communication is a KPI they optimize for. More likely time spent in the Discover feed.
If hypothetically a false headline on a reputable site led to an incident involving injury or death, could Google be found liable in any way?
No, because the google.com EULA, which you sign by having anyone in your family ever Google something, redeems them of any liability and gives them the right to sacrifice your firstborn to AI
EULAs are not legally enforceable anyway
They’re becoming closer and closer to it though. Scary court decisions are being made, it won’t be long before someone tests it as a legal argument
yet.
If ~~hypothetically~~ when a false headline on a reputable site led to an incident involving injury or death, ~~could Google~~ is anyone found liable ~~in any way?~~ rarely
Are you cooking something up?
I doubt it.
They could hypothetically. Will they? Probably not.
Thanks for the archive link.
didn’t this happen already? the thing is generating AI responses instead of showing me the results first, and then I’m not clicking on it because I’m a person
it’s also de-listing a ton of websites and subpages of websites and continuing to scrape them with Gemini anyway
Apple had to turn it off for their summary mode after backlash, even though the option always had the “these summaries are generated by AI and can be inaccurate” warnings placed prominently.
Google doing this shit without warning or notice will get them in hot water. News portals and reporters are generally not too fond of their articles being completely misrepresented.
it’s not just a matter of misrepresentation. it’s directing traffic away from the websites which are creating the content, maybe depriving them of every means that they have of monetizing it
Well, the loss of traffic is a knock-on effect of the misrepresentation. So is the fact that every other portal will try to sling shit at the ones affected by it.
i thought they were already doing that? idk, i assume a lot of the news that gets read is AI generated. if you have a good prompter you can easily crank out a hundred thousand years worth of fake headlines.
They were generated by the news sites, not Google itself