Lots of rambling and maybe you’ll find it unimportant, but I just gotta get this out of my head.
So cats are these adorable and quirky animals. They’ve been a mainstay on the internet since bandwidth allowed for sharing pictures and videos. I’m a certified cat person, so I’ve consumed my fair share of cat content online. I also deal a lot with stray cats IRL.
And now for some reason all these scrolly-short-video apps are littered with so many AI videos of cats. A very small minority of them are amusing due to being so unrealistic.
But the vast majority is this: AI videos of cats doing completely ordinary, realistic cat things. In a vacuum the question is obvious: why would anybody waste their energy creating fake videos of cats doing things we already have so much real footage of online? And the answer is that this trash is both a profitable business for low-effort content mills that already only repost others’ videos, and is also being intentionally promoted by the hosting platforms, who have a stake in AI slop production.
But the worst part is that while some videos are obviously fake to anybody, like this one (there’s a third white cat that phases out of existence), there are many that are obviously fake to me – a cat person who has to either watch out for their body language or take antibiotics – but not to the many people commenting on and sharing the video. The way the cats move is simply “wrong” in subtle ways that most can’t tell while scrolling. And the fact that so many people watch that, think it’s real, and move on fills me with a strange dread.
Imagine how many people who don’t have contact with cats will learn their body language wrong. Now imagine how many other way less represented animals might be completely misunderstood due to viral AI videos. Do you know how a Maned Wolf moves? Neither do I. But in the future when you search “Maned Wolf” you might find more fake videos than real ones.
And yes, this applies to way more critical stuff like political happenings, but that’s sort of the problem. A lot of people will put in the effort to debunk fake political videos. But who’s gonna bother debunking mundane videos of tamarins walking weirdly like chimps? How much of the cultural image of mundane aspects of reality is going to be affected by mindlessly generated slop? Have you ever seen a horse IRL? If not, would you trust an intuition for how horses move and communicate through body language that came from watching videos?
The internet enabled this cool thing where, if you really wanted to learn about something, you could just look it up and dig into it not only through free pirated books but also through images. Now I think I trust people’s understanding of basic shit like “why cats meow” even less if their knowledge comes from the internet. I hope every “AI” CEO gets bit by a cat.
I have kind of drawn a line at video gen so far, one I don’t draw with other forms of gen AI: I don’t really support or intentionally engage with it, and I’ve been opposed to its proliferation when it comes up for discussion. I’m open to there being edge cases where it’s useful, but in its current form, it’s hard for me to see value in it.
Problems I have with it are things like:
- It’s sorta in the position image gen was in some years ago, in terms of quality, except that image gen is one frame at a time, and even after years of research progress, image gen still hasn’t solved basic problems with consistency across generations. So why burn money (aka resources) trying to make video gen work with brute-force funding, as if doing so will magically create breakthroughs, when the fundamentals would indicate that its quality isn’t likely to improve any time soon?
- For some reason, people are fixated on it being lifelike. Maybe because of how much lifelike footage there is to train on, I don’t know, or maybe because there are motives to fake footage for political reasons, or to create lifelike ads to sell products. Whatever the reason, it makes the quality problem significantly worse. If video gen were absurd cartoon animations, a cat disappearing mid-frame wouldn’t matter so much, and the point you make about people learning realistic behaviors from jank fake footage wouldn’t matter either. Cartoons are not meant to be representative of realistic movement and behavior.
Nevertheless, as per usual with AI, the problems you outlined are more a capitalism problem than a problem with AI itself. The motive to shovel bad fake footage out of content mills exists because of the motive to make a quick buck however you can to make ends meet. And to some extent, “bad” knowledge has preceded AI, with infotainment content mills that put out stuff barely qualifying as information or advice, or the spread of misinformation online more generally. I don’t think the internet, at least in the capitalist context, has ever been very trustworthy – hence joking statements like “If it’s on the internet, it must be true” (pointing out how easy it is to run into things that aren’t). Still, the example you give is probably more insidious to come across, and it shows one of the ways that gen AI can hypercharge already-existing problems with the system.
I tend to think the western capitalist English-speaking version of the internet (I can’t speak for elsewhere) is unsustainable as is, largely because capitalism is unsustainable. So we’re going to continue to see things that were maybe a somewhat reliable experience on it worsen more over time, as capitalism worsens more in RL. Sometimes this takes the form of gen AI uses, sometimes it’s “enshittification” or other things like it.