The example videos are both impressive (insofar as they exist at all) and dreadful. Two-legged horses everywhere, lots of random half-human-half-horse hybrids, walls changing materials constantly, etc.
It really feels like all this does is generate 60 DALL-E images per second and little else.
Given the limitations visual AI tends to have, this is still better than anything I've seen. Objects and subjects seem pretty stable from frame to frame, even if those objects are quite nightmarish.
This would work very well with a text adventure game, though. A lot of them are already set in fantasy worlds with cosmic horrors everywhere, so this would fit well for animating what's happening in the game.
I mean, it took a couple months for AI to mostly figure out that hand situation. Video is, I’d assume, a different beast, but I can’t imagine it won’t improve almost as fast.
After seeing the horrific stuff my demented friends have made DALL-E barf out, I'm excited and afraid at the same time.
I think "Will Smith eating spaghetti" was only about a year ago.