Ok, well educate me on how AI costs more and requires more work. I'd like to be more aware.
I'm no expert and the studio didn't reveal how they did it, but I know a few things. For sure they didn't just type a bunch of shyt and go with whatever the AI spit out. There's a whole new field called "prompt engineering". It's genuinely hard to get the AI to do exactly what you want.
A couple of things I noticed: the style across the different images was pretty consistent. I wouldn't be surprised if they fine-tuned their model on the style they wanted. That requires training the model on many pieces of pre-created art in that style. I'm just guessing. I'm also pretty sure they didn't use one of the online services. The output looks like "Stable Diffusion" to me, which means they had to get a PC with decent hardware and install all this shyt themselves. A normal designer wouldn't really have that skillset, so they probably pulled in IT/developer people to do it.
Next, the images aren't static, it's video. You can see the images slightly morph and change frame to frame. That suggests they pre-designed an outline or template for the AI to follow and had it generate frame by frame. If you mess with this kind of AI yourself, you'll quickly hit serious flickering issues. They probably used some sort of temporal filter, or manually removed outlier frames where the AI spazzed.
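To give you an idea of what that outlier cleanup could look like (purely my guess at a pipeline, nothing the studio confirmed), here's a minimal sketch that flags frames whose brightness jumps way past the typical frame-to-frame change:

```python
# Hypothetical sketch: flag "flicker" frames by comparing each frame's
# mean brightness to its neighbors. In a real pipeline you'd compute
# these means from decoded frames (e.g. with OpenCV); here they're
# plain numbers so the idea is easy to follow.

def flag_outlier_frames(brightness, threshold=30.0):
    """Return indices of frames whose brightness differs from BOTH
    neighbors by more than `threshold` (a classic one-frame flicker)."""
    outliers = []
    for i in range(1, len(brightness) - 1):
        prev_diff = abs(brightness[i] - brightness[i - 1])
        next_diff = abs(brightness[i] - brightness[i + 1])
        if prev_diff > threshold and next_diff > threshold:
            outliers.append(i)
    return outliers

# A stable shot with one frame where the model spazzed:
frames = [100, 102, 101, 180, 103, 102]
print(flag_outlier_frames(frames))  # flags the spike at index 3
```

A real version would be fancier (optical flow, per-region diffs), but the point is the same: detect the frame that doesn't match its neighbors, then drop or regenerate it.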
And for sure they had to run hundreds, if not thousands, of iterations, adjusting until it came out right and then picking and choosing the best pieces. They likely wrote scripts to do it in a more automated way. It's hard and slow to build, but once the whole thing is set up they can create many more at scale. So it'd be cheap if they reused this system across a lot of videos, but they didn't: this is a one-off for one show, which makes it expensive.
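The kind of script I'm imagining is nothing fancy, just something that sweeps seeds and settings and queues up every combination so a human can pick the winners later. Totally hypothetical (`build_jobs` and the field names are made up, just a stand-in for whatever actually feeds the model):

```python
import itertools

def build_jobs(prompts, seeds, guidance_scales):
    """Build one render job per (prompt, seed, guidance) combo.
    In a real setup each job would be handed to the image model;
    here we just collect them to show the scale involved."""
    jobs = []
    for prompt, seed, cfg in itertools.product(prompts, seeds, guidance_scales):
        jobs.append({"prompt": prompt, "seed": seed, "cfg": cfg,
                     "out": f"out_seed{seed}_cfg{cfg}.png"})
    return jobs

jobs = build_jobs(["castle at dusk, painted style"],
                  seeds=range(100),            # 100 seeds per prompt
                  guidance_scales=[6.0, 7.5, 9.0])
print(len(jobs))  # 300 renders for ONE prompt, and it adds up fast
```

That's why I say it's expensive as a one-off: all that tooling and compute gets amortized over nothing.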
I'm only guessing, so don't go talking shyt about it. It's from what I know about AI, which changes daily. Maybe there's a company selling the whole thing as a package, but I don't know of one.