Friday, November 20, 2009
People have commented on how smooth and cartoon-like our animations are. We really appreciate those comments. We achieve our look by not doing what a good majority of dev groups out there do: flip-book style animation. If you saw the texture sheet for one of those games, it would look like a series of full frames of the character. This is a very straightforward way of animating 2D games, but that smoothness comes at the cost of large amounts of texture space. Our texture sheet, by comparison, looks like a bunch of parts.
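To see why parts beat flip-books on texture space, here's a back-of-the-envelope sketch. The class layout, part count, and sizes are illustrative assumptions, not our actual data format: the idea is just that a part-based "frame" is a handful of transforms into a shared parts sheet instead of a full image.

```python
from dataclasses import dataclass

@dataclass
class PartTransform:
    part_id: int    # index into the shared parts texture sheet (assumed layout)
    x: float        # placement of this part for the current frame
    y: float
    rotation: float
    scale: float

def frame_bytes_flipbook(w, h, bytes_per_pixel=4):
    # A flip-book frame stores a full image of the character.
    return w * h * bytes_per_pixel

def frame_bytes_parts(num_parts, floats_per_part=4, bytes_per_float=4):
    # A part-based frame stores only an id plus a few floats per part.
    return num_parts * (4 + floats_per_part * bytes_per_float)

# Hypothetical numbers: a 256x256 character image vs. 20 parts per frame.
print(frame_bytes_flipbook(256, 256))  # 262144 bytes per frame
print(frame_bytes_parts(20))           # 400 bytes per frame
```

The parts sheet itself is paid for once, while every additional frame only adds transform data, which is what makes thousands of "frames" affordable.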
Lumpy's main set of animations is around 4500 "frames", while the animation sets for specific Pokers are usually around 500-900 "frames". To put this in time perspective, we animate at 30 fps, so Lumpy's main set of animations consists of about 2.5 minutes worth of animation. Each Poker-specific Lumpy animation set ranges from about 16 to 30 seconds worth of animation. And don't forget that gifts and pokers (though not all of them) are animated as well!
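For the skeptical, the math above checks out at 30 fps. The frame counts are the ones quoted in this post:

```python
FPS = 30  # our animation rate, from the post

def frames_to_seconds(frames):
    return frames / FPS

print(frames_to_seconds(4500))  # 150.0 s, i.e. 2.5 minutes
print(frames_to_seconds(500))   # ~16.7 s, low end of a Poker set
print(frames_to_seconds(900))   # 30.0 s, high end of a Poker set
```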
And as if handling that many frames of animation wasn't enough, consider that most of our objects (e.g. Lumpy) are relatively large. Oh, and you have limited memory. So what do you do to get smooth-looking animation with limited system resources? Our method requires talented artists, some slick tools, and a fair amount of CPU horsepower.
One of the problems we have encountered as of late is that because our animation data set is so large, it now takes a lot of time to process. Given that time is relative, what's "a lot", you ask? The worst file can take up to 2 hours. That's right, you read that correctly. How is that possible, you ask? Our process involves subdividing animations into a smaller set of parts. We then generate a texture sheet and animation data for all the parts. This is relatively straightforward with our tool set, but it requires that combo of art savvy and CPU processing power.
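The "generate a texture sheet for all the parts" step can be sketched with naive shelf packing. This is only a toy stand-in for what our tools actually do (the real packing and part extraction are far more involved), and all the names and sizes here are assumptions:

```python
def shelf_pack(parts, sheet_w):
    """Place rectangles (w, h) left-to-right in shelves on a sheet of
    fixed width. Returns the (x, y) placements and the sheet height used."""
    placements = []
    x, y, shelf_h = 0, 0, 0
    for w, h in parts:
        if x + w > sheet_w:       # part doesn't fit: start a new shelf
            y += shelf_h
            x, shelf_h = 0, 0
        placements.append((x, y))
        x += w
        shelf_h = max(shelf_h, h)  # shelf is as tall as its tallest part
    return placements, y + shelf_h

# Hypothetical part sizes for one character.
parts = [(64, 64), (32, 48), (128, 64), (64, 32)]
placements, height = shelf_pack(parts, sheet_w=128)
print(placements)  # [(0, 0), (64, 0), (0, 64), (0, 128)]
print(height)      # 160
```

Even a packer this simple hints at where the time goes: the real pipeline has to extract, deduplicate, and pack thousands of parts per animation set, and that cost scales with the data.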
Toward the end of EP3, it was getting prohibitively expensive time-wise to process our animations. A mistake could potentially become a half-day setback. We came to a crossroads where we needed to decide how to resolve this issue. Do we take the hit and upgrade machines? Do we just live with these long processing times? In the end we took a hybrid approach of getting a faster machine ... not top of the line ... but one dedicated to just processing the data, so I wouldn't need to worry about getting my dev environment up on a new machine. This meant that I got a present!
The good? It was a fast, shiny new dual-core workstation! But much like things in the world of QuitIt! ... all good things aren't always as good as you think. So just how long did it take this blazingly fast machine to churn through our data? The worst-case scenario now takes only 1.25 hours to process. Definitely faster, but still not quite fast enough! Looks like Ben needs to find some time for some tool optimizations as well!