What’s up designers, and welcome back to Rempton Games. Let’s talk about Spore – the 2008 life simulation strategy game made by Maxis and designer Will Wright. In Spore, players can guide the evolution of a species of creatures from the cell stage to a galactic civilization, but the part of the game that stands out the most to me, and I think to most players, is the creature creator. Spore’s creature editor lets players craft custom creatures that they can actually control as playable characters in the game. But how does any of this actually work? Let’s find out by exploring the modeling, texturing, and animation of the player-created creatures in Spore.
Before we get started, some of you may have noticed that it’s been a while since I’ve posted on this channel. That’s because a lot has happened while I’ve been working on this video, including moving to a new apartment, getting Covid, discovering a blood clot in my lungs, and having a minor existential crisis. Basically, this video was really difficult to make, so if you could leave a like, comment, share, or subscribe to this channel, I would really appreciate it. Without further ado, let’s get started.
First, let’s take a look at modeling – the creature creator allows players to sculpt a huge variety of creatures, from simple worms to incredibly complex, realistic monsters with hundreds of separate components. Getting this to work was a challenge from both a game-design standpoint, and a technical one.
From a game-design perspective, the designers had to find a balance between two extremes. On one hand, they could have shipped a full-blown 3D modeling tool like Blender or Maya with the game and let players create anything they could imagine. However, anybody who has used those tools can tell you they have a pretty steep learning curve, and most players aren’t professional 3D modelers. The other extreme would be to use only pre-made models, which is what most games do. The challenge was finding an interface that allowed players with no prior experience to easily create reasonable-looking creatures, while still allowing a huge range of possible designs. Spore accomplished this in two main ways – metaballs and rigblocks.
What is a metaball? Metaballs are gloopy 3D blobs that merge together when they get close to each other. They’re interesting because they work differently from most 3D models. Your standard 3D model is what is called a “mesh”, defined by a bunch of points in 3D space connected by edges. Metaballs, on the other hand, are what is called an “implicit surface”, which means they are defined by a mathematical function rather than by points in space. While I won’t go into all the technical details here, the basic idea is that each metaball has a field function associated with it, and the surface is drawn wherever that field crosses a threshold. When a metaball is alone this surface looks like a sphere, but when two metaballs get close, their functions add together, which is why their two surfaces merge into one.
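To make this concrete, here’s a toy Python sketch of the idea. This is not Spore’s actual code – the inverse-square falloff used below is just one common choice of metaball field function, and all the names are my own:

```python
import numpy as np

def field(p, centers, radii):
    """Summed influence of every metaball at point p.
    Uses a classic inverse-square falloff: f(p) = r^2 / |p - c|^2."""
    d2 = np.sum((centers - p) ** 2, axis=1)
    return float(np.sum(radii ** 2 / np.maximum(d2, 1e-9)))

def inside(p, centers, radii, threshold=1.0):
    """A point is 'inside' the merged surface when the total field
    meets the threshold -- this is what makes nearby balls fuse."""
    return field(p, centers, radii) >= threshold

# Two unit-radius metaballs: when they are 3 units apart, the midpoint
# between them is outside the surface; move them 2 units apart and the
# fields add up enough that the midpoint is inside -- the blobs merge.
far   = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
near  = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
radii = np.array([1.0, 1.0])

print(inside(np.array([1.5, 0.0, 0.0]), far, radii))   # midpoint of far pair
print(inside(np.array([1.0, 0.0, 0.0]), near, radii))  # midpoint of near pair
```

Real renderers then extract the threshold surface with something like marching cubes, but the merging behavior all comes from that simple field addition.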
In the Spore Creature editor, metaballs are used for the main body and limbs of the creature. As you stretch the limbs or spine, more metaballs get added to ensure a smooth, continuous surface. You can also shrink or grow individual metaballs to make parts of the creature fatter or thinner.
While metaballs are good for defining the general shape and structure of the creature, the more complicated pieces – hands, mouths, spikes, and so on – are created using rigblocks. A rigblock is an individual, hand-crafted body part that can be snapped onto your creature. Each rigblock has several degrees of freedom that let it be scaled, stretched, and transformed in pre-defined ways.
By combining these two systems, Spore struck a good balance of open-ended player creativity and pre-made artistic assets that ensured even the most inexperienced player could make something that looks cool and functions well within the game. This system had a number of additional benefits as well – it guarantees that the creature is compatible with the game’s animation system (which we will talk more about later in the video), and also allowed the creature to be stored using a much smaller amount of data than a traditional 3D model, which meant that creatures could easily be shared between games and uploaded to the Sporepedia.
Now that we’ve modeled our creature, it’s time to texture it. Texturing a creature typically requires two steps – first, you convert the 3D surface of the creature into a 2D plane, then you paint your textures on top of that 2D plane.
The first step is called “UV unwrapping”, and it’s basically witchcraft. If you’ve ever looked at the many, many different types of map projections that exist, you know how difficult it is to unwrap a simple sphere into 2D without introducing a ton of distortion, and most 3D models are much more complicated than a sphere. UV unwrapping is typically a very difficult, tedious, time-consuming process that can take skilled artists hours to accomplish for a sufficiently complicated model. And Spore has to do it automatically, in the background, for an arbitrary user-generated creature.
Luckily, Spore actually has an advantage that most texture artists don’t – nobody but the computer ever has to actually use the UV map that it creates. Part of the difficulty of creating a usable UV map is that a human has to use that map to texture the model. This means that a human artist needs to be able to associate different parts of the map with different parts of the model, so that they can create appropriate textures. Spore doesn’t have this limitation, so its only goal is to unwrap the model in a way that lies flat with minimal distortion – a much more achievable task.
It basically does this by selecting random faces on the surface of the model and grouping together connected faces that point in roughly the same direction. It then repeats this process until every face of the model has been assigned to a group. This results in a very ugly, but perfectly usable, UV map.
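The grouping step can be sketched as a greedy flood fill. This is my own illustrative reconstruction of the idea described above, not Spore’s actual algorithm – real systems also have to flatten and pack each chart afterwards:

```python
import numpy as np

def chart_faces(normals, adjacency, cos_limit=0.7):
    """Greedily group connected faces whose normals roughly agree with a
    seed face's normal. normals: (n, 3) unit face normals; adjacency:
    list of neighbor-face-index lists. Returns a chart id per face."""
    n = len(normals)
    chart = [-1] * n           # -1 means "not yet assigned to a chart"
    next_id = 0
    for seed in range(n):
        if chart[seed] != -1:
            continue
        chart[seed] = next_id
        stack = [seed]
        while stack:            # flood-fill outward from the seed face
            f = stack.pop()
            for g in adjacency[f]:
                # accept neighbors still facing roughly the seed's way,
                # which keeps each chart flat enough to unwrap cleanly
                if chart[g] == -1 and np.dot(normals[seed], normals[g]) >= cos_limit:
                    chart[g] = next_id
                    stack.append(g)
        next_id += 1
    return chart

# Four faces in a strip: two facing up, two facing sideways.
normals = np.array([[0, 0, 1], [0, 0, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
adjacency = [[1], [0, 2], [1, 3], [2]]
charts = chart_faces(normals, adjacency)
print(charts)  # → [0, 0, 1, 1]
```

The key point is exactly what the video describes: nothing about this is pretty, but since no human ever has to paint on the result, ugly-but-flat is good enough.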
The next step is to actually apply textures to these maps. The first solution was to basically give the players a virtual “paintbrush” that they could use to color the surface of the creature – as they moved the brush around the creature’s surface, the game would automatically translate the location on the creature to the corresponding location on the UV map, and color that location in. This method worked, but was considered to be too complicated – it took a lot of effort and skill to get good results, and most players wouldn’t want to manually paint every surface of the creature every time they wanted to change the color, so they needed to find a way to make the painting more automatic.
The solution they came up with is a little strange, but very clever. Instead of the players moving the “paintbrush” around the creature, they programmed particles to move around the surface of the creature, basically “painting” as they went. Because the particles move along the creature’s surface, the resulting textures are guaranteed to be continuous, and the particles can be programmed to move in particular ways to create stripes, spots, and other patterns. In the final game, players customize the creature’s texture by combining three layers of detail – a base coat, a main pattern, and a detail layer. Each layer has a corresponding set of particle painters, which combine to form the character’s overall texture.
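Here’s a minimal sketch of the particle-painter idea in Python. It paints on a flat wrapped grid rather than a real creature surface, and every detail (random-walk motion, deposit amounts) is my own simplification – Spore’s actual painters follow surface geometry and hand-tuned movement rules:

```python
import numpy as np

def particle_paint(size=64, n_particles=8, steps=200, seed=0):
    """Random-walk 'painter' particles depositing paint onto a texture.
    Wrapping at the edges stands in for walking on a closed 3D surface,
    which is what keeps the resulting pattern continuous."""
    rng = np.random.default_rng(seed)
    tex = np.zeros((size, size))
    pos = rng.integers(0, size, size=(n_particles, 2))  # (row, col) per particle
    for _ in range(steps):
        # np.add.at deposits correctly even when two particles share a texel
        np.add.at(tex, (pos[:, 0], pos[:, 1]), 1.0)
        # each particle wanders one step in a random direction, wrapping around
        pos = (pos + rng.integers(-1, 2, size=pos.shape)) % size
    return tex

tex = particle_paint()
```

Swapping the random walk for directed movement (say, always drifting along one axis) is what turns blotches into stripes; biasing particles to cluster gives spots.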
Now that we have a fully modeled and painted creature, the final step to bringing it to life is to animate it. Traditionally, 3D animation is done in one of two ways – either through motion capture, in which the motions of a real-life animal or object are tracked and fed into a computer, or through hand-animation, in which an artist painstakingly manipulates a 3D model into different poses to create the animation. Both of these methods require access to the 3D model being animated, and because the models in Spore don’t exist until the player creates them, neither method is applicable. Instead, the developers had to create a new way of authoring animations once that could then be applied to an infinite number of user-created creatures.
This animation system is pretty complicated, so I think the best way to describe it would be through an example. Suppose you wanted to create a dance animation where the creature would wave its arms and stomp its feet. Because you don’t know what the creature is going to look like, you first break the animation into multiple “channels” – one channel for the arms, one for the feet. Then, for each channel you need to create a “query” to select which body parts each channel should apply to. For example, when designing the animation you don’t necessarily know how many arms the creature will have (or if it will even have arms), so you need to be specific in your queries. Do you want every pair of arms to wave? Or maybe just one pair? Suppose we want only the highest pair of arms, and we want all feet to stomp – first all the left feet, then all the right feet.
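Those queries can be sketched in code. To be clear, the part records, field names, and query functions below are purely my own illustration of the idea, not Spore’s actual API:

```python
# Hypothetical body-part records for a player-made creature.
parts = [
    {"kind": "arm",  "height": 2.0, "side": "left"},
    {"kind": "arm",  "height": 2.0, "side": "right"},
    {"kind": "arm",  "height": 1.0, "side": "left"},
    {"kind": "arm",  "side": "right", "height": 1.0},
    {"kind": "foot", "height": 0.0, "side": "left"},
    {"kind": "foot", "height": 0.0, "side": "right"},
]

def query_highest_pair(parts, kind):
    """Select only the highest pair of a limb kind (empty if none exist),
    so the animation still works on creatures with any number of arms."""
    limbs = [p for p in parts if p["kind"] == kind]
    if not limbs:
        return []
    top = max(p["height"] for p in limbs)
    return [p for p in limbs if p["height"] == top]

def query_all(parts, kind, side=None):
    """Select every part of a kind, optionally on one side
    (e.g. all left feet stomp first, then all right feet)."""
    return [p for p in parts
            if p["kind"] == kind and (side is None or p["side"] == side)]

wave_targets = query_highest_pair(parts, "arm")   # just the top pair of arms
stomp_first  = query_all(parts, "foot", "left")   # every left foot
```

Because the queries describe *which kinds of parts* to animate rather than naming specific bones, the same dance channel applies whether the creature has two arms, six, or none.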
Once you’ve broken your animation into channels and selected your body parts, the process is pretty similar to creating a standard 3D animation. The artist moves the body parts around into various poses, and the animation system animates the creature between these poses. Once the motions are animated, the artist needs to tell the system how to generalize them. For example, your ground-stomping animations might be marked as “ground relative”, so that all feet hit the ground at the same time, regardless of how long the legs are. On the other hand, you might specify that your arm waving animation is scaled based on limb length, so that longer limbs result in a bigger motion than shorter limbs. These sorts of instructions help your specific animations generalize to a wide range of creatures, while still looking pretty natural. Overall, the system is a bit of a mix between traditional animation and programming – you hand-craft the animations, then use “code” to explain to the animation system how to apply and modify the animations for different creatures.
This describes the main animation system, but there are a few special edge-cases. First, some animations are branched, which means the game has multiple potential animations and needs to decide which to use. An example is grabbing an object – if the creature has graspers, they will use those graspers to grab objects. If they don’t, they will use a different animation to grab objects with their mouths.
There is also a secondary animation system called “jiggles”, which provides physics-based animation for parts of the creature that aren’t actively being animated. For example, suppose your creature has a long tail – while the creature is running, its tail might naturally bob up and down, creating a greater sense of realism.
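Secondary motion like this is usually driven by a damped spring pulling each loose part back toward its rest pose. Spore’s exact implementation isn’t public, so the following is only a generic sketch of that standard technique, with made-up constants:

```python
def jiggle_step(pos, vel, target, stiffness=40.0, damping=6.0, dt=1.0 / 60.0):
    """One damped-spring integration step: the part accelerates toward its
    rest pose (stiffness) while its motion is bled off (damping)."""
    accel = stiffness * (target - pos) - damping * vel
    vel += accel * dt
    pos += vel * dt
    return pos, vel

# A tail tip knocked 1 unit out of place oscillates and settles back
# toward its rest position over a few simulated seconds.
pos, vel = 1.0, 0.0
for _ in range(300):  # 300 steps at 60 fps = 5 seconds
    pos, vel = jiggle_step(pos, vel, target=0.0)
```

The stiffness and damping values control the character of the wobble – a floppy tail wants low stiffness, a taut antenna high stiffness with heavy damping.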
Finally, there is a specialized animation system for the creature’s “gait” or walk, which groups together legs based on length and determines an appropriate walking animation for each group.
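The leg-grouping step might look something like this simple sketch – again, my own invention for illustration, with the tolerance value and greedy grouping strategy being assumptions rather than Spore’s real logic:

```python
def group_legs(leg_lengths, tolerance=0.25):
    """Greedily group legs whose lengths are within `tolerance` of a
    group's first member, so each group can share one walking gait."""
    groups = []
    for length in sorted(leg_lengths):
        for g in groups:
            if abs(length - g[0]) <= tolerance:
                g.append(length)   # close enough: walk with this group
                break
        else:
            groups.append([length])  # no match: start a new gait group
    return groups

# Two short legs and two long legs fall into two gait groups:
print(group_legs([1.0, 1.1, 2.0, 2.05]))  # → [[1.0, 1.1], [2.0, 2.05]]
```

Each group then gets a walk cycle timed to its leg length, which is why a creature with mismatched legs still moves plausibly instead of tripping over itself.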
Add all these steps together – the modeling, texturing, and animation – and you have a fully functioning, completely custom, player created creature! Of course, Spore is a very complicated and interesting game, and I was only able to scratch the surface in this video, so if you have any questions or topics that you’d like me to explore in more detail, let me know in the comments down below. If you liked this video, make sure to leave a like, and subscribe if you want to see more videos like this in the future. And join me next time for the next entry in my “TCG Design Academy” series. Until then, thank you so much for watching and I’ll see you all next time.