Anatomy of a Feature

Posted by Max on 10-12-2014

One of the decisions we made early on in development was that we weren’t going to treat the first person view of the body any differently than the third person view.

In Natural Selection 2 (and most first person games), when you see your arms and hands from first person, you’re looking at a special model that’s designed specifically for that view. When another person looks at you, they’re looking at a different model that includes the full body and most likely has less detail. Different animations are used for each view, and the first person mode can even use additional tricks, like drawing the hands with a different field of view than the world, to make it look as good as possible.

As you can imagine, this doubles the work involved with creating artwork for the player. Since we want users to make their own artwork — and since we have a small team — we wanted to minimize the art requirements.

The tricks with the field of view are optimized for the case where you’re holding something (typically a gun). Using a small field of view for the weapon can make it look more natural, but it causes problems when you want the player to be able to interact with the world. In games like Far Cry and Battlefield you may have noticed how the camera’s field of view changes when you go to interact with something or drive a vehicle.
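
To make those tricks a little more concrete, here’s a minimal, self-contained sketch in the spirit of the approach described above rather than any particular engine’s code: the world is drawn with the player’s normal field of view, and the hands/weapon model is drawn with its own narrower projection and a tight depth range.

#include <cmath>
#include <cstdio>

// Column-major 4x4 matrix, laid out the way OpenGL expects.
struct Mat4 { float m[16]; };

// Standard perspective projection (the same shape as the old gluPerspective).
Mat4 Perspective(float fovYRadians, float aspect, float nearZ, float farZ)
{
    const float f = 1.0f / std::tan(fovYRadians * 0.5f);
    Mat4 p = {};
    p.m[0]  = f / aspect;
    p.m[5]  = f;
    p.m[10] = (farZ + nearZ) / (nearZ - farZ);
    p.m[11] = -1.0f;
    p.m[14] = (2.0f * farZ * nearZ) / (nearZ - farZ);
    return p;
}

int main()
{
    const float pi = 3.14159265f;
    const float aspect = 16.0f / 9.0f;

    // Pass 1: the world is rendered with the player's normal field of view.
    Mat4 worldProj = Perspective(90.0f * pi / 180.0f, aspect, 0.1f, 1000.0f);

    // Pass 2: the hands/weapon are rendered with a narrower field of view and
    // a very tight depth range; a real renderer would also clear the depth
    // buffer between the two passes so the arms never clip into nearby walls.
    Mat4 viewModelProj = Perspective(55.0f * pi / 180.0f, aspect, 0.01f, 10.0f);

    std::printf("world f = %.3f, view model f = %.3f\n",
                worldProj.m[5], viewModelProj.m[5]);
    return 0;
}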


First person model of the hands from Natural Selection 2


Although we’ve made some guns, the games in Future Perfect aren’t going to be all about shooting things, so it doesn’t make sense to optimize the first person mode for that.

With all that taken into consideration, we made the decision to use the same model and animation for first and third person.

There’s a bit of a problem though.

In “real life” you see through your eyes, which are attached to your head. As your head moves, your viewpoint moves with it. When you run, your viewpoint bounces all over the place. Your brain is very adept at stabilizing the view based on its sense of the motion, so you don’t perceive the bouncing.

In a video game, if the camera is attached to the head you will experience the bouncing, since your brain doesn’t do the stabilization. The result is a game that’s hard to play and will probably make you feel queasy.

For this reason, games rarely attach the camera to the head. Instead, the camera is fixed relative to the player’s position, usually somewhere inside the space of the head. Without any special treatment, this gives you a creepy inside-out view of the player’s face.


Creepy view from inside a head
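
As a tiny, purely illustrative sketch of that camera placement (the names and numbers here are made up, not engine code): the first person camera follows the player’s root position at a fixed eye height, rather than being parented to the animated head bone, so the bounce baked into the run animation never reaches the view.

#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 Add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

// The camera is offset from the player's root transform at a fixed eye
// height, so it stays steady while the animation bobs the head around.
Vec3 FirstPersonCameraPosition(Vec3 playerOrigin)
{
    const Vec3 eyeOffset = { 0.0f, 1.7f, 0.0f };  // fixed eye height, in meters
    return Add(playerOrigin, eyeOffset);
}

int main()
{
    // The player's root position is stable while running...
    Vec3 playerOrigin = { 10.0f, 0.0f, 5.0f };

    // ...while the animated head bone bobs up and down every step. Attaching
    // the camera to it would pass all of that motion straight to the player.
    Vec3 headBoneWorldPos = { 10.02f, 1.74f, 5.0f };

    Vec3 eye = FirstPersonCameraPosition(playerOrigin);
    std::printf("camera at (%.2f, %.2f, %.2f), head bone at (%.2f, %.2f, %.2f)\n",
                eye.x, eye.y, eye.z,
                headBoneWorldPos.x, headBoneWorldPos.y, headBoneWorldPos.z);
    return 0;
}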


For Future Perfect, we wanted to solve this problem by hiding the head when drawing the player for the first person view, but showing it for the third person view.

One of the really interesting parts about working on Future Perfect is the amount of consideration we put into engine features. Since the engine is a tool for users, we want any feature we create to be flexible and simple (we jokingly use the portmanteau “flexsimple”).

We considered many different approaches for hiding the head for the local player. The main challenge is that your player’s model needs to be treated differently on your machine than on other people’s machines. It’s a bit more complicated than that, though, since locally the model also needs to be treated differently depending on whether we’re rendering the main view or the shadow, which should still include the head.

Ultimately, we didn’t want to involve complicated scripting or special materials to make this work, since that fails the “simplicity” test. And since our artwork is designed to be reused in new games, we wanted the same player model we use for first person to work when placed in a third person game.

To accomplish this, we augmented our model format: certain pieces can be designated as parts that are automatically hidden for a camera if the view point is inside the volume of that piece. This means that all of the player models are treated uniformly, without special logic. It also means the game doesn’t need to do anything different depending on whether it’s operating in first person or third person mode; the rendering system handles that automatically. The same is true for the player’s shadow, since shadows are computed by rendering the scene from the light’s point of view.
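
Here’s a rough sketch of what that per-piece check could look like; the names are hypothetical stand-ins rather than our actual engine code, but the idea is the same: each piece carries a flag and a bounding volume, and the renderer tests the view point of whatever view it’s currently drawing against that volume.

// Illustrative names only; not the actual Future Perfect engine code.
struct Vec3 { float x, y, z; };

// A simple axis-aligned bounding box standing in for the piece's volume.
struct Bounds
{
    Vec3 min, max;
    bool Contains(const Vec3& p) const
    {
        return p.x >= min.x && p.x <= max.x &&
               p.y >= min.y && p.y <= max.y &&
               p.z >= min.z && p.z <= max.z;
    }
};

struct ModelPiece
{
    Bounds bounds;             // world-space volume of the piece
    bool   hideWhenViewInside; // flagged when the model is authored
};

// Evaluated once per view being rendered. For the local player's main view
// the view point is the game camera, so the head piece gets culled; for a
// shadow pass the view point is the light's, which is never inside the head,
// so the head still casts a shadow. The game never branches on first vs.
// third person.
bool ShouldDrawPiece(const ModelPiece& piece, const Vec3& viewPoint)
{
    return !(piece.hideWhenViewInside && piece.bounds.Contains(viewPoint));
}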

You can see the system in action here:

https://www.youtube.com/watch?v=6NKhoSyra2U

When the game camera is inside the head, the head isn’t rendered, so it doesn’t interfere with the view. When the camera is moved to an over-the-shoulder view, the head is visible. Moving the editor camera inside the head shows how the visibility changes dynamically.

There is a little bit of extra setup necessary when creating a new model that uses this feature. We considered automatic approaches, but ultimately these would have been more work to implement, would have been slower, and wouldn’t have given as much control to the artist. Since creating a model is a process for more advanced users, we’d rather have a little extra complexity there than push it onto the newer user who just wants to use the model in their game.

While the idea for this solution might seem obvious, to arrive at it we needed to explore and reject a bunch of other avenues. If we were building an engine only for our team, one of those earlier solutions might have been fine. Since we’re building an engine to empower players to create things, each feature demands a lot more consideration.
