Road to GDC: Crafting the virtual reality cinematics in "The Gallery"
Lessons from the premiere cinematic VR game
I first started using Unity when I was in university in 2010. With little programming knowledge, Unity was perfect for getting my feet wet. I could just throw stuff together and kitbash scenes, and it ended up taking about a weekend to learn all the Unity tools I would need.
When it came time to develop the first episode of The Gallery, Call of the Starseed, a cinematic adventure game for the resurgent virtual reality medium, there weren’t really any rulebooks. There wasn’t much to guide us on how to create cinematics in a true 3D space, so when we received our first mocap suit, we were basically flying by the seat of our pants.
With a background in Digital Media, I was stoked to dig into the cinematics and work with our writer to develop the game’s final scene: a seven-minute monologue that wasn’t just rare for a video game, but unprecedented in a VR game. The idea to put the actor inside VR to perform the scene then came about naturally from the team.
For Call of the Starseed, I turned the final scene into a VR set with blocking points and virtual cue cards to guide the actor’s movement and give them a virtual teleprompter. The actor put on a VR headset and could play out the entire scene, moving naturally in the environment as his character. He would read each line, move to his marks, and the scene would play out alongside him. It made the sequence so much more fluid and interactive for the actor, so I took his timings and adjusted the scene to fit.
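For the curious, here’s a minimal sketch of how a setup like that might be wired together in Unity. The class names, fields, and trigger radius are my own illustration of the idea, not code from The Gallery:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical blocking point: a spot on the virtual set the actor
// must reach before the scene advances.
public class BlockingPoint : MonoBehaviour
{
    [TextArea] public string line;       // dialogue shown at this mark
    public float triggerRadius = 0.75f;  // how close the actor must stand

    public bool ActorIsOnMark(Transform actorHead)
    {
        // Ignore height so the check works regardless of actor stature.
        Vector3 a = actorHead.position; a.y = 0f;
        Vector3 b = transform.position; b.y = 0f;
        return Vector3.Distance(a, b) <= triggerRadius;
    }
}

// Hypothetical teleprompter: shows the current line on a world-space
// cue card and steps through the marks as the actor hits them.
public class VirtualTeleprompter : MonoBehaviour
{
    public Transform actorHead;   // the VR headset transform
    public Text cueCard;          // world-space UI text near the set
    public BlockingPoint[] marks; // blocking points in scene order

    int current;

    void Update()
    {
        if (current >= marks.Length) return;

        cueCard.text = marks[current].line;

        if (marks[current].ActorIsOnMark(actorHead))
        {
            // Log the hit time so the scene can be retimed to the actor.
            Debug.Log($"Mark {current} reached at t={Time.time:F2}s");
            current++;
        }
    }
}
```

Logging when each mark is hit is the sort of thing that would make it easy to take the actor’s timings back into the scene afterward.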
By the end, the sequence became incredibly dynamic. Even though I went through it a dozen times to do timings, I never really got bored. There’s always stuff moving around you in the scene, and there’s always audio even when the character is out of sight. That’s the biggest part of cinematics in VR: It needs to feel like the character is organically in the scene with you.
The feedback was very positive, too; most players come out of the game kind of slack-jawed at the ending because of how immersive the final sequence feels. And that kind of stuff is awesome to hear; that scene gave me some of the most valuable lessons I learned from Call of the Starseed.
In VR, players want to explore more than ever. They want to get up close and personal with characters, and it’s hard to make the player not feel weird when they’re in a VR space with a character who doesn’t recognize or respond to them.
Our approach with the second episode of The Gallery, Heart of the Emberstone, is radically different. Episode 1 had 18 separate cinematics, including various interactions with characters. In Heart of the Emberstone, there are more than 50 cinematics, but they’re all shorter, almost snippets.
If you don’t lock the player down during a cinematic, they will run around freely and look at other things, even knowing that something is going on elsewhere. Keeping the cinematics short means keeping the player’s attention and making them want to be there; they can watch something quickly and then return to their adventure without feeling like they’ve lost time. And even if they do miss a cinematic, they can replay it, or watch it from a bunch of different angles to catch parts of the story they may have missed the first time.
Earlier this month, we wrapped the first major shoot day for Heart of the Emberstone using Xsens mocap. And just like with our first episode, the organic process meant new ideas for animations formed during (and after) recording. Luckily, Unity is the kind of tool where if you have an idea and you draw it out, you can start playing in-engine and almost always make it happen.
One example of that is a scene in Call of the Starseed where the professor jumps down from a ledge before walking up to and pushing a button. But the jump animation wasn’t planned—I suited up and recorded mocap for the animation myself later on, merging both recordings with Unity’s Mecanim. The scene turned out so smooth that players had no idea it was even two animations.
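The merge itself isn’t described in detail, but in Mecanim this kind of stitch is commonly done by crossfading between two Animator states. Here’s a minimal sketch under that assumption; the state names are placeholders, not the actual ones from the game:

```csharp
using UnityEngine;

// Hypothetical sketch: blends a separately recorded jump clip into the
// original take by crossfading between two Animator states. The state
// names ("LedgeJump", "WalkToButton") are placeholders.
public class JumpBlend : MonoBehaviour
{
    public Animator animator;
    public float blendTime = 0.25f; // crossfade duration, in seconds

    // Kick off the stitched sequence with the new jump recording.
    public void PlayJumpThenWalk()
    {
        animator.Play("LedgeJump");
    }

    // Called from an animation event placed near the end of the jump
    // clip, so the original walk take picks up mid-motion.
    public void OnJumpLanding()
    {
        animator.CrossFadeInFixedTime("WalkToButton", blendTime);
    }
}
```

Because the crossfade blends the outgoing and incoming poses over the fade window, a well-timed handoff can read as a single continuous performance.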
In Episode 2, we’re using that same technique again. I can take the Xsens mocap data, throw it in Unity, test it, and see if it works right away. If an arm needs another pass, or turns out a bit janky, I can just create a mask and cover it with a better recording or a stock animation. There are other variables I can tweak, too—if the actor’s head goes further back than I’d like, I can restrict how far it goes and it’ll still look completely smooth. I can also make Greedo shoot first.
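As a rough sketch of both tricks: the arm fix assumes an Animator layer carrying an arm-only AvatarMask (assigned in the editor) that plays the replacement clip, and the head fix clamps the mapped head bone after the Animator has written its pose. The layer index and angle limit here are illustrative values of my own:

```csharp
using UnityEngine;

// Hypothetical cleanup pass for a humanoid mocap rig.
public class MocapCleanup : MonoBehaviour
{
    public Animator animator;
    public int armFixLayer = 1;      // layer with an arm-only AvatarMask
    public float maxHeadPitch = 30f; // degrees the head may tilt back

    Transform head;

    void Start()
    {
        // Humanoid rigs expose mapped bones; grab the head.
        head = animator.GetBoneTransform(HumanBodyBones.Head);

        // Fade the masked layer in so its clip overrides only the arm,
        // leaving the rest of the mocap take untouched.
        animator.SetLayerWeight(armFixLayer, 1f);
    }

    void LateUpdate()
    {
        // LateUpdate runs after the Animator writes the pose, so this
        // clamp sticks without fighting the mocap data.
        Vector3 e = head.localEulerAngles;
        float pitch = e.x > 180f ? e.x - 360f : e.x; // signed pitch
        e.x = Mathf.Clamp(pitch, -maxHeadPitch, maxHeadPitch);
        head.localEulerAngles = e;
    }
}
```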
Call of the Starseed was great to work on because we learned so much from it, but Heart of the Emberstone will be on another level. We’ve gone into it with even more experience and ambition, and the animations are coming out even more polished. Soon, we’ll be able to work on dynamic animations. My hope for the future is to develop scenes where the player is an actor, and all the characters can interact with you and change the outcome of the scene—it’s awesome to be at the front lines of cinematic VR gaming.
Denny Unger