This is a guest post from VR Playhouse’s beloved Head of Production, Derin Turner. Sometimes he gets frustrated.
Over the past year, we have been hard at work here at VR Playhouse, experimenting with virtual reality, augmented reality, mixed reality, and even astral-projecto-realities. Some things have worked, others not so much. We've laughed, we've cried, we've gone on spiritual journeys to rediscover ourselves and the meaning of life. We've broken a lot of things. All thanks to the wondrous world of VR. And through all of this, we have learned a lot. Things that we now want to pass on to you, so that we may spare you some of the pain that this gloriously sadistic medium inflicts.
The first thing we've learned is this: At its core, the idea of VR is to transport your brain-noggin to alternate planes of existence. It is an illusion. And an illusion must be flawless. If you cannot pull off that illusion, well, then personally I would rather spend my time watching a normal 2D movie. At least with a normal movie I can suspend my disbelief. You want to know why? Because filmmakers make sure that nothing in the frame takes away from that illusion. Filmmakers and storytellers are magicians, and we must hold ourselves to that standard.
Holding ourselves to that kind of standard is even more important given how technically complex our work has become. Just one of our cameras is 6 to 100 times as labor-intensive as the increasingly archaic two-dimensional model. So let's come prepared and organized, shall we? If you showed up to a traditional film set unprepared and without so much as a shot list, you would immediately be replaced by someone far more competent. Planning for and executing a VR shoot is no different, just, like, a lot more complicated.
And you know how, on a film set, “We’ll fix it in post” are some of the most ghastly words you can mutter to a crew? It’s 6 to 100 times worse in VR. So if you’re trying to decide whether to utter those ghastly words on a VR shoot, just use this simple formula:
[(time of professional post workflow) + ("fix it in post")] × (cameras used) × π = whyyyyyyyy
In other words, the answer to this equation is always infinite sadness.
The following guidelines will help greatly reduce your workload and lead to a better experience in production, in post, and in the headset.
In VR, resolution is king. Every pixel counts. The higher the resolution we start with, the easier it is to stitch, the easier it is to composite, the easier it is to color, and the better it will look in the headset. Higher frame rates are also always good, but with GoPros the higher frame rates come with a loss of quality, particularly in low light. 30 fps is acceptable for most material without a lot of action. 60 is better. Go 80-120 for action if you can. And bear in mind that with most cameras, the higher the frame rate, the lower the resolution. It is best to find a happy medium between the two, and NEVER dip below 1080p.
Make sure your cameras are set to capture the flattest image possible, with as little contrast as possible. Lower your ISO to reduce noise. We even suggest stopping the gain down to -1 as your baseline. This will help when you stitch and when you color, and help balance the overall image. If one camera is pointing at a much brighter source than the others, gain up to 0 or +1 on that camera. The automatic settings will want to gain down; we need to counter this so that the exposure stays consistent from camera to camera. And for god's sake, make sure every camera's date and time are accurate.
Most importantly, always shoot 4:3 so that you may use the entire sensor. We need every pixel possible for the best stitch. Some cameras may have higher resolution but only shoot 16:9. Go with whatever resolution still gives you the option to shoot 4:3.
I can’t tell you how many pieces I’ve watched where every stitch line is visible. This completely takes me out of the experience. Why am I seeing double? I hate seeing double.
To combat this, make sure to always place your action in one camera. Have action all around? Great! But always try to keep individual subjects and their actions contained in one camera, and if you need to cross a stitch line, do so as quickly as possible and as close to your background as possible. The closer you are to the camera, the harder it is to fix. This goes for objects too. We are dealing with very wide-angle lenses that distort things the closer they are, so we recommend that everything (be it animal, vegetable, or mineral) stay at least 4 ft from the cameras. If you have anything closer than 4 ft, take photos or film from the EXACT SAME position wherever a stitch line or warping will be. This is called a plate shot. Also remember that anything beyond 25 ft will begin to lose detail. The best plane of action is between 8 and 20 ft.
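Those distance bands fall out of simple pixel math. Here's a back-of-envelope sketch of how much sensor real estate a subject occupies as it backs away; the FOV and resolution numbers are our own rough assumptions for a GoPro-style camera, not measured specs:

```python
import math

# Assumed numbers for a GoPro-style camera: roughly 120 degrees of
# horizontal FOV spread across 1920 horizontal pixels = 16 px per degree.
FOV_DEG = 120.0
WIDTH_PX = 1920
PX_PER_DEG = WIDTH_PX / FOV_DEG

def pixels_across(subject_width_ft, distance_ft):
    """Pixels a subject of the given width spans at the given distance."""
    # Angle the subject subtends, converted to sensor pixels.
    angle = math.degrees(2 * math.atan(subject_width_ft / (2 * distance_ft)))
    return angle * PX_PER_DEG

# A 1.5 ft wide torso at the distances discussed above:
for d_ft in (4, 8, 20, 25, 40):
    print(f"{d_ft:2d} ft -> {pixels_across(1.5, d_ft):4.0f} px")
```

Detail roughly halves every time the distance doubles, which is why subjects past 25 ft start turning to mush and the 8-20 ft band is the sweet spot.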
Some rigs do not shoot full spherical. The last thing anyone wants to see is a gaping hole in the image. If there is a hole in the sky or ground, take the extra 10 minutes to film the holes from the exact same point as the rig. If you are shooting with a drone, please put a camera on top of the drone as well. Which brings me to:
Jerking the camera all over the place will only cause our equilibriums to go into defcon barf, and nobody wants that. If you’re going to move the camera, pick a direction, and move smoothly in that direction. Think of your camera rig as a compass. One camera should always be pointed north. No matter where you turn, where you go, that designated camera always points north. And bear in mind, the more you move, the more difficult it will be to clean up the stitch, especially in enclosed spaces.
Also, if you are shooting high action with fast-moving cameras, I highly suggest bumping up your frame rate (but not too much). Every millisecond of difference between frames matters when syncing. But most importantly, when shooting fast-moving action (like on a vehicle), shoot with neutral density filters. These tame rolling shutter, a nasty little bugger that can greatly reduce the quality of your stitch lines.
This one really gets me. If not properly executed during production, a bad sync will ruin any stitch and can take days to get right. Yet it baffles me how many people completely fudge this awfully simple step…
There is this revolutionary piece of equipment called a sync slate. Invented by the studios in the 1930s for "talkies," the clapstick quickly became the industry standard for matching audio and video. The same can be used in VR. Why most have chosen to completely ignore this fantastic (and affordable) piece of technology is completely beyond me. Do yourself a favor and do what filmmakers have been doing for years: USE A SYNC SLATE. Clapping your hands is a poor and unreliable way of getting a good sync; skin smacking against skin is easily muffled in a noisy environment.
But please, CLAP THE SLATE ONLY ONCE. People think they need to clap in every camera, flapping their hands around like a moth at a lightbulb. This rapid-fire clapping confuses automatic syncing software and often results in having to hand-sync, a time-consuming and unnecessary step. Present the slate in every camera, but only clap into one. The clean, crisp audio is what's most important. While you're at it, clap once at the beginning of the take and once at the end, just for safety.
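To see why one clean clap beats a flurry of them, here's a minimal sketch of how audio-based auto-sync works under the hood: cross-correlate two tracks and take the lag with the strongest match. This is the general technique, not any particular stitching tool's algorithm, and synthetic impulse "claps" stand in for real audio:

```python
import numpy as np

def find_offset(ref, other):
    # Cross-correlate the two audio tracks; the lag with the highest
    # correlation is the sample shift between the recordings.
    corr = np.correlate(other, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

SR = 48000                       # sample rate
clap = np.zeros(2 * SR)
clap[SR] = 1.0                   # one crisp clap, 1 s into camera A's track
cam_a = clap
cam_b = np.roll(clap, -SR // 2)  # same clap as heard by camera B, which
                                 # started rolling 0.5 s after camera A

offset = find_offset(cam_a, cam_b)
print(offset / SR)               # -0.5: slide B half a second to sync
```

A second clap in another camera puts a second, equally tall peak in that correlation, and the software has to guess which one is real. That guess is how you end up in hand-sync purgatory.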
Syncing software can also use motion to sync. It's pretty amazing how accurate this can be when done properly. We recommend always syncing both ways: with a clap and a twist. To get an accurate twist, you need one quick, definitive 360-degree twist of the rig. That means one hard twist and no more. Also, don't spin with the camera. If you do, at least one of the cameras will always be facing your torso and will register no motion to sync on.
Ahh, media management. What a glorious thing. When it works, it is the greatest gift we can give ourselves, saving us hours, even days, off our lives. And when it doesn't, we stomp around like spoiled toddlers, screaming obscenities at the nearest human in what can only be diagnosed as a rage-induced mania. I've been there. It ain't pretty. We are dealing with far too many cameras to cut corners on this step. Do it right, and you could save a life. Seriously.
Camera logs are an extremely valuable resource in post. With VR we should follow this cardinal rule: treat every shot like a VFX shot. Because whatever you shoot, you will inevitably be using 3D and compositing software somewhere in your post pipeline, and once you do, it's a VFX shot. In film, VFX shots are meticulously logged and mapped out: lenses, lighting angles, distances to subjects, plate shots, tracking points. All of this info is extremely valuable.
We recommend the following structure when logging and organizing VR media. I'm not saying this is the only way, but it works for us.
Each SHOT (or "take") folder must contain all of its corresponding cameras' files, renamed as follows:
When consolidating media, the folder structure should be as follows:
There should only ever be one "reel_1" (or r1). Reels are numbered in the order the DIT receives and ingests them onto hard drives over the course of the entire project. SHOT# starts over at #1 for every reel. SCENE and TAKE are not required, but helpful. CAMERA is for each individual camera attached to a rig. If using multiple rigs, such as "a cam" and "b cam," the letter should come before the reel number.
We recommend renaming the media in each CAMERA folder before moving it into the individual SHOT folders.
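To make the convention concrete, here's a hypothetical naming helper in the spirit of the scheme above. The exact separators, padding, and field order are our own guesses, so adapt it to whatever your DIT actually uses:

```python
def clip_name(rig, reel, shot, camera, scene=None, take=None, ext="mp4"):
    """Build a clip filename: rig letter, reel, shot, optional scene/take, camera."""
    parts = [rig, f"r{reel}", f"sh{shot:03d}"]
    if scene is not None:
        parts.append(f"sc{scene:02d}")
    if take is not None:
        parts.append(f"t{take:02d}")
    parts.append(f"cam{camera}")
    return "_".join(parts) + f".{ext}"

# "b cam" rig, reel 1, shot 4, scene 3, take 1, camera 2 on the rig:
print(clip_name("b", 1, 4, 2, scene=3, take=1))
# -> b_r1_sh004_sc03_t01_cam2.mp4
```

The point isn't this exact format; it's that a script, not a sleep-deprived PA, generates the names, so "sh012" never collides with "shot12_final_v2".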
6. Stereo vs mono
The problem with stereo is that it looks like somebody ate your footage, digested it, and then barfed it all over the view monitor. The images sometimes lag, there's ghosting all over the place, and the stitch lines are twice as bad. And if you turn your head sideways, guess what? Doesn't work. So why would I want to look at something that makes me completely self-aware?
If you shot stereo, well, you just pissed away at minimum half your resolution and overall quality. Think about all those pixels you could have had if you had just shot mono! All those cameras just wasting away! My god. And to top it off, you had to use a wider lens than you really needed. Well, guess what: that fisheye lens you put on there doesn't even use the whole sensor. So now you've thrown away another 30% of your potential. All in the name of crappy stereo. Look at it this way: if you were filming a movie and had the option to shoot it in HD stereo or in mono fit for an IMAX screen, which would you choose?
We here at VR Playhouse believe that if you are shooting video, mono (for now) reigns supreme. The trick is making really good mono. So good, in fact, that you may convince people it's 3D. It happens with our content all the time. How does one do this, you ask? By filming at the highest resolution possible, with as little compression as possible, and maintaining that throughout your workflow. And if you really want that stereo effect, there are ways of turning mono into stereo in post that are far superior in every way to that double-vision nonsense.
So! You've shot your VR piece. You planned out your shots, your camera settings were perfect, your sync matches up, your resolution is huge, your action is impeccably choreographed, and your organization would make even your mother proud. Congratulations. You've just completed the easy part. Now you just have to stitch it, encode proxies, edit it, fine-stitch it, output high-res files without compression, and then clean up your stitch lines with sophisticated 3D compositing software. Oh, and don't forget sound: good sound is critical. But that's for another article.