I'm old enough to remember when the height of 3d technology was getting a set of cheap colour-filter glasses free with a box of cornflakes, and staring at the strange red-green picture on the back of the box as it failed to resolve into a terrifying picture of a T-Rex.
Happily I'm now able to relive these memories by using the same colour filtering techniques inside Unreal Engine 4, and in doing so provide a useful description of how to use UE4's SceneCapture2D and RenderTargets.
All of the code for this project is available in a GitHub repository.
Anaglyph 3d is a method of stereo 3d where the outputs of a pair of stereo cameras are tinted and combined into a single image. The viewer wears a pair of glasses with a different colour filter over each eye, so that each eye sees only the output of one camera and the brain can reconstruct the 3d scene.
The approach in UE4 is oddly similar to that of the traditional camera setup.
In order to quickly have something to render we can start with one of UE4's standard templates. You could apply this technique to any of them, but I'm going to go with the "First Person" template, and as I don't need C++ for this I'll be using Blueprint. I don't need Starter Content either- I really just need something simple here.
As I'm going to be keeping this in GitHub I'll also add the standard UE4 .gitignore to prevent unwanted files being checked in.
The first thing we need to do is to get a pair of cameras, one for each eye, and capture their images separately.
In the FirstPerson/Blueprints folder is the FPS pawn Blueprint called FirstPersonCharacter. This is where we'll begin making changes.
We can start by adding two SceneCapture2D components, which act similarly to cameras but allow us to capture their rendered output as a texture. I'm calling these LeftEyeCapture and RightEyeCapture.
In the existing pawn the scene is viewed through the FirstPersonCamera, and so for now this component can act as a useful locator for our capture pair. Parent both captures to the FirstPersonCamera.
Now in order to get the stereo rendering working cleanly we need to move the two captures apart, positioning them where we want the eyes.
In the FirstPersonCharacter it's actually the Y-Axis that runs across the character and so the cameras need separating on this axis.
Now we just need to know how far apart to position them.
According to this Wikipedia article the distance between a person's eyes varies between roughly 50mm and 70mm. We could write some Blueprint to position the captures according to a variable, but for now we'll just use a fixed 60mm separation- so each eye sits 30mm, or 3 Unreal Units (1UU is 1cm by default), to the side of the camera. This means I now set the location of LeftEyeCapture to 0,-3,0, and RightEyeCapture to 0,3,0.
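Translating that into numbers (a plain Python sketch- the variable names here are mine, just to show the arithmetic, and it assumes UE4's default scale of 1 Unreal Unit = 1cm):

```python
# Total eye separation we want, in millimetres
eye_separation_mm = 60

# UE4's default world scale: 1 Unreal Unit (UU) = 1 cm = 10 mm
MM_PER_UU = 10

# Each capture sits half the separation from the camera, along the Y axis
half_offset_uu = (eye_separation_mm / 2) / MM_PER_UU

left_eye_location = (0, -half_offset_uu, 0)   # (0, -3.0, 0)
right_eye_location = (0, +half_offset_uu, 0)  # (0, 3.0, 0)
```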
Now we have the two SceneCapture2Ds we need to give each of them a special type of texture to store their output in. In UE4 this type of texture is called a Texture Render Target 2D.
We could actually create Texture Render Targets directly from the editor by right-clicking in the Content Browser and, under Create Advanced Asset, choosing Materials and Textures -> Render Target. The problem is that this would require us to specify a fixed size, which we can't do as we need these to exactly match the dimensions of the player's screen.
Instead we go back into the FirstPersonCharacter Blueprint and add two new variables, LeftRenderTarget and RightRenderTarget. Both of these should be Texture Render Target 2D References; give them a Tooltip and Category to keep them tidy.
Now we need to go into the FirstPersonCharacter event graph so that these two variables are filled with textures of the right size, and the two SceneCapture2Ds are set to render into them.
For tidiness we'll do this in a new function called SetupRenderTextureTargets. Here we get the size of the screen, create two Texture Render Targets of the same size, store them in the two variables we've just created, and then plug them into the SceneCapture2Ds.
Now we need to call this when the game starts. In the existing FirstPersonCharacter Blueprint there's a Begin Play node hooked into some VR HUD setup so, while we're unlikely to want to combine anaglyph with VR, we can keep things simple by calling our SetupRenderTextureTargets function before the existing code.
Now we can actually see if everything's working so far. Press Play to start the game running in the editor then press Shift+F1 to free the mouse cursor. Go over to the World Outliner and find the FirstPersonCharacter in the list. This is the actual Pawn the player is controlling.
Select the FirstPersonCharacter and in the Details panel below will be a list of components. Find the RightEyeCapture and LeftEyeCapture components that we added; when you click on either one you should see a tiny preview of what that camera is seeing in its Texture Target field.
If you click between the Details for LeftEyeCapture and RightEyeCapture you may even see that the Texture Target is slightly different for each one, most notably in the gun's position. This is the stereo 3d effect we've been looking for, and is a result of setting the two SceneCapture2Ds to slightly different locations to match the eyes' positions.
For completeness you'd ideally want to call SetupRenderTextureTargets when the screen is resized in order to rebuild the textures to match the new screen size, but for now I'm going to assume the screen is going to remain the same size and be a bit lazy.
Now we have the two outputs as separate textures we need to be able to combine them together by tinting them and then drawing the result onto the player's screen.
For this we're going to use a PostProcess Material. This is a special type of Material which is applied onto the screen after the scene is rendered, and is often used for effects such as depth-of-field, colour grading or similar image-based effects.
We can start this by creating a Material called PP_AnaglyphMerge (I follow Allar's style guide for asset naming hence the slightly odd name).
The first thing to do is to set the Material Domain of the Material to "Post Process" in the Details panel. This tells UE4 that we're building one of those special post-process materials, and greys out all of the material pins except Emissive Color.
Now we can start to write the actual material itself. We're going to be taking both of the texture render targets, converting them to greyscale, and then tinting them by the colour of the two lens filters in the glasses we're using, and adding the results.
We sample the two textures using standard TextureSampleParameter2D nodes, called LeftEyeTex and RightEyeTex. Next we convert them to greyscale. There are many ways to convert an RGB image to greyscale- probably worth a short tutorial of its own- but for now, to prevent confusion, I'll take the simplest: the average of Red, Green, and Blue. The results won't be perfect, but it's easy to follow.
We now have to tint the greyscale by the colour of the two filters. So that we can change the colours we'll have two VectorParams, LeftEyeColor and RightEyeColor, and we'll multiply each by the corresponding greyscale. In order to have a sensible real-world default of a green left filter and a magenta right filter we'll set LeftEyeColor to 0,1,0 and RightEyeColor to 1,0,1.
Finally to combine the two eyes into a single image we'll just add them together and send the result to Emissive Color.
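In case the node graph is hard to follow, the per-pixel arithmetic the Material performs boils down to something like this (a plain Python sketch- the function names are mine, not UE4's):

```python
def greyscale(rgb):
    """Simplest conversion: the average of Red, Green, and Blue."""
    r, g, b = rgb
    return (r + g + b) / 3

def anaglyph_pixel(left_rgb, right_rgb,
                   left_filter=(0, 1, 0),    # green left lens
                   right_filter=(1, 0, 1)):  # magenta right lens
    """Greyscale each eye's pixel, tint it by that lens colour, then add."""
    left_grey = greyscale(left_rgb)
    right_grey = greyscale(right_rgb)
    return tuple(left_grey * lf + right_grey * rf
                 for lf, rf in zip(left_filter, right_filter))

# A white pixel seen by both eyes combines back to white...
print(anaglyph_pixel((1, 1, 1), (1, 1, 1)))  # (1.0, 1.0, 1.0)
# ...while a pixel only the left eye sees comes out green
print(anaglyph_pixel((1, 1, 1), (0, 0, 0)))  # (0.0, 1.0, 0.0)
```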
Now that we have the stereo cameras rendering and an anaglyph material to combine them we need to set things up so that the material is rendered over the main camera.
To do this we'll be creating a new dynamic material instance based on the PP_AnaglyphMerge material, setting it to use the two Render Targets we're using for the two stereo cameras, and finally creating a new set of Post-Process Settings which contain just the anaglyph effect and connecting them to the main camera.
Firstly we'll handle the dynamic material instance. This is essentially a version of a material where we can set the parameters we've built into it, in this case the LeftEyeTex and RightEyeTex. We could also use this to set the two filter colours if we wanted but for now we'll stick to the defaults we set.
Inside FirstPersonCharacter create a new function, SetupAnaglyphDMI. Inside the function add a Create Dynamic Material Instance node and set PP_AnaglyphMerge as the parent to create an instance of our anaglyph material. Drag off the node's Return Value and choose "Promote to Variable" to easily create a variable to store the dynamic material instance in, and rename it to AnaglyphDMI so we know what it is.
We now need to set the two texture parameters built into the material. From the AnaglyphDMI variable create a Set Texture Parameter Value node with the Parameter Name LeftEyeTex and plug our LeftRenderTarget in as the value, then create another Set Texture Parameter Value node from the dynamic material instance, this time setting the RightEyeTex parameter to RightRenderTarget.
Setting up the post-process settings is a little strange, and turns out to be one of those things best built backwards: we create "Make" nodes of various types, working down through the chain until we reach the point where we can provide the dynamic material instance we need.
Create a new function SetupAnaglyphPP inside the FirstPersonCharacter. Drag the FirstPersonCamera over and create a Get node- this is the camera the player is viewing through, and so the one whose settings we need to change. Drag off it and create a Set Post Process Settings node.
Now we need to create the settings, and this is where we start to work backwards. Drag off "Post Process Settings" to see what can be used and choose "Make PostProcessSettings". This creates a node to build the settings we need, but it's unusual in that it doesn't initially have any input pins for us to provide our settings. Click on the node and under Details you'll see a list of pins that can be enabled- check "Blendables", which lets us provide a set of post-process materials to be used.
We keep working backwards. Dragging off the "Blendables" pin we create a "Make WeightedBlendables" node which allows us to create the data structure to contain the post-process materials and weightings to control their strength. This takes an Array as a parameter so we now drag off this and create a "Make Array" node.
We only need one blendable, our anaglyph material, so we can drag off the pin for the first item in the Array and create a "Make WeightedBlendable" node. Finally we have somewhere to use the AnaglyphDMI dynamic material instance we just created, passing it in as the Object with a Weight of 1 so that it completely replaces the camera's standard rendered output.
We now need to call these two new functions and so can go back to the Event Graph. We've already hooked into the Begin Play event with our SetupRenderTextureTargets and can just add these after it, first calling SetupAnaglyphDMI and then SetupAnaglyphPP.
If everything is properly in place you should be able to press Play and see the game world rendered in shades of green and magenta, suitable for viewing with anaglyph glasses.
There is an issue in the editor you should be aware of though, at least as of Unreal Engine 4.13 which I used for this project. We used Get Viewport Size to get the size of the screen, and for some reason this seems not to return the right values if I press "Play" from the main editor window- but it does work if a Blueprint editor or similar is focused when I press Play.
I'm investigating this and will update when I know more.
So far we've been using the greyscale anaglyph approach, but there are other approaches such as anachrome. This is intended to provide better colour fidelity by not converting to greyscale, instead filtering the full RGB image.
Fortunately for us this is pretty simple to achieve in our Material: we simply remove the greyscale conversion and instead multiply the Texture Render Targets directly by the filter colours.
The problem with this is that having removed the greyscale conversion we can no longer do the more old-fashioned anaglyph.
We could create two materials, one with the greyscale conversion and one without, but a cleaner approach is to use Unreal's StaticSwitchParameter. This allows us to toggle between the two approaches on a per-material-instance basis, and because the switch is applied when the material is compiled it doesn't have any runtime performance overhead.
To add this we keep both methods in the material and add a StaticSwitchParameter named UseGreyscale. Connect the result of the greyscale method to the True input, as that's what is used when the switch is enabled, and the newer direct multiply to the False input for when it's disabled.
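To make the behaviour of the switch concrete, here's the same per-pixel maths sketched in Python with a boolean standing in for UseGreyscale (again, the names are mine; note that in UE4 the branch is resolved when the material is compiled, not per pixel):

```python
def anaglyph_pixel(left_rgb, right_rgb, left_filter, right_filter,
                   use_greyscale=True):
    """use_greyscale plays the role of the UseGreyscale StaticSwitchParameter:
    True gives the classic greyscale anaglyph, False the full-colour variant."""
    if use_greyscale:
        left = [sum(left_rgb) / 3] * 3     # average-of-channels greyscale
        right = [sum(right_rgb) / 3] * 3
    else:
        left, right = left_rgb, right_rgb  # keep the full RGB image
    return tuple(l * lf + r * rf
                 for l, lf, r, rf in zip(left, left_filter,
                                         right, right_filter))

green, magenta = (0, 1, 0), (1, 0, 1)
# A pure red pixel in the left eye survives as a dim green in greyscale mode...
print(anaglyph_pixel((1, 0, 0), (0, 0, 0), green, magenta))
# ...but is filtered out entirely in full-colour mode
print(anaglyph_pixel((1, 0, 0), (0, 0, 0), green, magenta,
                     use_greyscale=False))
```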
Now we have the finished material we can choose whether to use the greyscale version or the more colour-rich approach just by toggling the switch.
Note that if you're after proper anachrome you'll need to move the two cameras closer together, and tweak the filter colours slightly too. The cyan filter used is intended to let in a small amount of red light so the eye can pick up colour cues more easily, so you'd want to switch from the pure cyan of 0.0, 1.0, 1.0 to a slightly redder colour such as 0.02, 1.0, 1.0. You'll have to play with this a little though- different sets of glasses need different colours, so there's no one-size-fits-all approach here.
So, we're done. You've seen how SceneCapture2Ds can be used as additional cameras in Blueprint, rendering their output to Texture Render Targets for us, and how a post-process material can be set up to use these for effects such as anaglyph 3d. We've even touched on using StaticSwitchParameter to control Materials in a more advanced way.
There are still things which can be done, though.