When developing 3D applications, sooner or later one must face the problem of writing different combinations of shaders. Developers must support different feature levels and adjust their shaders accordingly. For example, a simple Phong shader might need to be duplicated and modified in order to support diffuse maps, cube maps, various shadow algorithms, and so on.
One of the best-known alternatives is the so-called “Ubershader”, in which several features are turned on or off through boolean flags. The approach I am presenting here is probably overkill for small projects, as it consists of dynamically generating shaders line by line according to a graph. Daedalus is a tool that generates HLSL shaders by analyzing YAML descriptions. In addition, it can also work as a simple graphical frontend for the offline generation of HLSL shader bytecode. You can find the result of my work as an open source project in the GitHub repository. An installer can be downloaded from here.
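To give an idea of what such a YAML description might look like, here is a purely hypothetical sketch. The node names and fields below are my own invention for illustration; Daedalus’s actual schema may differ.

```yaml
# Hypothetical sketch only — not Daedalus's actual schema.
shader:
  type: PixelShader
  featureLevel: PS_4_0   # the Shader Model being targeted
  nodes:
    - name: phong
      type: PhongLighting
      flags: { shadows: false }   # feature flags toggle subgraphs
    - name: result
      type: Result
      input: phong                # the Result node closes the graph
```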
My inspiration was Visual Studio 2012’s Shader Designer, a shader authoring environment available within the IDE that lets you build a shader as the result of a connected graph. Since I mainly develop using C#, this environment was not available to me; so, armed with patience, I developed my own system.
Within my architecture there are two main types: Variables and Nodes. The former wrap HLSL’s variable structure; the latter are the individual components that make up the shader itself. Each shader is defined by a feature level (i.e. the actual Shader Model being targeted), an Input and an Output Structure, and a Result node.
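A minimal sketch of these two types might look like the following. This is illustrative Python, not Daedalus’s actual C# implementation; the class and connector names are assumptions.

```python
from dataclasses import dataclass, field

# A Variable wraps an HLSL variable: a name plus its HLSL type.
@dataclass
class Variable:
    name: str
    hlsl_type: str  # e.g. "float3", "float4x4", "Texture2D"

# A Node is a shader-graph component with named connectors
# pointing at the other nodes it depends on.
@dataclass
class Node:
    name: str
    connectors: dict = field(default_factory=dict)  # connector name -> Node

    def connect(self, connector, node):
        self.connectors[connector] = node

# Wiring a Normal node into a PhongLighting node:
normal = Node("Normal")
phong = Node("PhongLighting")
phong.connect("Normal", normal)
print(phong.connectors["Normal"].name)  # -> Normal
```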
The Result node is the “inverted root” of the shader. Typically its function consists of assigning the return value to the output structure (e.g. the colour in a Pixel Shader). Taking a Phong pixel shader as an example, it typically requires: 1) a diffuse component; 2) a specular component; 3) optionally, a shadow component. Each of these components is exposed as a boolean flag in a “PhongLighting” node. That is, the PhongLighting node provides various boolean properties that turn the associated features on and off. If the Shadows flag is turned on, the shader generation system will, while traversing the graph, factor in the subgraph associated with the calculations of the shadow algorithm.
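As a sketch of how such flag-driven generation might work (illustrative Python only; the flag names and emitted HLSL lines are assumptions, not Daedalus’s actual output):

```python
# When a feature flag is off, the associated subgraph is simply
# skipped by the generator and no code is emitted for it.
class PhongLighting:
    def __init__(self, shadows=False):
        self.shadows = shadows  # the "Shadows" boolean flag

    def generate(self):
        lines = ["float3 color = diffuse + specular;"]
        if self.shadows:
            # Only factored in when the Shadows flag is on.
            lines.insert(0, "float shadow = SampleShadowMap(input.lightPos);")
            lines[1] = "float3 color = shadow * (diffuse + specular);"
        return lines

print(len(PhongLighting(shadows=False).generate()))  # 1 line
print(len(PhongLighting(shadows=True).generate()))   # 2 lines
```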
The PhongLighting node itself requires further information to compute the algorithm. It exposes several connectors to which other nodes, implementing the required functionality or supplying the required data, may be attached. For example, the PhongLighting node has the following connectors:
- a Light node that specifies the light type and position;
- a Material node that specifies the diffuse, specular and ambient colours;
- a LightDirection node that provides the oncoming vector from the light source;
- a ViewDirection node that provides the vector from the viewer’s position;
- a Normal node that provides the normal of the currently rendered fragment.
All these components should be familiar to anyone who has implemented a basic per-pixel Phong shader. The Shader Generation system I have built supplies several node types that provide the required functionality. For example, the first two connectors need to be assigned to two Constant nodes that simply fetch two structure variables from a constant buffer. The next two are a bit more complex and involve nodes that perform mathematical operations between variables. The last is again a Constant node that fetches a value, this time from the Pixel Shader’s input structure.
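The wiring just described can be summarized as follows. This table of connector assignments is an illustrative sketch; the node type names and HLSL expressions are assumptions based on a typical per-pixel Phong implementation, not Daedalus’s exact output.

```python
# Connector -> (node type, data it resolves to) for the PhongLighting node.
phong_connectors = {
    "Light":          ("ConstantNode", "cbPerFrame.light"),
    "Material":       ("ConstantNode", "cbPerObject.material"),
    "LightDirection": ("MathNode", "normalize(light.position - input.worldPos)"),
    "ViewDirection":  ("MathNode", "normalize(cameraPos - input.worldPos)"),
    "Normal":         ("ConstantNode", "input.normal"),
}

for connector, (node_type, source) in phong_connectors.items():
    print(f"{connector}: {node_type} <- {source}")
```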
This is the full list of node categories available in Daedalus v0.2.2:
- Math Operators
- Other Operators
- Function Nodes
- Output Nodes
- various VertexShader output nodes
A built-in validation system ensures that each node can only be connected to a predetermined range of nodes. A TextureSampleNode won’t accept just any variable in its Texture connector, but only a Texture variable, and so on.
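A validation rule of this kind can be sketched as a lookup from (node type, connector) to the set of accepted HLSL types. This is an illustrative sketch of the idea, not Daedalus’s actual validation code.

```python
# Accepted HLSL variable types per (node type, connector) pair.
ACCEPTED = {
    ("TextureSampleNode", "Texture"): {"Texture2D", "TextureCube"},
}

def validate(node_type, connector, hlsl_type):
    """Raise TypeError if hlsl_type is not accepted by this connector."""
    allowed = ACCEPTED.get((node_type, connector))
    if allowed is not None and hlsl_type not in allowed:
        raise TypeError(f"{connector} connector rejects {hlsl_type}")
    return True

print(validate("TextureSampleNode", "Texture", "Texture2D"))  # True
try:
    validate("TextureSampleNode", "Texture", "float3")
except TypeError:
    print("rejected")  # a float3 is not a Texture variable
```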
Once the shader is fully connected and validated, it can be traversed. Daedalus then generates the source code, the required variables and the external methods line by line. It also lets you view the generated source code and build a .ofx object, detailed below.
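The traversal itself can be sketched as a post-order walk from the Result node: each node’s dependencies are emitted before the node itself, and shared nodes are emitted only once. Again, this is an illustrative sketch, not the actual generator.

```python
# Post-order traversal from the Result node, emitting one line per node.
def emit(node, graph, out, seen):
    if node in seen:
        return  # shared dependencies are emitted only once
    seen.add(node)
    for dep in graph.get(node, []):
        emit(dep, graph, out, seen)
    out.append(f"// code for {node}")

graph = {
    "Result": ["PhongLighting"],
    "PhongLighting": ["Normal", "ViewDirection"],
    "ViewDirection": ["Normal"],  # Normal is a shared dependency
}
lines, seen = [], set()
emit("Result", graph, lines, seen)
print(lines)  # dependencies first, Result last
```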
Another feature in Daedalus is the so-called Shader References. Each variable is decorated in the code as requiring particular features from the engine. For example, the above-mentioned ViewDirection connector node will require the engine to supply not just any value, but a Vector3 value that describes the camera’s position in world coordinates.
What I did before using Daedalus was to assign each value from my engine by hand. Now, thanks to these features I’ve implemented, when I load a .ofx object I can iterate through the required features and assign them automatically based on the particular shader’s requirements.
Thus, when the shader graph is traversed, it collects a list of those references by analyzing the variables required by the shader and the Constant Buffers in which these variables are defined. When I load the .ofx object I can rebuild the constant buffers dynamically based on the shader specifications.
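On the engine side, this automatic assignment can be sketched as a dictionary of providers keyed by reference name: loading a .ofx object yields the list of required references, and each one is resolved against the engine. The reference names and scene layout below are illustrative assumptions.

```python
# Engine-side providers for each Shader Reference (names are illustrative).
ENGINE_PROVIDERS = {
    "CameraPosition": lambda scene: scene["camera_pos"],  # Vector3, world space
    "LightPosition":  lambda scene: scene["light_pos"],
}

def bind(references, scene):
    """Resolve each reference collected during traversal to an engine value."""
    return {ref: ENGINE_PROVIDERS[ref](scene) for ref in references}

scene = {"camera_pos": (0.0, 1.0, -5.0), "light_pos": (10.0, 10.0, 0.0)}
print(bind(["CameraPosition"], scene))  # only what this shader requires
```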
Technique Key Parts and Technique Keys
Finally, Technique Key Parts and Technique Keys are Daedalus’ way of distinguishing groups of shaders that work together. A .ofx object is a collection of shaders grouped by the Technique Key they are associated with. Each connector node in the graph is decorated with specific Vertex and Pixel Shader features. For example, does the shader require UV or UVW mapping? Does it have a Diffuse Map? While traversing the shader graph, a Technique Key Part is built that characterizes each shader.
Within Daedalus you can specify several Techniques and assign them (through drag-and-drop) to the shaders that make up the technique. For example, a Phong shader that uses a cube map will require a different Vertex and Pixel Shader than one using a regular texture. The Define Technique screen allows you to predefine what features the Technique Key will require from the engine (and provide). A Technique may thus be assigned to the shaders implementing these features (e.g. the CubeMap technique should be assigned to a vertex shader whose vertex layout includes a TextureUVW component; likewise, the pixel shader should make use of a TextureCube instead of a Texture2D). A single shader can, however, be assigned to multiple different techniques.
Once the .ofx object is built and written to disk, the system will have merged the individual shaders’ Technique Key Parts to form a complete Technique Key. Once loaded in your engine, you can search through the shader collection for a particular technique: it is possible to query the collection for a shader that supports Diffuse Maps, or for one that supports a Cube Map. In the future it will be possible to query the collection for shaders implementing a particular feature level or supporting specific features such as instancing, and so on.
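One way to picture this (a sketch under my own assumptions; the feature flag names and data layout are illustrative, not Daedalus’s actual format) is to treat each Technique Key Part as a set of feature flags, merge the parts into a complete Technique Key, and query the collection by feature:

```python
# Each shader contributes a Technique Key Part (a set of feature flags);
# merging them yields the complete Technique Key for the technique.
vertex_part = {"UVW"}      # vertex layout includes a TextureUVW component
pixel_part = {"CubeMap"}   # pixel shader samples a TextureCube
technique_key = frozenset(vertex_part | pixel_part)

# The .ofx collection maps Technique Keys to techniques.
collection = {
    technique_key: "PhongCubeMap",
    frozenset({"UV", "DiffuseMap"}): "PhongDiffuse",
}

def find(collection, feature):
    """Query the collection for techniques supporting a given feature."""
    return [name for key, name in collection.items() if feature in key]

print(find(collection, "CubeMap"))     # ['PhongCubeMap']
print(find(collection, "DiffuseMap"))  # ['PhongDiffuse']
```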
Thanks for reading!