2D Geometry

Understanding how to create and manipulate geometry is critical to mastering video game graphics. What we see on the screen is a fine combination of art and math that works together to create the simulation our gamers enjoy. For newcomers to game graphics, this section will serve as a brief introduction to the basics of 2D geometry and Direct3D rendering. Although Direct3D is largely associated with 3D graphics, it is also capable of rendering 2D graphics via the graphics hardware.

What Is a Vertex?

Shapes are defined in game graphics as a collection of points connected together via lines whose insides are shaded in by the graphics hardware. These points are referred to as vertices (the plural of vertex), and these lines are referred to as edges. A shape is referred to as a line if it has two connected points or as a polygon if it has three or more connected points. As you move on to more advanced rendering topics (beyond the scope of this book), you’ll undoubtedly come across each of these terms often. See Figure 3.8 for an example of each of these terms.

Figure 3.8. The breakdown of a triangle.


Vertices are not technically drawn but instead provide the graphics hardware with the information necessary to mathematically define a surface for rendering. Of course, it is possible to render points with the Direct3D API and use the position of a vertex as the point’s position, but in reality a vertex is a unit of data used by the graphics card, along with other data, to define the attributes of a larger shape. Although a vertex and a point are not the same thing, they both have a position in common.

A vertex is more than a point. A vertex is actually a series of attributes that defines a region of the shape. In fact, the points in Figure 3.8 could actually refer to the vertices of a triangle being passed to Direct3D for rendering, where each vertex has a host of information the shaders will need to produce an effect. We use vertices to mathematically define the shape, and the graphics hardware uses these properties to draw the shape being specified; which properties we provide depends on the graphics effect being shaded. Common properties of a vertex that we will discuss throughout this book include:

  • Position

  • Color

  • Normal vector

  • Texture coordinate(s)

Other properties that are common in video games include but are not limited to the following:

  • S-tangent (usually used during normal mapping)

  • Bi-normal (also often used for normal mapping)

  • Bone weights and indices (used for skeleton animation)

  • Light map coordinates

  • Per-vertex ambient occlusion factor

  • And much more

The position of a vertex is the only property of a vertex that is not optional. Without a position we cannot define a shape, which means Direct3D cannot draw anything to any of its buffers. A 2D vertex has X and Y values for its position, whereas a 3D vertex has X, Y, and Z values. Most of the time we define the position first and all other properties after it. These properties ultimately depend on what per-vertex data is needed to create a specific rendering effect. For example, texture coordinates are necessary for producing a UV texture mapping effect just like bone animation information is necessary for performing animations within the vertex shader.

Many rendering effects work from per-pixel data instead of per-vertex data, such as per-pixel lighting and normal mapping. But remember that we don’t specify per-pixel data in the form of geometry, so this per-pixel data is calculated using interpolation. Interpolation is the generation of intermediate values between one point and the next. Taking the pixel shader as an example: It receives interpolated data from the vertex shader (or geometry shader if one is present). This interpolated data includes positions, colors, texture coordinates, and all other attributes provided by the previous shader stage. As the pixel shader is invoked by the graphics hardware, the pixels that fall within the shape are shaded.
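To make interpolation concrete, here is a minimal sketch in plain C++ (not Direct3D’s actual rasterizer code; the Color structure and function names are our own) of linearly interpolating a per-vertex color between two vertices. Across a triangle the hardware blends three vertices rather than two, but the idea is the same:

struct Color
{
    float R;
    float G;
    float B;
};

// Returns the value that is t of the way from a to b, where t is in [0, 1].
float Lerp( float a, float b, float t )
{
    return a + t * ( b - a );
}

// Interpolating a color works component by component; the hardware
// generates the equivalent of this for every attribute, for every
// pixel that falls within the shape.
Color LerpColor( const Color& a, const Color& b, float t )
{
    Color c = { Lerp( a.R, b.R, t ), Lerp( a.G, b.G, t ), Lerp( a.B, b.B, t ) };
    return c;
}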

When we do specify per-pixel data, it is often in the form of texture images, such as a normal map texture used to provide per-pixel level normal vectors to alter lighting on a more micro-level to simulate higher amounts of detail across a shape. Another example is the classic texture mapping, which is used to provide per-pixel color data used to shade the surface of a shape.

On a side note, a normal vector (which we’ll dive deeper into in Chapter 6) is a direction. Keep in mind that a vertex is a collection of attributes for a point of a shape, whereas a vector is a direction, with X and Y components for 2D vectors and X, Y, and Z components for 3D vectors. Sometimes you’ll see the terms used interchangeably, but it is important to know that they are not the same. This happens mostly because, code-wise, a vertex that defines just a position and a vector hold the same data and differ only in what they represent in the context you are using them. An example is as follows:

struct Vertex2D
{
   float X;
   float Y;
};


struct Vector2D
{
   float X;
   float Y;
};

In some cases the difference between a vertex with only a position attribute and a vector really lies in our interpretation of the data, and either structure can be used for either purpose as long as we are being consistent. For example, a vector pointing in the direction a bullet is traveling is different from the bullet’s actual position at any point in time, but in code a 3D vertex’s position and a 3D vector are both X, Y, and Z floating-point values. Also, a vector with a magnitude of 1 is known as a unit-length vector, which we’ll discuss more in Chapter 6.
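As a brief preview of Chapter 6, here is a hedged sketch of what magnitude and unit length mean in code (the Magnitude and Normalize helpers are our own, not a Direct3D API):

#include <math.h>

struct Vector3D
{
   float X;
   float Y;
   float Z;
};

// The magnitude (length) of a 3D vector.
float Magnitude( const Vector3D& v )
{
   return sqrtf( v.X * v.X + v.Y * v.Y + v.Z * v.Z );
}

// Dividing each component by the magnitude produces a unit-length
// vector, i.e., a vector whose magnitude is 1.
Vector3D Normalize( const Vector3D& v )
{
   float mag = Magnitude( v );
   Vector3D result = { v.X / mag, v.Y / mag, v.Z / mag };
   return result;
}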

In Direct3D we even have vector structures that we can use with the XNA Math library. Most programmers will use the vector structure to define the position of a vertex, even though a vector is not necessarily a position per se. This is very common, and it is important to be aware of this little tidbit. A vector used in this manner can be considered a direction from the virtual world’s origin (or another frame of reference) to where the vertex is located. Since the origin is usually 0 for X, Y, and Z in 3D space, this vector direction happens to equal the relative or absolute position of the point and hence can be used interchangeably. An example of using vectors to define the attributes of a vertex that uses X, Y, and Z values is as follows:

struct Vector2D
{
   float X;
   float Y;
};


struct Vector3D
{
   float X;
   float Y;
   float Z;
};


struct Vertex3D
{
   Vector3D position;
   Vector3D normal;
   Vector2D textureCoordinate;
};

As you can see, even though a vector is not a point, it can be used as if it were a position, since the direction from the origin located at (0, 0, 0) will equal a vector with the same values as a point’s position. In other words, the context in which a vector is used determines what it ultimately represents.

Definition of a Triangle

If we define a polygon as a shape with three or more vertices, then the polygon with the smallest number of vertices is a triangle. Traditionally we think of a triangle as a shape being made up of three points, such as what is shown in Figure 3.8.

In game graphics we can define an array of these triangles where each triangle is its own independent shape. A collection of triangles is what we use to define meshes, where a mesh is defined as an object made up of a bunch of polygons. A collection of meshes creates a model, where a model is defined as one or more meshes. An example of a mesh can be a head shape of a 3D character, whereas the head, torso, legs, and other body parts (meshes) make up the entire character model. These terms are highly important to understand when learning game graphics. An example can be seen in Figure 3.9 (a mesh being a collection of triangle polygons) and Figure 3.10 (several meshes together forming a model).

Figure 3.9. A mesh.


Figure 3.10. Several meshes forming one model.


An array of individual triangles is known as a triangle list, but there are other arrangements of triangles, known as triangle strips and triangle fans. A triangle strip is an array where the first three vertices define the first triangle, and the fourth vertex, along with the two vertices that came before it (i.e., the second and third vertices), defines the second triangle. This means that adding another triangle to the list is a matter of adding one more vertex instead of three vertices as with a triangle list. This can allow us to save memory, because we don’t need as much memory to represent higher-polygon shapes, but it also means that all triangles must be connected (whereas the triangles of a triangle list are independent of one another and don’t technically have to touch). An example of a triangle strip can be seen in Figure 3.11.

A triangle fan is similar to a triangle strip, but a triangle fan uses the first vertex and the previous vertex along with the new vertex to create the next shape. For example (see Figure 3.12), the first triangle is the first three vertices, the second triangle is the first vertex, the third vertex, and the fourth vertex, and the third triangle is the first vertex, the fourth vertex, and the fifth vertex.

Figure 3.11. An example of a triangle strip.


Figure 3.12. An example of a triangle fan.
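The following small sketch (our own helper functions, not a Direct3D API) summarizes which vertices form triangle i under each of the three arrangements just described:

// Triangle list: each triangle gets its own three vertices.
void ListTriangle( int i, int out[3] )
{
    out[0] = i * 3;
    out[1] = i * 3 + 1;
    out[2] = i * 3 + 2;
}

// Triangle strip: each new vertex reuses the two that came before it.
// (The hardware flips the winding order of every other strip triangle.)
void StripTriangle( int i, int out[3] )
{
    out[0] = i;
    out[1] = i + 1;
    out[2] = i + 2;
}

// Triangle fan: every triangle shares the first vertex.
void FanTriangle( int i, int out[3] )
{
    out[0] = 0;
    out[1] = i + 1;
    out[2] = i + 2;
}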


Another representation we must discuss for polygons is the use of indexed geometry. Indexed geometry refers to storing only unique vertices in the vertex list and using array indexes into that list to define which points make up which triangles. For example, the cube object in Figure 3.9 technically has only eight unique vertices, with one vertex at each corner of the cube. If we used a triangle list we’d have many overlapping vertices at the cube’s corners, causing us to define a vertex list of 36 vertices (3 vertices per triangle × 2 triangles per side × 6 sides = 36).

The main benefit of using indexed geometry comes from models with hundreds, thousands, or more polygons. In such models there will usually be a very high number of triangles that share edges with one another. Remember that an edge is the line between two connected vertices, and a triangle has three edges in total. If our indexed geometry uses 16-bit values for the indices, then a triangle can be defined using three 16-bit integers, which equals 48 bits (6 bytes). Compare this to a 3D vertex that has three floating-point values at 4 bytes per axis, giving us 12 bytes per vertex and 36 bytes per triangle. Since indexed geometry still includes the vertex list of unique vertices, we won’t see any savings until the polygon count is high enough that adding more triangles is cheaper memory-wise using indices than it is by specifying more independent triangles. As the polygon count rises, so does the difference in memory consumed by a model using indexed geometry and one that uses triangle lists. We’ll revisit this discussion in Chapter 6 when we discuss 3D geometry in more detail.
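As a quick sanity check of the cube numbers above (assuming 12-byte vertices and 16-bit indices, as in the text):

// Triangle list: 36 vertices * 12 bytes per vertex.
const int listBytes = 36 * 12;               // 432 bytes

// Indexed: 8 unique vertices plus 36 indices at 2 bytes each.
const int indexedBytes = 8 * 12 + 36 * 2;    // 96 + 72 = 168 bytes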

Vertex Buffers

A buffer is memory of a specific size. If you have an array of 100 chars, then you can say you have a buffer that is 100 bytes in size. If instead of chars we were talking about integers, then you could say you have a buffer of integers, where the buffer is 400 bytes in size (4 bytes per integer × 100 integers = 400 bytes). When dealing with Direct3D, we create buffers for many different reasons, but the first reason we’ll discuss is the creation of what are known as vertex buffers.
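In code, the same arithmetic looks like this (assuming 4-byte integers):

char charBuffer[ 100 ];    // 100 elements * 1 byte  = 100 bytes
int intBuffer[ 100 ];      // 100 elements * 4 bytes = 400 bytes

// sizeof reports the total byte width of a buffer either way.
unsigned int bufferSize = sizeof( intBuffer );   // 400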

A vertex buffer is a Direct3D buffer of the type ID3D11Buffer that is used to store all the vertex data for a mesh. When we create a vertex buffer in Direct3D we are creating an object that resides in an optimal location of memory, such as the video memory of the graphics hardware, chosen by the device driver. When Direct3D renders our objects, it transfers this information across the graphics bus and performs the necessary operations throughout the rendering pipeline that ultimately cause that geometry to be rendered onscreen or not. We say “or not” because Direct3D can ultimately determine that geometry is not visible. Attempting to render a ton of geometry that is not visible can hurt performance, and most advanced commercial 3D video games use techniques to determine beforehand what geometry is visible and submit only the geometry that is visible or potentially visible to the graphics hardware. This is an advanced topic that usually falls under scene management and deals with culling and partitioning algorithms. The fastest geometry to draw is geometry you don’t draw at all, and this is one of the keys for next-generation games.

Let’s take a look at the creation of a vertex buffer, assuming we are defining vertices that specify only a positional attribute, such as the following:

struct VertexPos
{
    XMFLOAT3 pos;
};

XMFLOAT3 is a structure with X, Y, and Z floating-point values within. The name defines what the structure represents: XM refers to XNA Math, FLOAT refers to the data type of the structure’s members, and 3 refers to how many members the structure has. This structure can be used for 3D points, 3D vectors, and so on; once again, what it represents depends on the context in which you use it. Direct3D 11 uses the XNA Math library, whereas previous versions of Direct3D would have used D3DXVECTOR3 for this purpose. Direct3D has long provided us with a highly optimized math library so we don’t have to write one ourselves, but we will cover the details and the math of these common structures and operations in Chapter 6.
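As a brief, hedged example of working with XMFLOAT3 (the math functions themselves are covered in Chapter 6), data is typically stored in an XMFLOAT3 and loaded into the SIMD-friendly XMVECTOR type to operate on:

#include <windows.h>
#include <xnamath.h>

void Example( )
{
    // XMFLOAT3 is the storage type holding the X, Y, and Z floats.
    XMFLOAT3 pos( 1.0f, 2.0f, 3.0f );

    // Load it into an XMVECTOR to do math with it.
    XMVECTOR v = XMLoadFloat3( &pos );

    // For example, normalize the vector to unit length...
    v = XMVector3Normalize( v );

    // ...and store the result back into the XMFLOAT3.
    XMStoreFloat3( &pos, v );
}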

Assuming we have a valid Direct3D device created, we could create a vertex buffer with a simple triangle using the following code as an example:

VertexPos vertices[] =
{
    XMFLOAT3(  0.5f,  0.5f, 0.5f ),
    XMFLOAT3(  0.5f, -0.5f, 0.5f ),
    XMFLOAT3( -0.5f, -0.5f, 0.5f )
};

D3D11_BUFFER_DESC vertexDesc;
ZeroMemory( &vertexDesc, sizeof( vertexDesc ) );

vertexDesc.Usage = D3D11_USAGE_DEFAULT;
vertexDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
vertexDesc.ByteWidth = sizeof( VertexPos ) * 3;

D3D11_SUBRESOURCE_DATA resourceData;
ZeroMemory( &resourceData, sizeof( resourceData ) );
resourceData.pSysMem = vertices;

ID3D11Buffer* vertexBuffer;
HRESULT result = d3dDevice_->CreateBuffer( &vertexDesc, &resourceData,
    &vertexBuffer );

First you’ll notice we define the vertex list, which has three vertices that define a single triangle. Next we create the buffer descriptor object. The buffer descriptor, of the type D3D11_BUFFER_DESC, is used to provide the details of the buffer we are creating, which is important since technically we could be creating another type of buffer other than a vertex buffer. The buffer description has the following structure and members:

typedef struct D3D11_BUFFER_DESC {
    UINT ByteWidth;
    D3D11_USAGE Usage;
    UINT BindFlags;
    UINT CPUAccessFlags;
    UINT MiscFlags;
    UINT StructureByteStride;
} D3D11_BUFFER_DESC;
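For instance, here is a hedged sketch of describing a dynamic vertex buffer that the CPU can rewrite each frame, which exercises the CPUAccessFlags member (contrast this with the default-usage buffer created above):

D3D11_BUFFER_DESC dynamicDesc;
ZeroMemory( &dynamicDesc, sizeof( dynamicDesc ) );

dynamicDesc.Usage = D3D11_USAGE_DYNAMIC;              // GPU reads, CPU writes
dynamicDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;     // used as a vertex buffer
dynamicDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;  // allows Map/Unmap writes
dynamicDesc.ByteWidth = sizeof( VertexPos ) * 3;      // three vertices, as before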

Next we create a subresource. Subresources are used in this case to pass the vertex data to the buffer’s creation function so that the buffer is filled with this data. We can optionally pass null for the data, which will create an empty buffer, but in this case we already know what data we want to store in the buffer, and the use of a D3D11_SUBRESOURCE_DATA object allows us to provide it. The D3D11_SUBRESOURCE_DATA structure has the following members:

typedef struct D3D11_SUBRESOURCE_DATA {
   const void* pSysMem;
   UINT SysMemPitch;
   UINT SysMemSlicePitch;
} D3D11_SUBRESOURCE_DATA;

The pSysMem member of the D3D11_SUBRESOURCE_DATA structure is the pointer to the initialization memory, or in our case the memory we are sending to fill the buffer with. The SysMemPitch and SysMemSlicePitch members are used for texture images: SysMemPitch is the distance in bytes from the beginning of one line of a texture to the next line, and SysMemSlicePitch is the distance in bytes from one depth level to the next, which is used for 3D textures.
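For example, here is a hedged sketch of filling out the structure for a 256 × 256 texture with 4 bytes per texel (the texel array is hypothetical); for a vertex buffer the two pitch members are simply ignored:

unsigned char* texels = new unsigned char[ 256 * 256 * 4 ];   // hypothetical image data

D3D11_SUBRESOURCE_DATA textureData;
ZeroMemory( &textureData, sizeof( textureData ) );

textureData.pSysMem = texels;         // start of the image data
textureData.SysMemPitch = 256 * 4;    // bytes from one row to the next
textureData.SysMemSlicePitch = 0;     // used only for 3D textures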

With the buffer descriptor and the sub-resource data we can create the buffer by simply calling a single Direct3D device function called CreateBuffer. CreateBuffer has the following function prototype and takes as parameters the buffer description, the sub-resource data (optionally), and the pointer for the ID3D11Buffer object that will be created as our vertex buffer, as defined by the descriptor.

HRESULT CreateBuffer(
    const D3D11_BUFFER_DESC* pDesc,
    const D3D11_SUBRESOURCE_DATA* pInitialData,
    ID3D11Buffer** ppBuffer
);

If CreateBuffer succeeds, we can draw the geometry in the buffer at any point.

Input Layout

When we send geometry to the graphics card we are sending it a chunk of data. In order for Direct3D to know what the various attributes are, their ordering, and their sizes, we use what is known as an input layout to tell the API about the vertex layout of the geometry we are about to draw.

In Direct3D we use an array of D3D11_INPUT_ELEMENT_DESC elements to describe the vertex layout of a vertex structure. The D3D11_INPUT_ELEMENT_DESC structure has the following elements:

typedef struct D3D11_INPUT_ELEMENT_DESC {
    LPCSTR SemanticName;
    UINT SemanticIndex;
    DXGI_FORMAT Format;
    UINT InputSlot;
    UINT AlignedByteOffset;
    D3D11_INPUT_CLASSIFICATION InputSlotClass;
    UINT InstanceDataStepRate;
} D3D11_INPUT_ELEMENT_DESC;

The first member, the semantic name, is a string that describes the purpose of the element. For example one element will be the vertex’s position, and therefore its semantic will be "POSITION". We could also have an element for the vertex’s color by using "COLOR", a normal vector by using "NORMAL", and so forth. The semantic binds the element to an HLSL shader’s input or output variable, which we’ll see later in this chapter.

The second member is the semantic index. A vertex can have multiple elements using the same semantic name but store different values. For example, if a vertex can have two colors, then we can use a semantic index of 0 for the first color and 1 for the second. More commonly, a vertex can have multiple texture coordinates, which can occur when UV texture mapping and light mapping are being applied at the same time, for example.

The third member is the format of the element. For example, for a position with X, Y, and Z floating-point axes, the format we would use would be DXGI_FORMAT_R32G32B32_FLOAT, where 32 bits (4 bytes) are reserved for each of the R, G, and B components. Although the format has R, G, and B in its name, it can be used for the X, Y, and Z. Don’t let the color notation blind you to the fact that these formats are used for more than just colors.

The fourth member is the input slot, which is used to specify which vertex buffer the element is found in. In Direct3D you can bind and pass multiple vertex buffers at the same time. We can use the input slot member to specify the array index of the vertex buffer this element is found in.

The fifth member is the aligned byte offset value, which is used to tell Direct3D the starting byte offset into the vertex buffer where it can find this element.

The sixth member is the input slot class, which is used to describe whether the element is to be used for each vertex (per vertex) or for each instance (per object). Per-object attributes deal with a more advanced topic known as instancing, which is a technique used to draw multiple objects of the same mesh with a single draw call, a technique used to greatly improve rendering performance where applicable.

The last member is the instance data step rate value, which is used to tell Direct3D how many instances to draw using the same per-instance data before moving to the next element in the buffer. For elements that contain per-vertex data, this value must be 0.
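Putting these members together, the following is a hedged sketch of a layout with a position, two sets of texture coordinates (note the semantic indices 0 and 1), and a hypothetical per-instance color used with instancing:

D3D11_INPUT_ELEMENT_DESC layout[] =
{
    // position: three 32-bit floats at byte offset 0 of vertex buffer 0
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
       D3D11_INPUT_PER_VERTEX_DATA, 0 },

    // first set of texture coordinates: semantic index 0, byte offset 12
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12,
       D3D11_INPUT_PER_VERTEX_DATA, 0 },

    // second set (e.g., light map coordinates): semantic index 1, offset 20
    { "TEXCOORD", 1, DXGI_FORMAT_R32G32_FLOAT, 0, 20,
       D3D11_INPUT_PER_VERTEX_DATA, 0 },

    // per-instance color pulled from a second vertex buffer (slot 1),
    // advancing to the next color after each instance is drawn
    { "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0,
       D3D11_INPUT_PER_INSTANCE_DATA, 1 }
};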

An input layout uses the type of ID3D11InputLayout. Input layouts are created with a call to the Direct3D device function CreateInputLayout. The CreateInputLayout function has the following function prototype:

HRESULT CreateInputLayout(
    const D3D11_INPUT_ELEMENT_DESC* pInputElementDescs,
    UINT NumElements,
    const void* pShaderBytecodeWithInputSignature,
    SIZE_T BytecodeLength,
    ID3D11InputLayout** ppInputLayout
);

The first parameter to the CreateInputLayout function is the array of elements in the vertex layout, while the second parameter is the number of elements in that array.

The third parameter is the compiled vertex shader code with the input signature that will be validated against the array of elements, and the fourth parameter is the size of the shader’s bytecode. The vertex shader’s input signature must match our input layout, and this function call will fail if it does not.

The final parameter is the pointer of the object that will be created with this function call.

A vertex shader is compiled code executed on the GPU. A vertex shader is executed for each vertex that’s processed by the device. There are various types of shaders that Direct3D supports, each of which is discussed in more detail in Chapter 7. Direct3D requires shaders for rendering geometry, and therefore we must encounter them before we dive deep into their inner workings.

Below is an example of creating a vertex shader, which we’ll need before we can create the input layout, since the vertex shader’s signature must match the input layout:

ID3D11VertexShader* solidColorVS;
ID3D11PixelShader* solidColorPS;
ID3D11InputLayout* inputLayout;

ID3DBlob* vsBuffer = 0;

DWORD shaderFlags = D3DCOMPILE_ENABLE_STRICTNESS;

#if defined( DEBUG ) || defined( _DEBUG )
    shaderFlags |= D3DCOMPILE_DEBUG;
#endif

ID3DBlob* errorBuffer = 0;
HRESULT result;

result = D3DX11CompileFromFile( "sampleShader.fx", 0, 0, "VS_Main", "vs_4_0",
    shaderFlags, 0, 0, &vsBuffer, &errorBuffer, 0 );

if( FAILED( result ) )
{
    if( errorBuffer != 0 )
    {
        OutputDebugStringA( ( char* )errorBuffer->GetBufferPointer( ) );
        errorBuffer->Release( );
    }

    return false;
}

if( errorBuffer != 0 )
    errorBuffer->Release( );

result = d3dDevice_->CreateVertexShader( vsBuffer->GetBufferPointer( ),
    vsBuffer->GetBufferSize( ), 0, &solidColorVS );

if( FAILED( result ) )
{
    if( vsBuffer )
        vsBuffer->Release( );

    return false;
}


To begin, we load the vertex shader from a text file and compile it into bytecode. You can optionally start with already-compiled bytecode, or you can allow Direct3D to compile the shader for you during startup, which is acceptable for the demos throughout this book. Compiling a shader is done with a call to the D3DX11CompileFromFile function. This function has the following prototype:

HRESULT D3DX11CompileFromFile(
    LPCTSTR pSrcFile,
    const D3D10_SHADER_MACRO* pDefines,
    LPD3D10INCLUDE pInclude,
    LPCSTR pFunctionName,
    LPCSTR pProfile,
    UINT Flags1,
    UINT Flags2,
    ID3DX11ThreadPump* pPump,
    ID3D10Blob** ppShader,
    ID3D10Blob** ppErrorMsgs,
    HRESULT* pHResult
);

The first parameter of the D3DX11CompileFromFile function is the path of the HLSL shader code to be loaded and compiled.

The second parameter is the global macros within the shader’s code. Macros in an HLSL shader work the same way they do in C/C++. An HLSL macro is defined on the application side using the type D3D_SHADER_MACRO, and an example of defining a macro called "AGE" and giving it the value of 18 can be seen as follows (note that the array must end with a null macro):

const D3D_SHADER_MACRO defineMacros[] =
{
    { "AGE", "18" },
    { 0, 0 }    // null macro terminating the array
};

The third parameter of the D3DX11CompileFromFile function is an optional parameter for handling #include statements that exist within the HLSL shader file. This interface is mainly used to specify behavior for opening and closing files that are included in the shader source.

The fourth parameter is the entry function name for the shader you are compiling. A file can have multiple shader types (e.g., vertex, pixel, geometry, etc.) and many functions for various purposes. This parameter is important for telling the compiler which of these potentially many functions serve as the entry point to the shader we are compiling.

The fifth parameter specifies the shader model. For our purposes we’ll be using either vs_4_0 or vs_5_0 for vertex shader 4.0 or 5.0, respectively. In order to use shader model 5.0, you must have a DirectX 11–supported graphics unit, whereas to use shader model 4.0 you’ll need a DirectX 10 and above graphics unit. We’ll cover shaders and shader models in more detail in Chapter 7.

The sixth parameter of the D3DX11CompileFromFile is the compile flags for the shader code and is used to specify compile options during compilation. The compile flags are specified using the following macros:

  • D3D10_SHADER_AVOID_FLOW_CONTROL— disables flow control whenever possible.

  • D3D10_SHADER_DEBUG— inserts debugging information with the compiled shader.

  • D3D10_SHADER_ENABLE_STRICTNESS— disallows legacy syntax.

  • D3D10_SHADER_ENABLE_BACKWARDS_COMPATIBILITY —allows older syntax to compile to shader 4.0.

  • D3D10_SHADER_FORCE_VS_SOFTWARE_NO_OPT— forces vertex shaders to compile to the next highest supported version.

  • D3D10_SHADER_FORCE_PS_SOFTWARE_NO_OPT— forces pixel shaders to compile to the next highest supported version.

  • D3D10_SHADER_IEEE_STRICTNESS— enables strict IEEE rules for compilation.

  • D3D10_SHADER_NO_PRESHADER— disables the compiler from pulling out static expressions.

  • D3D10_SHADER_OPTIMIZATION_LEVEL0 (level 0 through 3)—used to set the optimization level, where level 0 produces the slowest code and level 3 produces the most optimized.

  • D3D10_SHADER_PACK_MATRIX_ROW_MAJOR— used to specify that matrices are declared using a row major layout.

  • D3D10_SHADER_PACK_MATRIX_COLUMN_MAJOR— used to specify that matrices are declared using column major layout.

  • D3D10_SHADER_PARTIAL_PRECISION— forces computations to use partial precision, which can lead to performance increases on some hardware.

  • D3D10_SHADER_PREFER_FLOW_CONTROL— tells the compiler to prefer flow control whenever it is possible.

  • D3D10_SHADER_SKIP_OPTIMIZATION— used to completely skip optimizing the compiled code.

  • D3D10_SHADER_SKIP_VALIDATION— used to skip the device validation, which should only be used for shaders that have already passed the device validation process in a previous compilation.

  • D3D10_SHADER_WARNINGS_ARE_ERRORS— used to treat warnings as errors.

The seventh parameter of the D3DX11CompileFromFile is the effect file flags. This is only set if we are compiling a shader using an effect file and will be discussed in Chapter 7. The effect file flags can be set to one or more of the following:

  • D3D10_EFFECT_COMPILE_CHILD_EFFECT— allows us to compile to a child effect.

  • D3D10_EFFECT_COMPILE_ALLOW_SLOW_OPS— disables performance mode.

  • D3D10_EFFECT_SINGLE_THREADED— disables synchronizing with other threads loading into the effect pool.

The eighth parameter of the D3DX11CompileFromFile is a pointer to a thread pump. By specifying a value of null for this parameter, the function will return when the compilation is complete. This parameter deals with multithreading, which is a hot and advanced topic in game development. Using a thread allows us to load a shader asynchronously while we continue code execution.

The ninth parameter of the D3DX11CompileFromFile is the out address to memory where the compiled shader will reside. This includes any debug and symbol-table information for the compiled shader.

The tenth parameter of the D3DX11CompileFromFile is the out address to memory where any compilation errors and warnings will be stored. This object will be null if there are no errors, but if there are we can use this to report what the errors were so we can fix them.

The eleventh parameter of the D3DX11CompileFromFile is the return value for the thread pump. If the thread pump parameter (the eighth one) is not null, then this parameter must be a valid memory location until the asynchronous execution completes.

With the compiled shader code we can create a vertex shader with a call to the Direct3D device’s CreateVertexShader, which takes as parameters the buffer for the compiled code, its size in bytes, a pointer to the class linkage type, and the pointer to the vertex shader object we are creating. The function prototype for the CreateVertexShader function is as follows:

HRESULT CreateVertexShader(
    const void* pShaderBytecode,
    SIZE_T BytecodeLength,
    ID3D11ClassLinkage* pClassLinkage,
    ID3D11VertexShader** ppVertexShader
);

Next is to specify the layout of the vertex elements. In our simple vertex structure we are using just a vertex position, so we specify the "POSITION" semantic with a semantic index of 0 (since it is the first and only element using this semantic), a format that specifies its X, Y, and Z values are 32 bits each, an input slot of 0, a byte offset of 0, an input slot class of per-vertex (since our positions are given for each vertex), and an instance step rate of 0 (since we are not using instancing).

The input layout itself is created with a call to CreateInputLayout, which we discussed previously in this section. An example of using the created vertex shader to create the input layout can be seen as follows:

D3D11_INPUT_ELEMENT_DESC vertexLayout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
       D3D11_INPUT_PER_VERTEX_DATA, 0 }
};

unsigned int totalLayoutElements = ARRAYSIZE( vertexLayout );

HRESULT result = d3dDevice_->CreateInputLayout( vertexLayout,
    totalLayoutElements, vsBuffer->GetBufferPointer( ),
    vsBuffer->GetBufferSize( ), &inputLayout );

vsBuffer->Release( );

if( FAILED( result ) )
{
    return false;
}

Just to complete things, we also will usually need to load the pixel shader when working with Direct3D 10 and 11. An example of loading the pixel shader can be seen as follows, which looks much like what we’ve done with the vertex shader:

ID3DBlob* psBuffer = 0;
ID3DBlob* errorBuffer = 0;

HRESULT result;

result = D3DX11CompileFromFile( "sampleShader.fx", 0, 0, "PS_Main", "ps_4_0",
    shaderFlags, 0, 0, &psBuffer, &errorBuffer, 0 );
if( FAILED( result ) )
{
    if( errorBuffer != 0 )
    {
        OutputDebugStringA( ( char* )errorBuffer->GetBufferPointer( ) );
        errorBuffer->Release( );
    }

    return false;
}

if( errorBuffer != 0 )
    errorBuffer->Release( );

result = d3dDevice_->CreatePixelShader( psBuffer->GetBufferPointer( ),
    psBuffer->GetBufferSize( ), 0, &solidColorPS );

psBuffer->Release( );

if( FAILED( result ) )
{
    return false;
}


Drawing a 2D Triangle

Rendering is the heart of all that we’ve been working toward. To render geometry in Direct3D, we generally must set up the input assembly, bind our shaders and other assets (such as textures), and draw each mesh. To set the input assembly we’ll start by examining Direct3D’s IASetInputLayout, IASetVertexBuffers, and IASetPrimitiveTopology.

The IASetInputLayout function of the Direct3D context object is used to bind the vertex layout that we created when we called the device’s CreateInputLayout function. This is done each time we are about to render geometry that uses a specific input layout, and the IASetInputLayout function takes as its single parameter the ID3D11InputLayout object.

The IASetVertexBuffers function is used to set one or more vertex buffers and has the following function prototype:

void IASetVertexBuffers(
    UINT StartSlot,
    UINT NumBuffers,
    ID3D11Buffer* const* ppVertexBuffers,
    const UINT* pStrides,
    const UINT* pOffsets
);

The first parameter of the IASetVertexBuffers function is the starting slot to bind the buffer. The first buffer in the array of buffers you are passing is placed in this slot while the subsequent buffers are placed implicitly after.

The second parameter of the IASetVertexBuffers function is the number of buffers that are being set, while the third parameter is an array of one or more buffers being set.

The fourth parameter is the vertex stride, which is the size in bytes of each vertex, while the last parameter is the offset in bytes from the start of the buffer to the start of the first element of a vertex. The stride and offset parameters must specify a value for each vertex buffer being set and therefore can be arrays of values if there are multiple buffers, as the sketch below shows.
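For example, here is a hedged sketch of binding two vertex buffers at once; the buffer names (positionBuffer, texCoordBuffer) are hypothetical:

// Two buffers starting at slot 0, each with its own stride and offset.
ID3D11Buffer* buffers[2] = { positionBuffer, texCoordBuffer };
unsigned int strides[2] = { sizeof( XMFLOAT3 ), sizeof( XMFLOAT2 ) };
unsigned int offsets[2] = { 0, 0 };

d3dContext_->IASetVertexBuffers( 0, 2, buffers, strides, offsets );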

The IASetPrimitiveTopology function is used to tell Direct3D what type of geometry we are rendering. For example, if we are rendering a triangle list we would use the flag D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST as the function’s parameter, or if we wanted to use triangle strips we would use D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP. There are about 42 values that can be used, most of which deal with control points for more advanced geometry, and the full list can be found in the DirectX documentation.

After setting the input assembler we can set the shaders. Later in this book we’ll look at how to apply other types of shaders (e.g., geometry shaders), but for now we’ll focus on vertex and pixel shaders. A vertex shader is set by calling the Direct3D context’s VSSetShader function, and a pixel shader is set by calling PSSetShader. Both functions take as parameters the shader being set, a pointer to an array of class instances, and the total number of class instances being set. We’ll discuss class instance interfaces in Chapter 7.

Once we’ve set and bound all of the necessary data for our geometry, the last step is to draw it by calling the Draw function. The Draw function of the Direct3D context object takes as parameters the total number of vertices in the vertex array and the start vertex location, which can act as an offset into the vertex buffer where you wish to begin drawing.

An example of rendering geometry is as follows:

float clearColor[4] = { 0.0f, 0.0f, 0.25f, 1.0f };
d3dContext_->ClearRenderTargetView( backBufferTarget, clearColor );

unsigned int stride = sizeof( VertexPos );
unsigned int offset = 0;

d3dContext_->IASetInputLayout( inputLayout );
d3dContext_->IASetVertexBuffers( 0, 1, &vertexBuffer_, &stride, &offset );
d3dContext_->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST );

d3dContext_->VSSetShader( solidColorVS, 0, 0 );
d3dContext_->PSSetShader( solidColorPS, 0, 0 );
d3dContext_->Draw( 3, 0 );

swapChain_->Present( 0, 0 );


Calling the swap chain’s Present function allows us to present the rendered image to the screen. The Present function takes as parameters the sync interval and the presentation flags. The sync interval can be 0 to present immediately, or it can be a value that states after which vertical blank we want to present (for example, 3 means after the third vertical blank). The flags can be any of the DXGI_PRESENT values, where 0 means to present a frame from each buffer to the output, DXGI_PRESENT_DO_NOT_SEQUENCE presents a frame from each buffer to the output while using the vertical blank synchronization, DXGI_PRESENT_TEST does not present to the output (which can be useful for testing and error checking), and DXGI_PRESENT_RESTART tells the driver to discard any outstanding requests to Present.
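For example, presenting with and without vertical synchronization looks like this:

// Present immediately, without waiting for a vertical blank:
swapChain_->Present( 0, 0 );

// Or synchronize presentation to the first vertical blank (vsync):
swapChain_->Present( 1, 0 );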

In Chapter 6 we’ll discuss how to draw indexed geometry when we move to the topic of 3D rendering. Indexed geometry doesn’t serve much use in 2D scenes, but when we move to 3D it will be very important to cover it.
