3D Tank Shooter
Solo Project - Personal Engine
About TankGame
TankGame is a 3D tank shooter where 3 enemy bases continuously spawn minions, up to a cap of 75 enemies. Minions swarm toward the player and damage them on contact. To win, the player must destroy all bases and enemies without dying. This project pushed me to expand my engine's 3D capabilities - over the course of two months of development, I added a forward rendering pipeline, material definitions, shadows, a transform class with parenting capabilities similar to Unity's, a MeshBuilder that can combine primitive shapes into a single mesh, and a particle system. In addition to the improvements to my engine, the project allowed me to explore some simple 3D physics, map generation, and AI.
Player Movement

TankGame uses a single-point physics system for player movement, where the tank is always "stuck" to the surface of the terrain and oriented with the normal of the terrain at that point. To do this, I do all movement on the player's XZ position, and update the tank's 3D position from it every frame. The Y (vertical) position is determined by the height of the terrain at that point, which is an interpolation of the heights at the four vertices of the current face. Similarly, the orientation is the interpolation of the four vertices' normals. I want to preserve the forward orientation of the tank for gameplay feel, so I cross the terrain's normal (which is the tank's new "up" vector) with its existing forward vector. This gives me a new right vector for the tank, and from there, I can cross this new right with the normal to get the final forward vector, which might be pitched slightly up or down from the existing forward. This means that as the player moves forward with "W", they move predictably across the terrain without drifting left and right as they go over hills, while the body of the tank rolls to match the terrain.
void Player::SetWorldPositionFromXZPosition()
{
    //get the world position - our XZ position at the world's height + a little padding for feel
    Vector3 worldPos = Vector3(m_positionXZ.x, GetHeightAtCurrentPos() + RENDERABLE_OFFSET, m_positionXZ.y);

    //normal is the interpolated normal of the terrain at our XZ position
    Vector3 normal = g_theGame->m_currentMap->GetNormalAtPosition(m_positionXZ);

    //want to maintain forward of the tank - cross this with our new up
    Vector3 forward = GetForward();
    Vector3 newUp = normal.GetNormalized();
    Vector3 newRight = Cross(newUp, forward).GetNormalized();
    Vector3 newForward = Cross(newRight, newUp).GetNormalized();

    //the new transform of the tank
    Matrix44 mat = Matrix44(newRight, newUp, newForward, worldPos);

    //set the transform of the renderable (with orientation) and the position of the spherical collider and shadow camera transform
    m_renderable->m_transform.SetLocalMatrix(mat);
    m_collider.SetPosition(worldPos);
    m_shadowCameraTransform->SetLocalPosition(worldPos + m_shadowCameraOffset);
}
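GetHeightAtCurrentPos() and GetNormalAtPosition() both sample the terrain at the tank's XZ position. As a rough standalone sketch of the height lookup, here's a bilinear interpolation over a row-major height grid - the grid layout and names are assumptions, not the engine's actual API:

#include <cmath>
#include <vector>

// Bilinearly interpolate the terrain height at an XZ position, assuming a
// row-major grid of vertex heights with 'gridWidth' vertices per row and
// 'tileSize' spacing, and assuming the position lies inside the grid.
float GetInterpolatedHeight(const std::vector<float>& heights, int gridWidth,
                            float tileSize, float x, float z)
{
    //which face of the terrain the point falls on
    int ix = (int)std::floor(x / tileSize);
    int iz = (int)std::floor(z / tileSize);
    //fractional position within that face, in [0,1)
    float fx = (x / tileSize) - (float)ix;
    float fz = (z / tileSize) - (float)iz;
    //heights at the face's four corner vertices
    float h00 = heights[ iz      * gridWidth +  ix     ];
    float h10 = heights[ iz      * gridWidth + (ix + 1)];
    float h01 = heights[(iz + 1) * gridWidth +  ix     ];
    float h11 = heights[(iz + 1) * gridWidth + (ix + 1)];
    //interpolate along x on both edges of the face, then along z
    float heightAtMinZ = h00 + (h10 - h00) * fx;
    float heightAtMaxZ = h01 + (h11 - h01) * fx;
    return heightAtMinZ + (heightAtMaxZ - heightAtMinZ) * fz;
}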
Player Aiming

In TankGame, the tank always aims at the target at the center of the screen, indicated by the red square. This target is the hit point of a raycast along the camera's forward. Once this position is identified, the tank's turret turns to look towards it every frame. The raycast is a basic step-and-sample algorithm, with some improvements to find a relatively accurate hit point with a small number of steps. The raycast hits if it is inside the sphere collider of an enemy or the box collider of a base, or if it passes beneath the terrain, which can be determined from the height of the terrain at the XZ position of each step. Once a step determines that a hit has occurred, I narrow down the actual hit position by looking at the midpoint of the last position before the hit and the hit position, and checking whether that point is a hit. Using this binary-search-style algorithm, I can keep taking the midpoint of the most recent non-hit position and the most recent hit position to converge on an accurate hit position very quickly. This allows me to use a fairly large initial step size (about 20% of a terrain tile) and still get an accurate position for the barrel's target. While the large step size does miss hits at narrow corners of bases, enemies, and terrain, this tradeoff is barely noticeable in a fast-moving game like this.
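A standalone sketch of that step-and-refine loop, with the collider and terrain checks abstracted into a hit predicate (the predicate and names are assumptions, not the engine's actual raycast):

#include <functional>

// March along the ray in coarse steps; 'isHitAtT' answers whether the point
// at distance t is inside an enemy's sphere collider, a base's box collider,
// or beneath the terrain. On the first coarse hit, binary-search between the
// last miss and that hit to converge on the contact distance.
float RaycastRefined(const std::function<bool(float)>& isHitAtT,
                     float maxDist, float stepSize, int refineSteps = 8)
{
    float lastMissT = 0.f;
    for (float t = stepSize; t <= maxDist; t += stepSize){
        if (isHitAtT(t)){
            float lo = lastMissT; //most recent non-hit distance
            float hi = t;         //most recent hit distance
            for (int i = 0; i < refineSteps; i++){
                float mid = .5f * (lo + hi);
                if (isHitAtT(mid)) hi = mid; //contact is at or before mid
                else               lo = mid; //contact is after mid
            }
            return hi; //refined hit distance along the ray
        }
        lastMissT = t;
    }
    return -1.f; //no hit within maxDist
}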
Terrain Generation

The terrain in TankGame is generated from a noise texture, which determines the height of the terrain. Given the desired size of the map, the min and max height, the number of chunks, and the number of planes per chunk, the position of each vertex of the terrain can be determined. The provided extents of the map are divided into an even grid of vertices, where the height of each vertex is determined by the value of the red channel at the image's corresponding UVs. Once all of the heights have been found, a mesh is constructed from these vertices; the terrain can also be split into chunks for future culling. In addition to constructing the mesh, the normal at each vertex can be determined by crossing the directions to adjacent vertices. This normal is used for lighting and for orienting the player tank.
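As a rough illustration of those two steps - height from the red channel, normal from crossing directions to neighbors - here's a standalone sketch; the grid layout and names are assumptions rather than the engine's actual code:

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Cross(Vec3 a, Vec3 b){
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static Vec3 Normalized(Vec3 v){
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Height from the red channel of the noise texture, remapped into [minH, maxH].
// 'red' is the byte sampled at the vertex's corresponding UV.
float HeightFromRed(unsigned char red, float minH, float maxH){
    return minH + (red / 255.f) * (maxH - minH);
}

// Normal at an interior vertex: cross the directions to the neighboring
// vertices along z and x. 'heights' is row-major, 'w' vertices per row,
// 'tile' is the spacing between adjacent vertices.
Vec3 VertexNormal(const std::vector<float>& heights, int w, float tile, int ix, int iz){
    float hL = heights[iz * w + (ix - 1)], hR = heights[iz * w + (ix + 1)];
    float hD = heights[(iz - 1) * w + ix], hU = heights[(iz + 1) * w + ix];
    Vec3 acrossX = { 2.f * tile, hR - hL, 0.f };
    Vec3 acrossZ = { 0.f, hU - hD, 2.f * tile };
    return Normalized(Cross(acrossZ, acrossX)); //ordered so the normal points up (+y)
}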
After the terrain's mesh is generated, a plane for the water is added 33% of the way between the minimum and maximum heights of the terrain. This plane is alpha-blended, and its shader scrolls the UVs of the water texture to simulate "moving" water.
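The scroll itself is just a time-based UV offset, expressed here in C++ for illustration (the scroll speeds are made up):

#include <cmath>

// Slide the sample coordinates by a constant velocity each frame; fmod keeps
// them in [0,1) so the water texture wraps seamlessly.
void ScrollWaterUV(float& u, float& v, float deltaSeconds)
{
    const float SCROLL_U = 0.03f, SCROLL_V = 0.01f; //assumed scroll speeds
    u = std::fmod(u + SCROLL_U * deltaSeconds, 1.f);
    v = std::fmod(v + SCROLL_V * deltaSeconds, 1.f);
}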
Swarming AI

In TankGame, bases continuously spawn enemy minions up to a cap. These enemies swarm towards the player in large clusters, using a simple swarming algorithm. Each frame, each minion computes the direction it should move by weighting the different behaviors it should exhibit to swarm. Minions should cluster towards nearby minions, but also stay far enough away that they aren't overlapping. Additionally, the clusters should attempt to move in a unified direction, so each minion's forward direction is weighted by its neighboring minions' forward directions. Finally, the forward direction should turn slowly towards the player. All of these vectors are calculated and averaged together using pre-defined weights for each behavior to form the minion's new forward direction. The weights of these four factors can be easily tuned to change the behavior of the swarm.
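A standalone sketch of that per-minion update, with hypothetical weights and simplified helper types standing in for the engine's math classes:

#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

static Vec2  Add(Vec2 a, Vec2 b)    { return { a.x + b.x, a.y + b.y }; }
static Vec2  Sub(Vec2 a, Vec2 b)    { return { a.x - b.x, a.y - b.y }; }
static Vec2  Scale(Vec2 v, float s) { return { v.x * s, v.y * s }; }
static float Length(Vec2 v)         { return std::sqrt(v.x * v.x + v.y * v.y); }
static Vec2  Normalized(Vec2 v)     { float l = Length(v); return l > 1e-6f ? Scale(v, 1.f / l) : Vec2{ 0.f, 0.f }; }

// One minion's steering update. The weights and the "too close" radius are
// hypothetical values; in the game they are the pre-defined, tunable weights.
Vec2 ComputeSwarmDirection(Vec2 pos, Vec2 forward,
                           const std::vector<Vec2>& neighborPositions,
                           const std::vector<Vec2>& neighborForwards,
                           Vec2 playerPos)
{
    const float COHESION = 1.f, SEPARATION = 1.5f, ALIGNMENT = 1.f, SEEK = .5f;
    const float TOO_CLOSE = 1.f;

    Vec2 cohesion{ 0.f, 0.f }, separation{ 0.f, 0.f }, alignment{ 0.f, 0.f };
    for (size_t i = 0; i < neighborPositions.size(); i++){
        Vec2 toNeighbor = Sub(neighborPositions[i], pos);
        cohesion = Add(cohesion, toNeighbor);            //pull toward the cluster
        if (Length(toNeighbor) < TOO_CLOSE){
            separation = Sub(separation, toNeighbor);    //push apart when overlapping
        }
        alignment = Add(alignment, neighborForwards[i]); //match neighbors' headings
    }

    //weighted average of the four behaviors
    Vec2 desired = Add(Add(Scale(Normalized(cohesion),   COHESION),
                           Scale(Normalized(separation), SEPARATION)),
                       Add(Scale(Normalized(alignment),  ALIGNMENT),
                           Scale(Normalized(Sub(playerPos, pos)), SEEK)));

    //blend with the current forward so the minion turns gradually
    return Normalized(Add(Scale(forward, 2.f), desired));
}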
Profiler

To monitor the performance of the swarming AI and the rendering pipeline, I added a profiler to my engine. In code, I can mark a start and end point to profile - this signals the profiler to measure how many cycles elapse between those points. At the end of each frame, calculations are done on each marked area to establish how much of the frame was spent in each section. The profiler saves the information for the last 128 frames and turns it into a bar graph showing how long each frame took. I can select any of these frames to inspect in detail, to establish which areas of the code, if any, are especially slow. This tool remains useful today for optimizing my projects.
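The marker mechanism can be sketched as an RAII scope. This standalone version times with std::chrono and just prints, whereas the actual profiler counts cycles and files results into the frame's report:

#include <chrono>
#include <cstdio>

// A scoped marker: construction records the start time, destruction reports
// the elapsed time for everything that ran in between.
class ProfileScope {
public:
    explicit ProfileScope(const char* name)
        : m_name(name), m_start(std::chrono::high_resolution_clock::now()) {}
    ~ProfileScope(){
        auto elapsed = std::chrono::high_resolution_clock::now() - m_start;
        double us = std::chrono::duration<double, std::micro>(elapsed).count();
        std::printf("%s: %.1f us\n", m_name, us); //real version: record into the frame tree
    }
private:
    const char* m_name;
    std::chrono::high_resolution_clock::time_point m_start;
};

//two-level concat so __LINE__ expands, giving each marker a unique variable name
#define PROFILE_COMBINE2(a, b) a##b
#define PROFILE_COMBINE(a, b) PROFILE_COMBINE2(a, b)
#define PROFILE_SCOPE(name) ProfileScope PROFILE_COMBINE(profileScope_, __LINE__)(name)

//usage: everything from the marker to the end of the scope is measured
void UpdateSwarm(){
    PROFILE_SCOPE("UpdateSwarm");
    // ... swarm update work ...
}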
It's worth noting that running and rendering the profiler is ironically pretty slow, due to the amount of calculation and UI rendering involved, as evidenced by the spike in frame time when the profiler is opened in the gif. While profiling, I measure these frame costs so that I can estimate how much the profiler is impacting what I'm seeing. In the future, I'd like to add a toggle that excludes the profiler's own cost from the graph entirely, so that I can focus on the performance of the game running without the profiler.
Data-Driven Materials
<material name="default_lit" shader="lit_fog">
    <textures>
        <texture type="diffuse" path="white" />
        <texture type="normal" path="flat" />
    </textures>
    <properties>
        <property type="float" name="SPECULAR_AMOUNT" value=".5" />
        <property type="float" name="SPECULAR_POWER" value="3" />
        <property type="RGBA" name="TINT" value="255,255,255,255"/>
    </properties>
</material>

<material name="water" shader="water">
    <textures>
        <texture type="diffuse" path="water_2.png" />
    </textures>
</material>

<material name="terrain" shader="lit_fog">
    <textures>
        <texture type="diffuse" path="grass_green_d.png" />
        <texture type="normal" path="grass_green_n.png" />
    </textures>
</material>
To simplify the process of rendering an object, I added a data-driven material system to my renderer. A material combines the shader, any textures that should be used for the material, and any properties the material should bind to the GPU. Each renderable has one or more materials - one for each mesh on the renderable. This lets the user combine every aspect of an object's rendering in one place, so the diffuse/normal textures, properties, and shader don't have to be defined separately at creation. Additionally, each renderable instances shallow copies of its materials, which allows for gameplay effects on a specific entity using that material. For example, when a base is shot, it can flash red for a second by changing the tint of its material, without changing the tint of the identical materials on the other bases.
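A minimal sketch of that instancing pattern, assuming a lazily-cloned material (GetEditableMaterial() appears in the render path code below; everything else here is simplified):

#include <map>
#include <memory>
#include <string>

// A material boiled down to a name-keyed property map; the real class also
// holds the shader and texture bindings from the XML definition.
struct Material {
    std::string shaderName;
    std::map<std::string, std::string> properties; //e.g. "TINT" -> "255,255,255,255"
};

struct Renderable {
    std::shared_ptr<Material> m_sharedMaterial;   //shared definition loaded from XML
    std::unique_ptr<Material> m_instanceMaterial; //created on first edit

    //lazily clone the shared material so edits stay local to this renderable
    Material* GetEditableMaterial(){
        if (!m_instanceMaterial){
            m_instanceMaterial = std::make_unique<Material>(*m_sharedMaterial);
        }
        return m_instanceMaterial.get();
    }

    const Material* GetMaterial() const {
        return m_instanceMaterial ? m_instanceMaterial.get() : m_sharedMaterial.get();
    }
};

//e.g., a base that was just shot can flash red without tinting the others:
//  base->m_renderable->GetEditableMaterial()->properties["TINT"] = "255,0,0,255";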
Forward Render Path and Lighting

To streamline the 3D rendering pipeline, I added a forward rendering path to my engine, which stores renderables in a scene and can then render everything in that scene to a specified camera. The render path creates draw calls for each renderable, sorts them according to their rendering layer (opaque, alpha, etc.), and sorts renderables in the alpha layer by their distance to the camera. For objects affected by lighting, the render path also establishes which lights in the scene affect each renderable (the eight closest lights). Additionally, the render path handles setup for the camera, including clearing the screen (in this case to a skybox) and billboarding each particle system's sprites towards the camera's position. After the camera is set up and all of the draw calls have been created and sorted, the render path binds the render state for each draw call and passes the mesh to the GPU. The forward rendering path simplifies rendering large scenes like the one in TankGame - now, each entity's renderable just needs to be added to the render scene, instead of carefully ordering which entities are rendered at what times within the game's render loop.
Lighting is done in a shader using a simple Blinn-Phong model. For the sake of expanding my understanding of the lighting algorithm, I also added a simple cel-shading variant, which is used on everything except the terrain. Both models use the shadow camera's depth target to establish which areas are in complete shadow.
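The heart of the cel-shaded variant is quantizing the diffuse term into discrete bands. A standalone sketch of the idea - the real version lives in the shader, and TankGame's exact banding may differ:

#include <algorithm>
#include <cmath>

// Standard diffuse lighting uses the clamped dot of the surface normal and
// the direction to the light directly; cel shading snaps that continuous
// value to one of 'bandCount' flat intensities, producing hard toon steps.
// bandCount must be at least 2.
float CelShadedDiffuse(float nDotL, int bandCount)
{
    float d = std::clamp(nDotL, 0.f, 1.f);          //plain diffuse term
    float band = std::floor(d * (float)bandCount);  //which band this intensity falls in
    band = std::min(band, (float)(bandCount - 1));  //d == 1 lands in the top band
    return band / (float)(bandCount - 1);           //each band renders at one flat intensity
}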
void ForwardRenderPath::RenderSceneForCamera(Camera* cam, RenderScene* scene)
{
    //bind the camera's view and projection matrices for every draw call
    m_renderer->BindCamera(cam);

    //clears to a color, or to a skybox
    ClearForCamera(cam);

    //updates the particle systems' meshes to be billboarded to this camera
    for (ParticleSystem* s : scene->m_particleSystems){
        s->PreRenderForCamera(cam);
    }

    // generate the draw calls
    std::vector<DrawCall> drawCalls;
    for (Renderable* r : scene->m_renderables){
        //this will change for multi-pass shaders or multi-material meshes
        Light* lights[MAX_LIGHTS];
        if (r->GetEditableMaterial()->UsesLights()){
            ComputeMostContributingLights(lights, r->GetPosition(), scene->m_lights);
        }

        //meshes can have multiple submeshes, which share the renderable's model matrix
        //but can have their own material. Each submesh is its own draw call.
        for (int i = 0; i < (int) r->m_mesh->m_subMeshes.size(); i++){
            DrawCall dc;
            //set up the draw call for this renderable :)
            // the layer/queue comes from the shader!
            dc.m_mesh = r->m_mesh->m_subMeshes[i];
            dc.m_model = r->m_transform.GetWorldMatrix();
            dc.m_material = r->GetEditableMaterial(i);
            dc.m_layer = r->GetEditableMaterial(i)->m_shader->m_sortLayer;

            //if this game has shadows, bind the shadow camera's depth target
            if (m_usingShadows){
                dc.m_material->SetTexture(SHADOW_DEPTH_BINDING, scene->m_shadowCamera->GetDepthTarget());
            }

            //if we use lights, add the lights we computed earlier
            if (r->GetEditableMaterial(i)->UsesLights()){
                dc.SetLights(lights);
            }

            //add the draw call to the list of draw calls
            drawCalls.push_back(dc);
        }
    }

    //now we sort draw calls by layer/queue,
    //and sort the alpha layer by distance to camera, etc.
    SortDrawCalls(drawCalls, cam);

    //for each draw call, bind render state and draw the mesh.
    for (DrawCall dc : drawCalls){
        //an optimization would be to only bind a resource if it differs from the previous bind.
        m_renderer->BindMaterial(dc.m_material);
        m_renderer->BindModel(dc.m_model);
        m_renderer->BindLightUniforms(dc.m_lights);
        m_renderer->BindStateAndDrawMesh(dc.m_mesh);
    }
}