The winter in the Netherlands was wet and warm, with no snow to build a snowman, so we decided to implement our own snow system in our engine.
Basically what we did was use the deferred renderer's normal and depth buffers to determine the normal and the 3D position of each pixel on the screen. Then, with a little bit of dot product, we created snow that lies on top of every object rendered with deferred shading.
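The up-facing test can be sketched like this. This is a hypothetical Python mock-up of the shader logic, not the engine's actual code; the name `snow_factor` and the 0.5 threshold are illustrative:

```python
def snow_factor(normal, threshold=0.5):
    """Return 0..1 snow coverage from how much the surface normal
    points up (+Y). Below the threshold, no snow sticks."""
    nx, ny, nz = normal
    # dot(normal, up) with up = (0, 1, 0) is just the Y component
    up_dot = ny
    if up_dot <= threshold:
        return 0.0
    # fade in smoothly between the threshold and straight-up
    return (up_dot - threshold) / (1.0 - threshold)

# A flat roof gets full snow, a vertical wall gets none:
print(snow_factor((0.0, 1.0, 0.0)))  # 1.0
print(snow_factor((1.0, 0.0, 0.0)))  # 0.0
```

In the shader this runs per pixel, blending a white snow color over the lit surface by the returned factor.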
Here is a movie of the completed effect.
As seen in the movie, the snow only lies on the up-facing sides of the tree branches.
Weather forecast for the coming game: cloudy with lots of rain!
We wanted to add weather particles to our games. And we wanted A LOT! Our existing particle engine was not capable of simulating millions of particles in real time, and we could only add them to a limited area. To solve this problem we added boxed movement and batch rendering for particles. This means that we simulate between 50 and 200 particles in a limited area, say 3x3x3 meters, and render this box of particles multiple times in a 3x3x3 grid, giving a larger weather cube that follows the camera seamlessly.
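The boxed movement could be sketched like this. This is a hypothetical Python illustration; `wrap_into_box` and `tile_offsets` are made-up names, while the 3 m cube and 3x3x3 grid are the numbers from the post:

```python
def wrap_into_box(particle_pos, camera_pos, box_size=3.0):
    """Wrap a falling particle's world position back into the small
    simulation cube that rides along with the camera, so the same
    particles can be reused forever."""
    return tuple(
        ((p - c) % box_size) + c
        for p, c in zip(particle_pos, camera_pos)
    )

def tile_offsets(grid=3, box_size=3.0):
    """Offsets at which the one simulated cube is batch-rendered,
    forming the larger 3x3x3 weather volume around the camera."""
    half = grid // 2
    return [
        (x * box_size, y * box_size, z * box_size)
        for x in range(-half, half + 1)
        for y in range(-half, half + 1)
        for z in range(-half, half + 1)
    ]
```

So only 50 to 200 particles are ever simulated, but the renderer draws the cube 27 times at the grid offsets, which is where the "millions of particles" impression comes from.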
We also stretch particles in the direction they move, and by camera movement, so you get a star-field-like effect when you are moving pretty fast through the rain or snow. Here are two movies to demonstrate the effect.
We can already color the sky and change the lighting, and by adding weather effects we hope to create a more dynamic world with real-time changing weather.
Since we need animated 3D models, we implemented MD5 model support, introduced in idTech 4 (aka the Doom 3 engine). This model format is supported by several 3D modelling and animation tools, so it seemed a nice choice. A disadvantage is that MD5 models and animations are designed to be computed on the CPU instead of the GPU. That will not stop us, as this is the first animated model file format we will support. In the future we may add more formats, but for now it will suffice.
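The CPU-side cost comes from how MD5 stores vertices: each vertex is a weighted sum of offsets stored relative to its influencing joints. Here is a much-simplified, hypothetical Python sketch of that idea; real MD5 skinning also rotates each weight offset by the joint's orientation quaternion, which is omitted here to keep the sketch short:

```python
def skin_vertex(weights, joints):
    """MD5-style CPU skinning, translation only (no joint rotation).

    weights: list of (joint_index, bias, (ox, oy, oz)) tuples, where the
             biases sum to 1 and (ox, oy, oz) is the offset stored
             relative to that joint.
    joints:  list of (jx, jy, jz) joint positions for the current frame.
    """
    x = y = z = 0.0
    for joint_index, bias, (ox, oy, oz) in weights:
        jx, jy, jz = joints[joint_index]
        x += bias * (jx + ox)
        y += bias * (jy + oy)
        z += bias * (jz + oz)
    return (x, y, z)
```

Because every animated frame moves the joints, every vertex has to be re-summed like this each frame, which is why the format leans on the CPU.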
For now we have the Cyberdemon from Doom 3 as a test subject, and none were harmed during testing…
As I described in my previous post, I was working on marching cubes terrain. This time the texture stretching is fixed by generating texture coordinates from the 3D position and normal. With the y-axis as the up vector, this is simply xz * normal.y + xy * normal.z + zy * normal.x. With 2 textures to splat this requires 6 texture look-ups.
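That formula is a triplanar blend: each axis-aligned plane projects the position, weighted by how much the normal faces that axis. A small Python sketch of the UV generation, transcribed literally from the formula above (in practice you would usually take the absolute normal components as weights, but that refinement is not part of the post's formula):

```python
def triplanar_uv(pos, normal):
    """Blend three planar projections of the world position by the
    normal's axis components (y-up):
    xz-plane * normal.y + xy-plane * normal.z + zy-plane * normal.x."""
    x, y, z = pos
    nx, ny, nz = normal
    u = x * ny + x * nz + z * nx
    v = z * ny + y * nz + y * nx
    return (u, v)
```

On flat ground (normal straight up) this degenerates to a plain top-down (x, z) mapping, and on a wall it falls back to a side projection, which is exactly what removes the stretching.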
I also added CPU tessellation. The terrain size for this post is a bit smaller, but the triangle count is far higher. The next step is to fix geomipmapping to reduce the triangle count.
Here are some new screenshots. Also mind the "buggy" ones :).
Somehow you always come back to some standard terrain engine. Load a height map, create a soup of triangles, optimize it, create more triangles, put a texture on it and… yes, you have your terrain. Height map based. Kinda limited to a height map. Not something I wanted! I want overhanging cliffs and tunnels. Almost any shape I can think of.
In the past I did some stuff that didn't work out as well as I hoped. I used a height map and offset the X, Y and Z via the R, G and B color channels. This way you can have overhanging cliffs, but still no tunnels (well, you can pick the right colors so that the edges of the cliffs meet in the middle…). But not really what I wanted.
So… I did some research on terrain and tried marching cubes. From Wikipedia: marching cubes is a computer graphics algorithm, published in the 1987 SIGGRAPH proceedings by Lorensen and Cline, for extracting a polygonal mesh of an isosurface from a three-dimensional scalar field (sometimes called voxels). An equivalent two-dimensional method is called the marching squares algorithm.
In short: it creates a 3D polygon model from a "3D object". In my case that's what I need. However, I don't have a "3D object", so I generate one! Define your "3D object" (in my case terrain) as a bunch of cubes, say an array of 32x32x16. Every cube that has some neighbors gets some polygons, as there is something at that location. (For more information I suggest Wikipedia or Google.)
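A minimal sketch of the two ingredients described above, assuming a solid/empty voxel grid seeded from a height map (hypothetical Python; the real algorithm then feeds the case index into its 256-entry triangle table, which is omitted here):

```python
def heightmap_solid(x, y, z, heightmap):
    """A voxel is solid when it sits at or below the height map (y-up)."""
    return y <= heightmap[z][x]

def cube_case_index(corner_solid):
    """Pack the 8 corner in/out flags of one cube into the 0..255 case
    index that marching cubes uses to look up its triangle table."""
    index = 0
    for bit, solid in enumerate(corner_solid):
        if solid:
            index |= 1 << bit
    return index
```

A cube fully inside the terrain yields case 255 and a cube fully outside yields case 0; neither produces triangles. Only the mixed cases along the surface generate geometry, which is what makes overhangs and tunnels possible.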
Anyway, I use a height map (YES I DID!) to define the basic shape of my terrain. Next I start adding cubes randomly at locations where they could fit, to get a non-height-map look and have some stuff "hanging over" (hangover…?). It still looks very polygonized, but with some textures applied it looked much better. I'm thinking of adding some kind of tessellation to smooth it all out. Screenshot or it didn't happen, so here you go:
While progress is still slow, let me tell you guys something about the editor user interface. We decided to make the editor in-game for easy access in full-screen mode. So far it's doing its job very well. I reworked the editor's graphical user interface images into some eye-candy that looks pretty good for now.
Currently the editor and UI can create, delete and switch scenes, display images, browse through files, transform the camera, show some logging, and browse and place actors (moving and selecting still need to be implemented at some point). Basically the basics. I try to keep the editor as minimal as possible and use notepad, paint and other tools to create the game. But eventually you always need some in-game tweaking. And that's where our nice user interface comes in handy!!
As we are curious about engine and game performance beyond the "bad" frames-per-second measurement, I started to integrate a benchmark "tool" into the engine. This "tool" only registers the start and the end of a specified function block, along with a small, useful description.
With a separate benchmark viewer we can analyze the engine performance per function per thread, shown as actions on a nice timeline. What we see are blocks, aligned in time per thread. A large block means something took a shitload of time; a small block usually means just a few milliseconds. Since the engine schedules update, physics, render and content-loading calls on different threads, we can now see what actually happens inside. This also gives us more power for better optimizations and debugging.
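The begin/end recording could look something like this. A minimal Python sketch, with hypothetical names; the engine version would also tag each event with its thread so the viewer can group the blocks per thread:

```python
import time

class Benchmark:
    """Records begin/end events for named function blocks; a separate
    viewer can later draw them as blocks on a timeline."""

    def __init__(self):
        self.events = []  # each entry: [name, start, end]

    def begin(self, name):
        """Register the start of a block; returns a handle to close it."""
        self.events.append([name, time.perf_counter(), None])
        return len(self.events) - 1

    def end(self, handle):
        """Register the end of the block opened with begin()."""
        self.events[handle][2] = time.perf_counter()

    def duration_ms(self, handle):
        """Block length in milliseconds, i.e. the width of its bar."""
        name, start, stop = self.events[handle]
        return (stop - start) * 1000.0

# Usage: bracket a function block between begin() and end().
bench = Benchmark()
h = bench.begin("physics_update")
time.sleep(0.01)  # stand-in for the real work
bench.end(h)
```

Dumping `events` to a file per frame is enough for an offline viewer to reconstruct the whole timeline.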
As we are busy with a procedural city generator that we could use in future games, I created a small, simple test building (prefab) to test the building-placement code. This code is still pretty simple and only puts some buildings in a straight line along each street with a small offset. Furthermore, we added a sidewalk strip next to each street to give it a better look.
Here are the images created so far. Apart from some bugs it looks pretty neat.
As seen in part 1, we are building a procedural city. So far we have the generation of the street network in place. Now we are working on the generation of blocks, the places we can put buildings or parks or whatever on. A block is basically an open space between the streets.
Problem area: we want to have a city. A BIG city. I don’t have the skill or time to build a BIG city in a 3D modeling program. So what to do? Create one from code!
Why do we want a city?… Well, probably we are going to create a race game (again) through an urban environment. How or what we don't know yet, but now we have another idea for yet another game… again…
Where do we start? First the streets. If we have streets, we have building blocks. On building blocks we can put buildings or other things like a park. Sounds easy, huh?
So far I created a small test app that defines 5 lines. Those lines are extruded and connected in a way so they look like streets. So here's a small image of what I made so far. Red is the left side of the street, blue the right side (debugging purposes only…).
Left: the app as-is
Right: triangulation (drawn by hand).
The triangulation in 3D space is also in place (not shown in the picture), and apart from a crappy test texture it does its job very well!
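The extrusion of one street line into its red (left) and blue (right) edges might look like this in 2D. A hypothetical Python sketch; `half_width` is an assumed parameter, and connecting consecutive segments at the corners is left out:

```python
import math

def extrude_segment(p0, p1, half_width=1.0):
    """Offset a 2D street segment (p0 -> p1) sideways to get its left
    and right edge lines, using the unit perpendicular of its direction."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    # unit perpendicular pointing to the left of the travel direction
    px, py = -dy / length, dx / length
    left = ((p0[0] + px * half_width, p0[1] + py * half_width),
            (p1[0] + px * half_width, p1[1] + py * half_width))
    right = ((p0[0] - px * half_width, p0[1] - py * half_width),
             (p1[0] - px * half_width, p1[1] - py * half_width))
    return left, right
```

The strip between `left` and `right` is the street itself, and the open regions enclosed by the edges of neighboring streets become the building blocks.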
The next step is to determine (detect) the two triangles in the middle. These would be the building blocks that hold some information like big building, skyscraper or park (and maybe water and other stuff).