Order of post-process effects

June 20, 2011

Recently, I started to think about the order in which post-process effects should run. It's an interesting problem, since it's often a matter of balancing performance, quality and correctness.

The usual post-process effects that you would find in a modern game are Anti-Aliasing Resolve, Tone-Mapping, Bloom, Depth Of Field, Motion Blur and Color Correction.

  • The AA Resolve is the process of downsampling a high-res image (usually rendered using the hardware Multi-Sampled Anti-Aliasing support) to the output resolution. It gives a nice anti-aliased image. We could replace that pass with another anti-aliasing post-process such as the recent MLAA technique.
  • Tone-Mapping is the process of mapping a high dynamic range (HDR) image down to a displayable low dynamic range (LDR).
  • Bloom is the process of blurring an image and adding the result back on top of the source image. This gives a nice light-bleeding effect.
  • Depth Of Field is the process of keeping objects in focus sharp and blurring objects out of focus.
  • Motion Blur is the process of blurring areas of the screen that are in motion.
  • Color Correction is the process of changing the color palette of your image. It usually also includes a brightness/contrast pass.

Typically, on the current platforms (XBox 360 and Ps3), a game render frame would look like the following:

1- Scene Opaque (in MSAA)
2- AA Resolve
3- Scene Transparency
4- Bloom
5- Tone-Mapping
6- Depth Of Field
7- Motion-Blur
8- Color Correction

I’ve included the position of the scene rendering just to give us a better idea of how it all fits together within the frame.

But this is not exactly correct. Depth Of Field and Motion-Blur should be done in HDR, not LDR, so before Tone-Mapping. Also, since Bloom simulates light bleeding, Depth Of Field and Motion-Blur should also be done before Bloom. This gives a nice glow effect around moving and out-of-focus lights.

Color-Correction needs to be done in LDR. This is because color correction is an artistic tweak over the rendered image: LDR colors are easy to represent and reason about, HDR colors much less so. Also, the corrected colors should be preserved after the pass. If color correction were done in HDR (before Tone-Mapping), the Tone-Mapping pass would shift the colors afterwards.

There is the remaining question of Anti-Aliasing, and of the order of Depth Of Field versus Motion-Blur. Let's remind ourselves of the new, proper order of post-processing effects:

1- Opaque
2- AA Resolve
3- Transparency
4- Depth Of Field
5- Motion Blur
6- Bloom
7- Tone-Mapping
8- Color-Correction

For anti-aliasing to look its best, you would want to put it at the very last stage of your rendering pipeline, so after Color-Correction. You would get this new full-screen effect pipeline:

1- Opaque
2- Transparency
3- Depth Of Field
4- Motion Blur
5- Bloom
6- Tone-Mapping
7- Color-Correction
8- AA Resolve

This requires all your other effects to run at full MSAA resolution, which can be costly. On current console hardware, this is almost impossible to do without sacrificing a lot of quality for performance, or frame-rate (which ultimately also results in a quality loss).

Now when it comes to Motion-Blur and Depth Of Field, ideally you would want those two effects combined into one, or at least you would want the second effect to know how the first effect was applied, to make the overall result more correct. It's a similar issue to the one between Bloom and Motion-Blur. You want the blurry pixels resulting from Depth Of Field to be motion-blurred properly, and you also want the blurry pixels caused by Motion-Blur to be properly blurred out depending on their depth (keeping in mind that their depth changed during the motion), so Depth Of Field needs to be temporal, like Motion-Blur.

In the world of video games, this is almost impossible to do with current technology, so I guess you have to choose what is best for your game. Some games even choose to do only one or the other: for example, at very slow movement they only apply Depth Of Field, and at fast movement only Motion-Blur.

To finish, I'm sorry I don't have a demo or a set of screenshots to illustrate my post. This is all very theoretical. Some of these conclusions came from my experience implementing those effects on God Of War 3 and Mercenaries 2. I hope to be able to illustrate this and deliver a demo very soon, but in the meantime I hope this will help you think a little bit more about the problem, and how to solve it.

Systems: Init and Shutdown for your program…

February 27, 2011

Recently, I was talking with some colleagues about the issue of data initialization when booting your game: mostly the issue with globals, and the rules about how to shut them down and in which order to init/shutdown. I looked back at the different projects I worked on, and realized there are a few ways to approach the problem.

Before I go into the details, it's important to note that systems are singletons: there is only one instance of them in memory at any given time. And there are different ways to manage them, mostly to initialize/shut them down, access them, etc. Here are a few different techniques that I've seen, and I will try to explain their advantages and flaws.

I've seen games handle initialization in different ways.

The first and most naive is to just have a list of globals for your systems, which are automatically initialized when the game boots.

Renderer renderer;
MemoryAllocator memoryAllocator;

int main()
{
    while (!quit)
    {
        DoMainLoop();
        renderer.DoStuff();
    }
    return 0;
}

This is very simple, but it has one huge flaw: you have no control over the order of initialization. The order in which globals from different source files are constructed is unspecified, and different compilers will construct them in different orders. That means you can't have dependencies between systems, which is usually a huge issue. For example, if your renderer needs to allocate memory, the memory system needs to be constructed before the renderer, and with globals you can't guarantee that this will be the case.

So you could approach the problem another way: initialize on first use.

class Renderer
{
    static Renderer *renderer; // must also be defined in a .cpp file

public:
    static Renderer* GetInstance()
    {
        if (!renderer)
        {
            renderer = new Renderer;
        }
        return renderer;
    }
};

or

class Renderer
{
public:
    static Renderer* GetInstance()
    {
        // This will initialize the first time we call this function
        static Renderer renderer;
        return &renderer;
    }
};

This somewhat fixes the issue of dependencies. When a system needs to allocate memory, the allocator will “construct” itself on first allocation. But it has a performance cost: on each use, you have to check whether the system has been constructed yet. Also, while it might work for construction, it gives you no control over destruction. So this is a technique that should be avoided.

Another approach is to allocate your systems on the heap, in the right order (making the dependencies explicit):

Renderer *renderer;
MemoryAllocator *memoryAllocator;

int main()
{
    memoryAllocator = new MemoryAllocator;
    renderer = new Renderer;
    while (!quit)
    {
        DoMainLoop();
    }
    delete renderer;
    delete memoryAllocator;
    return 0;
}

This is now much closer to what we need. We have a defined order of initialization and shutdown, and we don't have any performance cost when accessing those systems. But we still have one problem: the memory allocator has to be initialized first, and it can't depend on anything.

My preferred way, which I have found to always be rock solid, is to do something like this.

The initialization/shutdown code snippet would look like this:

void main()
{
    Core::Memory::Init();
    Graphics::Init();

    while (!quit)
    {
          DoMainLoop();
    }

    Graphics::Done();
    Core::Memory::Done();
}

Core and Graphics are namespaces for your different systems, and their Init/Done functions do the static initialization.

Then, use placement new/delete for your globals inside the Init/Done functions to construct/destruct each system. For example, it would look like this:

namespace Graphics
{
    // Static storage for the system; alignas makes sure the buffer
    // is suitably aligned for a Renderer object.
    alignas(Renderer) char systemMem[sizeof(Renderer)];
    Renderer &renderer = reinterpret_cast<Renderer &>(systemMem);

    void Init()
    {
        new (&renderer) Renderer;
    }

    void Done()
    {
        renderer.~Renderer();
    }
}

This has lots of advantages. It solves the issue of the memory allocator needing to exist first: you could, for example, initialize your logging system before the memory allocator, so that the allocator can use it to track its allocations. You still have total control over the order in which your systems are initialized (and shut down, for that matter). You also get the performance benefit of not checking, on every access, whether a system has been initialized. And finally, a smaller advantage: your system is guaranteed valid once you have gone through the initialization list. With a pointer, there will always be a doubt.
The one small disadvantage is that you need to check your .bss section to see how much memory your systems are using. It's a small annoyance. Some people prefer to keep the .bss section as small as possible and have everything in the heap; it helps them keep a global view of their memory map. But checking the .map file to track down your .bss memory is not that much of a hassle, and you could add the tracking of those elements to your memory tracking tool.

Another topic related to systems is the ones created during gameplay, for example how a scene is loaded/unloaded. But I will talk about this in a later article.

Do work at the start to avoid the mandatory crunch of the end…

September 7, 2010

After working on a multitude of projects, some that shipped and some that didn't, I realized that starting a game is easy, making a game is hard, and shipping a game is even harder. Often, the challenge of finishing a project is really taking the features, the content, and more importantly the product as a whole, to the finish line. What makes that part hard is that you can always make things better, you can always tweak and improve, and the hard thing is to say “Stop! This is enough, let's ship this thing!”. Often, you have people whose job it is to say this “Stop!”. But also, when you get to the end, things are just not good enough, and the team has to crunch hard to get over the hump of good enough and ship the product.

In 2008, when we were finishing Mercenaries 2, we were crunching like crazy to get things finished and ship the game. The big problem was that we had to crunch to get the basics finished: getting our streaming system working properly off disk, getting our level loading working properly, getting our game in memory, in framerate, getting our Ps3 version up to par, etc. Pretty much the things you need to ship a game.

Starting the new project, we talked about doing things better. Some of those things were team-wide processes: having a more solid pre-production, a design that has been prototyped and proven to work before going into full production. But some of us were thinking of the technical issues, and how we could resolve them to make development smoother. Unfortunately, we quickly went back to making similar mistakes, and I decided to leave. But this made me think about the situation, and try to materialize ideas on how to make things better. I looked back at previous projects and realized that it's very hard not to crunch at the end, because you can't predict the issues you're going to have. But I believe there are a few things you can do on your project, especially when starting something brand new with tech from scratch (or almost scratch), to get your project in better shape at the end and not suffer through crunch, or to crunch to deliver something of higher quality instead of just something shippable. So what are those things?

I will come to that list, but you will notice that not only do these things help you with the last phase of your project, they usually help throughout development. When I tried to make the list, I thought of the things that you need to have in your game but that don't make it fun or pretty, and don't affect the quality of your game (actually, some of them do, we'll come to it), but that you just can't avoid doing. Most of the things I will list apply to both consoles and PC. PC development can be a little different due to its variety of configurations, and it's also a platform I don't know very well, so you'll have to see if this applies to you.

1. Loading and Media.

It is obvious that when you make a game, you pretty much need to load files, extract the data and “run” this data. You usually have something that loads, and runs. But something I've seen a few times is the lack of a good mechanism to load files, unload files and go from level to level. At the start of the project, you just get one level that loads. You can probably have multiple levels, but people just load the level they are working on and work like that. You get to the end of the project, and boom: now you have to put your game together, and you can't go from level to level without crashing.

On top of this, you need to load your files from media: either disk (DVD or Blu-Ray) or a downloadable package. This is not something to take lightly. Getting your game to run off media means multiple things: getting the game to boot, getting your project properly set up to burn the files you need (not forgetting system files), getting the directory structure set up (which can mean rearranging your data pipeline), automating the system so that you can burn disks on demand, and, more importantly (this is often the part that is a surprise and takes the longest to fix), making sure the load times are decent.

2. Memory and performance

Something that all of us game developers always fight is Memory and Performance:

About memory: to ease the end of your project, deliver every milestone in consumer memory. I've seen teams that want to show their best by using all the devkit's memory, showing a world that they wouldn't be able to build and ship in consumer memory. It is important, though, not to show “smoke and mirrors”.

You could say something similar about performance. In fact, the philosophy behind Performance and Memory is much the same. You need to make sure your performance always stays within a threshold of your target performance. You shouldn't add lots of features without optimizing what you already have, so as to keep your game in a controllable state. You can quickly dig yourself into a hole that is very hard to get out of.

So for both performance and memory, it's important to be reminded of where you stand. Always have the framerate and the current memory usage printed on screen, and get your tracking tools early in development. I would say this is one of the most important things to have in your technology: lots of tools to track down memory usage and performance.

For performance, good on-screen profiling tools (with nice colored bars) are essential. Try to get this to show all your processors (GPU, CPU and, if you're on Ps3, SPUs) and threads. Show the current timing of your different systems early, and have the tool set up so that tracking performance is easy. Get your profiling configuration early in development.

For memory, get your tools to output the memory usage for each of the assets in your levels through a good report file (a CSV file is easy to read in Excel). An on-target memory tracker is also essential to track down dynamic memory allocations. I have seen different ways to approach this, but my preferred one is to record a small callstack with each allocation and print it out when requested. Also, when it comes to memory, make sure you never have memory leaks. A good way to test this is to go from one level to another and make sure all the memory of the previous level has been properly released.

3. Build Machines

For any kind of medium to large software project, it is quite important to separate the everyday workspace from the actual build of your software. I have seen, multiple times in my career, projects where the lead or producer just syncs code and data on their local machine, builds, makes some changes, and generates builds from that machine. Those builds become THE build, deployed manually to the different users on the team, and to the publisher when there is one. This can cause a desync between the members of the team, which leads to errors, weird issues, and worse, an official build of the game containing some debug code that someone forgot to remove before making their build. Also, having a human make builds is pretty tedious and error-prone, especially on milestone weeks, as it is not rare for the team and individuals to work late and get pretty tired.

So you really want to have a machine completely virgin of any external software, local code or data. Its only role is to sync to the latest from the source control database, build code, then data, and make THE build. Doing so verifies that the database is consistent (for example, that someone didn't forget to check something in), and it also helps with deploying a nice clean build from the current state of what's checked in. This process can be automated to make things a lot easier, and if you're courageous you can push the process further with scripts that collect all the built files and make ISOs that can be burnt directly onto a disk, or packaged for online delivery. Going further still, you can have your build machine generate all the different SKUs (for example, the European and US SKUs), keeping this whole process very separate from the day-to-day team work. Depending on your budget, you can have from one to multiple build machines: one per platform, one for code, one for data, etc. This allows you to build your game faster and to separate the code and data teams. To finish on this topic: if you can make the usage of your build machine really easy, your producer, project lead or whoever can become the person who triggers the build machine to make the builds that need to be deployed.

4. The shell

A few projects that I worked on didn't have a shell until very late. Mercenaries 2 was one of them, and it really bit us in the butt. A shell, even a minimalistic one, is essential during the course of your project. Often the shell drives the flow of your game, but also of your data. On top of that, you will be forced to test the loading/unloading of levels early, since once you have a shell, you should be able to go from the shell to your game and back. And finally, once you have a shell, you can expose all the available levels to the user so they can load/run them. Then, while your project gets bigger and bigger, you can add more options to your shell. On Destroy All Humans!, we had a shell pretty much from the start. We were able to ship every milestone with it, and use it as a hub for all our current levels, whether they were production levels or prototype levels. And we were able to develop the flow of our game using the shell. Adding the options to load/save a game, graphics/audio options, etc. became very easy. Also, adding and removing a level was dead simple, and this happened often (deciding to cut levels, or add new ones). It was important for us to keep the project updated as a whole, so everyone knew what was going on. Load the shell, and you now know the levels of the game.

5. Platforms

If you're doing a cross-platform game, try to sort out your architecture and pipeline as soon as possible. Do not do it as an afterthought. I would even say that you should have all platforms progress at the same time. It is better to do a few less features every milestone, but have them work on all platforms. You will quickly notice that if you have structured your code properly, you will get it to work on all platforms, with proper performance and memory constraints, rather easily. But if you focus on one platform during development and “port” to the other platforms at the end of your project, you will end up sacrificing quality, and worse, the programmers will spend precious time porting the code to the other platforms instead of polishing/optimizing/debugging the current state of the game. This is a very easy trap to fall into. The reason is that focusing on only one platform, especially at the start of your project, makes your development much easier. The producer will usually pick the easiest platform to develop for, and will push for more and more features. But this is very counterproductive from a technical point of view. It often, not to say always, forces you to make technical decisions that will hurt you later in the project. The platform situation also goes hand in hand with memory and performance: keeping those in control at all times, on all platforms, will be very beneficial throughout the project.

6. Crashes and Bugs

This will sound obvious, but you'd be surprised by the number of teams that say “I know” but don't really do this. To avoid the big crunch at the end, you need to keep an eye on your bugs and crashes throughout your project. One good way to do that is to always prioritize bugs and crashes over features, and keep your bug count under a threshold. If the number of bugs and/or crashes gets too high, it's time to stop everyone and have a bug hunt/bug fix day, or even week. You can apply this rule to memory and performance as well. It is important to keep those under control.

Conclusion

Most of those things should also help your project throughout development. For example, if you're working with a publisher, you will most likely have milestones and deliverables. Wouldn't it just be easier if you could send a disk with the current version of the game that the publisher could boot on any TestKit in their office, and experience the deliverable like a normal user from the outside world? Believe me, this kind of situation will always win you points with a publisher. It makes everyone's life much easier and saves you lots of hassle, which translates into lower costs.

The funny thing about all this is that anyone with a bit of experience will read this and think: “This is so obvious, we should all do this!”. But how many of you, even with experience, have been on a project where you get to the end and have never booted the game running off disk, or only start making the game fit in consumer memory when getting to Alpha?

The truth is that those problems often emerge from bad prioritization at the production level. While it sounds harmless at the beginning of the project, it hurts your technical team badly at the end. And your precious programmers, who could be spending more time polishing the game, spend their time just getting the game to ship.

Starting my blog…

August 26, 2010

Hello,

My name is Eric Smolikowski. After lots of thinking, I finally decided to launch my blog. I have worked in the game industry for over 12 years, and learned a lot. Having learned from so many people over the years, I thought it was now time for me to share my experience and the knowledge I've accumulated with the world out there.

During those 12 years, I've spent 95% of my time programming: almost 100% console development, and mainly graphics, but also audio, tools, streaming, animation and more. I had the chance to work on the Ps1, Ps2, XBox, Ps3 and XBox 360, and to be a senior and lead graphics engineer on Destroy All Humans!, Mercenaries 2: World In Flames, and recently God Of War 3.

Welcome to my blog. I hope you will find ideas and inspiration throughout.

Eric

 