What are we trying to do with rendering algorithms? The goal is that I have this screen and I would like to create an image on it, an image of some virtual object, let’s say the Utah teapot. The main problem that rendering algorithms solve is what we call the visibility problem: I have this teapot in a virtual environment, and where does this teapot fall on my screen, on what pixels? From there, we can go on to coloring those pixels accordingly and generating an image like this.
Broadly speaking, when we look at rendering algorithms, we can classify them into two groups: rasterization-based algorithms and ray-tracing-based algorithms.
- Rasterization
- Painter’s Algorithm
- Z-Buffer Rasterization
- A-Buffer Rasterization
- REYES
- Ray Tracing
- Ray Casting
- Path Tracing
The simplest one is the painter’s algorithm. The most popular rasterization-based algorithm, possibly the most commonly used rendering algorithm in the world, is z-buffer rasterization, because that’s what our GPUs do. A-Buffer and REYES are different, more advanced offline rendering algorithms that use rasterization.
Rasterization
Painter’s Algorithm
The painter’s algorithm works the way a painter paints a landscape, from back to front:
- Empty canvas
- Draw sky
- Draw ocean
- Draw cloud
- Draw mountain
- Draw trees (then more and more trees, nearer ones over farther ones)
- Draw path
So the painter’s algorithm will begin with sorting these triangles, sorting them from back to front, based on their distance from the camera.

We’ll follow the painter’s algorithm and draw things at the far end first.


Everything looks fine and you generate a proper image, that is, if your triangles are not intersecting.
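The procedure above can be sketched in a few lines. This is a hedged toy sketch, not a real renderer: the triangle records, the single `depth` per triangle, and the `draw` callback are all illustrative simplifications.

```python
# Painter's algorithm sketch: sort primitives back-to-front, then draw.
# "depth" here is a single distance per triangle (a simplification; a real
# implementation must pick a representative depth for each triangle).

def painters_algorithm(triangles, draw):
    # Sort by distance from the camera, farthest first.
    for tri in sorted(triangles, key=lambda t: t["depth"], reverse=True):
        draw(tri)  # nearer triangles overwrite farther ones

# Tiny usage example with hypothetical triangle records:
canvas = []
painters_algorithm(
    [{"name": "near", "depth": 1.0},
     {"name": "far", "depth": 5.0},
     {"name": "mid", "depth": 3.0}],
    draw=lambda t: canvas.append(t["name"]),
)
# canvas is now ["far", "mid", "near"]
```

Because nearer triangles are simply painted over farther ones, visibility falls out of the drawing order, which is exactly why intersecting triangles (which have no single correct order) break the algorithm.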
If your triangles are intersecting, this becomes a little bit tricky. Can you tell me which triangle is in front of the other now? If I were to sort them, it’s hard to say: is the blue one in the back or in the front? The painter’s algorithm will give us either one, but it will never give us this, which is what I am supposed to see:

Pros and Cons of Painter’s Algorithm:
- Pros
- Simple
- Cons
- Needs sorting (sorting is sort of expensive)
- Cannot handle intersecting geometry
Z-Buffer Rasterization
The z-buffer rasterization is the algorithm that our GPUs are designed to use for rendering.
The way it solves the visibility problem is by storing, along with the color (RGBA), a depth value for each pixel. So each pixel also stores how far that pixel is from the camera; more specifically, how far the point on the triangle that corresponds to the center of the pixel is from the camera. That’s what z-buffer rasterization does.
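The per-pixel depth test described above can be sketched as follows. Everything here is illustrative (a tiny grayscale-free toy framebuffer with string "colors"); it is a sketch of the idea, not GPU behavior.

```python
# Z-buffer sketch: each pixel keeps a color and a depth; a new fragment
# is written only if it is closer than what is already stored.
import math

W, H = 4, 4
color_buf = [["bg"] * W for _ in range(H)]
depth_buf = [[math.inf] * W for _ in range(H)]

def shade_fragment(x, y, z, color):
    # Depth test: keep the fragment only if it is closer than the stored one.
    if z < depth_buf[y][x]:
        depth_buf[y][x] = z
        color_buf[y][x] = color

# Draw a "red" fragment at depth 2, then a "blue" one behind it at depth 5:
shade_fragment(1, 1, 2.0, "red")
shade_fragment(1, 1, 5.0, "blue")  # fails the depth test, has no effect
# color_buf[1][1] == "red", regardless of the drawing order
```

Notice that no sorting is needed: the depth test resolves visibility per pixel, which is also why intersecting triangles are handled correctly.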
This is nice but it’s looking kind of jaggy. What if I wanted to have anti-aliasing? What would happen then? Can rasterization handle that? The answer is yes.
By storing multiple samples per pixel, so for each pixel I store multiple color values and multiple depth values, I can do anti-aliasing. This is what we call supersample anti-aliasing (SSAA). Supersample anti-aliasing at 4x stores four samples per pixel. You may think that this is overkill, and in some ways it is, sort of expensive, but at least you get these nice-looking smooth edges with it.
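As a toy single-pixel illustration of 4x SSAA: each of the four samples carries its own color and depth, and the final pixel color is the average of the samples at resolve time. All names and colors here are illustrative.

```python
# 4x SSAA sketch: one pixel holding four independent (color, depth) samples.
import math

SAMPLES = 4
bg = (0.0, 0.0, 0.0)
pixel = [{"color": bg, "depth": math.inf} for _ in range(SAMPLES)]

def shade_sample(i, z, color):
    # Each sample gets its own depth test, just like a full-size z-buffer.
    if z < pixel[i]["depth"]:
        pixel[i]["depth"] = z
        pixel[i]["color"] = color

# A red triangle covering only 2 of the 4 samples:
shade_sample(0, 1.0, (1.0, 0.0, 0.0))
shade_sample(1, 1.0, (1.0, 0.0, 0.0))

def resolve(pixel):
    # Average the samples into the final pixel color.
    n = len(pixel)
    return tuple(sum(s["color"][c] for s in pixel) / n for c in range(3))

# resolve(pixel) == (0.5, 0.0, 0.0): a half-covered, smoothed edge pixel
```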

Well, we can do something a little better with z-buffer rasterization; we can sort of simplify this. Remember, for visibility what’s important is the depth, the z value; that’s what we are trying to compute at multiple places in the pixel. So instead of storing color plus depth at four different places with 4x anti-aliasing, I could use what we call multisample anti-aliasing (MSAA), in which case I store four depth values but only one color value. For each pixel I store a single color, the combined color of everything, but I store multiple depth values. By doing so I can figure out which triangle to choose for which sample of a pixel, and I can generate nice-looking anti-aliased images.
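Following the simplified description above, here is a toy single-pixel sketch: the expensive color computation runs once per pixel per triangle, while the depth test still runs per sample. (Real MSAA hardware is more involved than this; all names here are illustrative.)

```python
# MSAA sketch: per-sample depth tests, but shading evaluated only once
# per pixel per triangle; covered samples that pass the test get that color.
import math

SAMPLES = 4
sample_depth = [math.inf] * SAMPLES
sample_color = [(0.0, 0.0, 0.0)] * SAMPLES  # combined into one color at resolve

def rasterize_pixel(covered, z, shade):
    color = shade()  # shading happens once, not once per sample
    for i in covered:
        if z < sample_depth[i]:
            sample_depth[i] = z
            sample_color[i] = color

# A red triangle covering samples 0 and 1 at depth 1:
rasterize_pixel([0, 1], 1.0, shade=lambda: (1.0, 0.0, 0.0))

# Resolve: the single stored pixel color is the combination of the samples.
final = tuple(sum(c[ch] for c in sample_color) / SAMPLES for ch in range(3))
# final == (0.5, 0.0, 0.0)
```

The saving over SSAA is in the `shade()` calls: visibility is still sampled four times, but the costly color computation is not.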

Z-buffer rasterization, however, is not perfect: it struggles with semi-transparent objects. If the two triangles are opaque, z-buffer rasterization is fine. But if one of my triangles is semi-transparent, and this is the image that I would like to get, I can get it only if I render the triangles in the correct order. If I render the triangle at the back first, that blue one, and then I render the semi-transparent red triangle, I will get this image with z-buffer rasterization; for that, our GPUs are able to do alpha blending.

But if I do the opposite, if I draw the semi-transparent red triangle first and then I draw the blue one, I get this, and that’s not great. The reason this happens is that with a z-buffer I have one z value per pixel. When I draw the red triangle, it writes its z values for each pixel it covers. Now when I render the blue triangle, its z values are compared with the z values of the red triangle, and the depth test says that this triangle is behind the other one. Yes, but the triangle in front was semi-transparent; the blue one is supposed to show through behind it. That is a little too hard for z-buffer rasterization to handle.
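The order dependence described above is easy to demonstrate with a one-pixel sketch using standard "over" alpha blending (out = alpha * src + (1 - alpha) * dst). Grayscale values and the triangle tuples are illustrative.

```python
# Why z-buffered alpha blending is order-dependent: one pixel, one depth.
import math

def render(triangles):
    color, depth = 0.0, math.inf  # grayscale background, empty depth
    for z, src, alpha in triangles:
        if z < depth:  # single depth test per pixel
            color = alpha * src + (1 - alpha) * color  # "over" blending
            depth = z  # depth is written even for the transparent triangle
    return color

blue_behind = (5.0, 1.0, 1.0)  # opaque triangle at depth 5
red_front = (2.0, 0.5, 0.5)    # semi-transparent triangle at depth 2

back_to_front = render([blue_behind, red_front])  # blends red over blue: 0.75
front_to_back = render([red_front, blue_behind])  # blue fails the test: 0.25
# The two orders give different results for the same scene.
```

In the second call, the red triangle has already written depth 2, so the blue triangle at depth 5 is rejected entirely instead of showing through behind the transparent surface.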

Pros and Cons of Z-Buffer Algorithm:
- Pros
- Can handle intersecting geometry
- Cons
- Needs sorting for transparency
Especially when you have one semi-transparent object in front of another semi-transparent object, you need to render them from back to front; otherwise you won’t get the correct result. That’s the limitation of z-buffer rasterization, but that’s what we have on the GPU, so that’s what we’re going to have to deal with. We have to be careful when we are rendering semi-transparent objects.
This is not a fundamental limitation of rasterization broadly. There are rasterization algorithms that can handle cases like this, for example A-Buffer rasterization, another rasterization algorithm that can handle these types of situations. But it requires more dynamic memory allocation; that’s why our GPUs are not designed to use A-Buffer rasterization.
A-Buffer Rasterization
Pros and Cons of A-Buffer rasterization:
- Pros
- Can handle intersecting geometry
- Supports order-independent transparency
- Cons
- Requires more (dynamic) memory
Here is how it works: with A-Buffer rasterization, I don’t just store a single depth and color value for each pixel; I store a list of fragments of these triangles, containing everything that’s visible through that pixel, in this case the background, the blue triangle, and the red triangle. Together with their depth and RGBA values, I also have a coverage mask, so I know what part of the pixel each fragment is covering. By doing this I can figure out which triangle is in front of which other triangle regardless of the order they come in.
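The fragment-list idea can be sketched for a single pixel as follows. Fragments are submitted in any order and visibility is resolved afterward by sorting by depth; the field names and the coverage encoding are illustrative, and for brevity the resolve step ignores the coverage mask.

```python
# A-buffer sketch: one pixel accumulating a list of fragments, resolved later.

pixel_fragments = []

def submit(depth, rgba, coverage):
    # Store everything visible through the pixel; coverage is a bitmask of
    # which sub-pixel samples the fragment covers (stored but unused below).
    pixel_fragments.append({"depth": depth, "rgba": rgba, "coverage": coverage})

def resolve():
    # Sort back-to-front, then blend each fragment over the running result.
    result = (0.0, 0.0, 0.0)  # background
    for f in sorted(pixel_fragments, key=lambda f: f["depth"], reverse=True):
        r, g, b, a = f["rgba"]
        result = tuple(a * c + (1 - a) * d for c, d in zip((r, g, b), result))
    return result

# Fragments may arrive in any order; here the front one is submitted first:
submit(2.0, (1.0, 0.0, 0.0, 0.5), coverage=0b1111)  # semi-transparent red
submit(5.0, (0.0, 0.0, 1.0, 1.0), coverage=0b1111)  # opaque blue behind it
# resolve() blends red over blue correctly despite the submission order
```

Because sorting happens per pixel at resolve time, transparency becomes order-independent; the price is that the fragment lists grow dynamically, which is the memory cost mentioned below.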

Some forms of A-Buffer rasterization can be implemented on modern GPUs, on top of their z-buffer rasterization, but that requires custom code: you have to handle these linked lists of fragments and all that on your own. The GPU itself is not designed to do A-Buffer rasterization by default.
REYES
Another rasterization-based algorithm, very widely used.

Ray Tracing
Rasterization vs Ray Tracing
```
Rasterization:
    for each primitive
        find pixel samples

Ray Tracing:
    for each pixel sample
        find the closest primitive
```
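The two loop orders can be sketched side by side over toy "primitives" that each know which pixel samples they cover and their depth; all names and the scene itself are illustrative.

```python
# Same visibility problem, opposite loop nesting.

primitives = [
    {"id": "A", "pixels": {(0, 0), (1, 0)}, "depth": 3.0},
    {"id": "B", "pixels": {(1, 0), (1, 1)}, "depth": 1.0},
]
all_pixels = [(0, 0), (1, 0), (1, 1)]

def rasterize():
    # Outer loop over primitives; a depth test resolves visibility.
    image, depth = {}, {}
    for p in primitives:
        for px in p["pixels"]:
            if p["depth"] < depth.get(px, float("inf")):
                depth[px] = p["depth"]
                image[px] = p["id"]
    return image

def ray_trace():
    # Outer loop over pixels; each ray finds its closest primitive.
    image = {}
    for px in all_pixels:
        hits = [p for p in primitives if px in p["pixels"]]
        if hits:
            image[px] = min(hits, key=lambda p: p["depth"])["id"]
    return image

# rasterize() == ray_trace(): same answer for primary visibility,
# but only the ray-tracing form generalizes to arbitrary secondary rays.
```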


So with that ray, I can use exactly the same algorithm to solve more general problems than just the primary visibility problem that rasterization algorithms are able to handle.
So ray tracing opens up the possibility of different types of things that we can easily compute using secondary rays: reflections, refractions if you have a transparent object, shadows, and realistic illumination. Ray tracing can handle all of that, so using ray tracing you can generate really, really realistic images.
Now, if ray tracing is so good, why don’t we design our GPUs for ray tracing instead of rasterization? New GPUs do support ray tracing, but they are still designed for rasterization; they do a little bit of ray tracing on top. There’s a good reason for that: the “find pixel samples” operation is super fast, but finding the corresponding primitive for a ray is comparatively slow. Moreover, there are also memory access issues and complexity issues.
When you think about this, if you’re rendering a very, very large scene with lots and lots of triangles, you would expect ray tracing to eventually be faster than rasterization. In complexity theory, if you compare an algorithm with linear complexity to an algorithm with logarithmic complexity, you expect the logarithmic one to get faster at some point; there’s a constant factor, of course. You don’t know how large your scene must be for ray tracing to be faster than rasterization, but you would expect it to be faster at some point.
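The crossover argument can be made concrete with made-up constants: a linear-cost method eventually loses to a logarithmic one no matter how large the logarithmic method's constant factor is. The constants below are purely illustrative and say nothing about real renderers.

```python
# Toy crossover: c_raster * n (linear) vs c_ray * log2(n) (logarithmic).
import math

c_raster, c_ray = 1.0, 200.0  # made-up constant factors

def crossover():
    # Smallest n where the linear cost catches up to the logarithmic cost.
    n = 2
    while c_raster * n < c_ray * math.log2(n):
        n += 1
    return n

# With these constants the logarithmic method wins beyond roughly two
# thousand primitives; the exact point depends entirely on the constants.
```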

Nonetheless, with rasterization you can only handle primary visibility: take all of your triangles, find where they land on the screen, and that’s all you can compute; then you’re done. With ray tracing, we can do a lot more than that, and by doing a lot more we can generate really realistic-looking images.
Rasterization + Ray Tracing ?
Rasterization
for primary visibility
Ray Tracing
for secondary effects:
- reflections/refractions
- shadows
- realistic illumination
- …
This is really a great idea, and it’s a very popular approach today, because that’s what our GPUs are capable of doing. Our GPUs with ray tracing support are still designed for rasterization: they handle primary visibility using rasterization, because rasterization is really, really fast compared to ray tracing, and they do a little bit of ray tracing on top.
This is not what people use for offline rendering, though, offline rendering like visual effects for movies and things like that. When you are not doing interactive rendering, we don’t need rasterization at all, and the reason is that ray tracing can handle the primary visibility check as well, so why would I need rasterization? I don’t. Most of the rendering time in offline rendering is spent on the second part, those nice secondary effects.
[Note: Contents come from Professor Cem Yuksel’s Interactive Graphics course; check out his website for more details.]