I know enough about 3D graphics to understand the difference between polygons and voxels, but at the same time I know enough to realize I don't actually understand how voxels work, or how they actually get drawn in the end.

So a voxel is a point/cube in 3D space, akin to how a pixel is a point/square in 2D space. But how does the computer draw it on screen? How does the logic work to put the thing on screen?

For a polygon cube, it uses 8 points and 12 triangle polygons, passes the info to the GPU, and tells it to draw a triangle from A to B to C a bunch of times.
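In data terms, I imagine it's roughly something like this (made-up Python, just to show the idea of a vertex list plus a triangle list, not real GPU code):

```python
# A rough sketch of the data behind a polygon cube: 8 vertices and 12 triangles
# (two per face). The exact ordering/winding here is just for illustration.

CUBE_VERTICES = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),  # bottom four corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),  # top four corners
]

# Each triangle is three indices into CUBE_VERTICES.
CUBE_TRIANGLES = [
    (0, 1, 2), (0, 2, 3),  # bottom
    (4, 6, 5), (4, 7, 6),  # top
    (0, 4, 5), (0, 5, 1),  # front
    (1, 5, 6), (1, 6, 2),  # right
    (2, 6, 7), (2, 7, 3),  # back
    (3, 7, 4), (3, 4, 0),  # left
]

# "Draw a triangle from A to B to C a bunch of times."
for tri in CUBE_TRIANGLES:
    a, b, c = (CUBE_VERTICES[i] for i in tri)
    print(f"draw triangle {a} -> {b} -> {c}")
```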

How would this work for a cube represented by a voxel? Do the voxel coordinates represent the center of the cube? Let's keep it simple and say they represent a corner, and the size represents the length of each edge. How does the computer process this info to draw your one-voxel cube? Does it figure out where all the corners of the cube are and play connect the dots? Or am I fundamentally misunderstanding this? Because whenever I try to think about it, it always feels like at some point it must do something similar to how polygons are drawn.


@alyx from what I can ascertain, they are rendered entirely differently from polygons, so perhaps looking at it through that lens is a mistake. There may very well be an entirely different algorithm that decides where they are placed in 3D space. As I understand it, polygons have the illusion of depth whereas volumetric pixels have actual depth. I don't know exactly how they are rendered, but it seems that they require the CPU to do so rather than the GPU, so my immediate thought is that they require a higher degree of accuracy to render properly.

They're cool though, insanely good for physics simulations (like Teardown) and not needing to be textured. Instead of tricking the player into thinking they're looking at depth, it is what it says on the can: actual depth.

Then you get into weird fucking stuff like Dreams which is neither voxels nor polygons but something called "signed distance fields" which I am entirely in the dark on.

@beardalaxy
>As I understand it, polygons have the illusion of depth whereas volumetric pixels have actual depth.
That doesn't really make sense. Maybe you're thinking about volume: how a polygon model of a cube is empty on the inside, while a voxel cube would have volume inside it.

Something that I'm considering now is that maybe they're using mathematical formulas for objects, like spheres, to draw the voxels. If you just applied the formula, you could get a far more precise and round sphere than any reasonable polygon model. But what usually confuses me is the case of "retro" games with cubic voxels, because as far as I can tell, you're back to defining the cube as a series of triangles. And considering voxels are presented as a way to get rid of polygons, it confuses me when I realize you're probably going back to polygons at some point in the render process.
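To sketch what I mean by a formula (made-up Python, purely illustrative, not how any actual engine does it), you could fill a grid of cells straight from the sphere equation:

```python
# A rough sketch of the "formula" idea: rasterize a sphere into a voxel grid by
# testing each cell's center against x^2 + y^2 + z^2 <= r^2. Purely illustrative.

GRID = 32          # grid is GRID x GRID x GRID cells
RADIUS = 14.0      # sphere radius, in cells
CENTER = (GRID / 2, GRID / 2, GRID / 2)

voxels = set()     # set of (x, y, z) cells that are "solid"
for x in range(GRID):
    for y in range(GRID):
        for z in range(GRID):
            dx = x + 0.5 - CENTER[0]
            dy = y + 0.5 - CENTER[1]
            dz = z + 0.5 - CENTER[2]
            if dx * dx + dy * dy + dz * dz <= RADIUS * RADIUS:
                voxels.add((x, y, z))

print(len(voxels), "solid voxels")  # roughly (4/3) * pi * r^3 of them, interior included
```

That gives you a solid blob of cells, interior and all, but it still doesn't tell me how those cells end up as pixels on screen.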

@alyx i guess it depends on the engine, some use hybrids. a lot probably use voxels for things like physics calculations (particularly destruction) but then display them as polygons. there are engines that are capable of just rendering voxels, though.

another good reason to use them is for the whole not-needing-textures thing. since the voxels can be colored individually, there is no need for texturing. to get to that level of detail would require a lot more tris for polygon rendering.
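just to sketch the data side of that (made-up python, purely illustrative): each voxel carries its own color, so there's no texture image or uv mapping involved.

```python
# rough sketch of the "no textures" point: color is stored per voxel,
# so there's no separate texture image or uv coordinates.

voxels = {
    (0, 0, 0): (200, 60, 60),   # (x, y, z) -> (r, g, b)
    (1, 0, 0): (60, 200, 60),
    (0, 1, 0): (60, 60, 200),
}

for pos, rgb in voxels.items():
    print(f"voxel at {pos} is colored {rgb}")
```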

and then there's something like that doom voxel mod, where the models still look like sprites but they actually have depth to them, which is a pretty unique way of doing it. although i'm unsure if the doom engine is actually rendering those voxels or if the creator just made the models out of voxels and converted them to polygons afterward or something.

@beardalaxy
The Doom mod is what brought the question back to my attention. I just managed to find the video presenting the mod again, and the guy explains it quite well.

The mod has 2 possible render paths:
- hardware rendering, in which the voxel object gets turned into a normal polygon mesh, which the GPU then draws as usual.
- software rendering, in which the individual voxels are displayed as square sprites.

So I guess in a way this kinda answers my question. You either turn your voxel objects into normal polygon models during rendering and draw those with the GPU, or you have a software renderer draw them as a different shape, which might mean the individual voxels aren't even 3D if you go the sprite route.
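Just to make the first path concrete for myself, here's a rough sketch (made-up Python, not the mod's actual code) of turning a set of solid voxel cells into triangles, keeping only the faces that touch empty space:

```python
# A rough sketch of the voxels -> polygon mesh path: for every solid cell, emit
# two triangles per cube face, but only where the face borders empty space, so
# interior faces never reach the GPU. Real engines add greedy meshing, chunking,
# etc.; this is just the basic idea.

# each face direction maps to the 4 corner offsets of that face of a unit cube
FACES = {
    (+1, 0, 0): [(1, 0, 0), (1, 1, 0), (1, 1, 1), (1, 0, 1)],
    (-1, 0, 0): [(0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 0)],
    (0, +1, 0): [(0, 1, 0), (0, 1, 1), (1, 1, 1), (1, 1, 0)],
    (0, -1, 0): [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)],
    (0, 0, +1): [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)],
    (0, 0, -1): [(0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 0, 0)],
}

def voxels_to_triangles(solid):
    """solid: set of (x, y, z) cells. Returns a flat list of triangles."""
    triangles = []
    for (x, y, z) in solid:
        for (dx, dy, dz), corners in FACES.items():
            if (x + dx, y + dy, z + dz) in solid:
                continue  # neighbor is solid, so this face is hidden
            a, b, c, d = [(x + cx, y + cy, z + cz) for (cx, cy, cz) in corners]
            triangles.append((a, b, c))  # split the square face into
            triangles.append((a, c, d))  # two triangles for the GPU
    return triangles

# two voxels side by side: 12 faces total, minus the 2 touching faces
tris = voxels_to_triangles({(0, 0, 0), (1, 0, 0)})
print(len(tris), "triangles")  # 20 triangles (10 visible faces * 2)
```

The software path would then be the opposite: skip the mesh entirely, project each visible voxel's center to the screen, and draw a little square sprite there.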

@beardalaxy
The whole voxel thing has fascinated me for a while, but I never fully grasped the rendering part. Everything else makes perfect sense to me.

I used to wonder how a 1998 shooter called Delta Force somehow managed to have a much more detailed landscape than even newer games. Then I found out it had a hybrid engine, using both voxels and polygons. Ever since, I've been dumbfounded that people didn't invest more in this technology.

@beardalaxy
It's hard to see from a single picture, but it's unbelievable how many bumps and dips there are in the ground, and how smooth it looks overall. To this day I still don't think I've seen anything quite like it without the use of tricks like bump maps.
