Each voxel can be thought of as a small piece of volume sitting at a point in three-dimensional space. A voxel can be drawn on a flat screen by placing a pixel at the position where that voxel projects. Or the other way around: take a pixel on the screen and find the voxel in space that corresponds to it.
This reverse approach is called ray casting. A ray shoots out into 3D space and flies until it hits a voxel. In practice, as many rays are “thrown” into the scene as are needed to cover all the required points on the screen.
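The idea can be shown in a few lines. Below is a minimal sketch in pure Python (all names like `cast_ray` and the grid layout are illustrative assumptions, not any engine's actual API): a ray marches through a 3D grid of booleans in small steps until it enters a filled voxel.

```python
# Minimal ray-casting sketch: step along the ray and stop at the
# first solid voxel. Grid is indexed as grid[x][y][z].

def cast_ray(grid, origin, direction, max_steps=256, step_size=0.05):
    """Return the integer (x, y, z) of the first filled voxel hit, or None."""
    x, y, z = origin
    dx, dy, dz = direction
    for _ in range(max_steps):
        ix, iy, iz = int(x), int(y), int(z)
        if 0 <= ix < len(grid) and 0 <= iy < len(grid[0]) and 0 <= iz < len(grid[0][0]):
            if grid[ix][iy][iz]:          # the ray has entered a solid voxel
                return (ix, iy, iz)
        x += dx * step_size
        y += dy * step_size
        z += dz * step_size
    return None                            # nothing hit within max_steps

# A 4x4x4 grid with a single solid voxel at (2, 0, 0).
grid = [[[False] * 4 for _ in range(4)] for _ in range(4)]
grid[2][0][0] = True

print(cast_ray(grid, origin=(0.0, 0.0, 0.0), direction=(1.0, 0.0, 0.0)))  # -> (2, 0, 0)
```

A real renderer would fire one such ray per screen column (or per pixel) and use a smarter stepping scheme than fixed increments, but the stop-at-first-voxel logic is the same.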
This technique was first used in Wolfenstein 3D, where the rooms were built entirely from voxel-like cells. Rendering was fast because a single ray was enough to draw a whole column of pixels on the screen.
The result was, in essence, still two-dimensional, which is why this kind of 3D graphics is sometimes called 2.5D (the third dimension is not quite real).
Today Wolfenstein is not usually called a voxel game, but it was this game that gave impetus to the voxel engines of the nineties.
At first, voxels were only used to build locations. Hardware resources were too scarce to store information about every cell of space, so developers instead recorded the height of the voxel column at each point of a flat map (also known as a height map).
Since all the voxel information lived in height maps, games could not show overhangs such as rocks hanging over the player's head. But, God, how detailed the locations turned out!
End of voxels
Ray casting was not the only voxel rendering technology in the nineties. There were others, each with its own strengths: destructible environments, support for vehicle and character models, and so on. It was something incredible! But, ironically, it was precisely this diversity that ultimately led to the decline of the technology.
Around 2000, the era of graphics cards, or GPUs, began. These dedicated chips did an excellent job of processing 3D polygons, and did it very fast, but that was all they could do. Unfortunately, the various voxel rendering algorithms (including ray casting) were left out.
Voxel engines stayed on the CPU, but that had its own problems: the processor was already busy with important things like physics, gameplay, and game AI.
Graphics cards were created precisely to move rendering onto a separate chip: rendering got dramatically faster, and the processor freed up resources for other tasks. Voxel engines couldn't keep up with polygonal graphics. And so they died.
Ten years passed, and suddenly voxels were back. Help came from an unexpected quarter: a game appeared that took a completely new approach to voxels. A voxel is a cube, right? And cubes are something the video card can process without any trouble.
A pixel is the smallest element of a two-dimensional space that has been discretely divided into many equal parts.
Each pixel is addressed by a vector of two integers, X and Y. That is what makes pixel space discrete, whereas in vector graphics coordinates are defined by real numbers.
Accordingly, a voxel is the smallest element of a three-dimensional discrete space in which all elements have the same size.
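The definitions above boil down to one operation: mapping a continuous point to the integer cell (pixel in 2D, voxel in 3D) that contains it. A tiny sketch, with `point_to_voxel` and the cell size being illustrative assumptions:

```python
import math

# A continuous point is mapped to the discrete cell containing it by
# dividing by the cell size and taking the floor of each coordinate.

def point_to_voxel(point, cell_size=1.0):
    return tuple(math.floor(c / cell_size) for c in point)

print(point_to_voxel((2.7, 0.1, 5.9)))             # a voxel in 3D -> (2, 0, 5)
print(point_to_voxel((2.7, 0.1), cell_size=0.5))   # a pixel on a finer 2D grid
```

The same function works for 2D and 3D because the only difference between a pixel and a voxel, in this sense, is the number of coordinates.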
Pure 2D graphics
In the old days, to display a 2D sprite on the screen, you copied bits directly from the memory holding the sprite's colors into the memory holding the colors shown on screen.
This technique is called bit blit, or bit BLT (bit block transfer). Almost no one renders two-dimensional graphics this way anymore.
The PICO-8 virtual console is one of the few modern blitting engines, but in the past, 2D graphics could not be rendered any other way.
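In spirit, a blit is just a rectangular memory copy. Here is a minimal sketch in pure Python (the function name and the tiny "framebuffer" are made up for illustration; a real blitter copies raw bytes, not Python lists):

```python
# Copy a rectangular block of sprite "colors" into a framebuffer
# at position (dst_x, dst_y) -- the essence of bit BLT.

def blit(framebuffer, sprite, dst_x, dst_y):
    for row, line in enumerate(sprite):
        framebuffer[dst_y + row][dst_x:dst_x + len(line)] = line

screen = [[0] * 8 for _ in range(4)]    # an 8x4 "screen" filled with color 0
sprite = [[1, 2],
          [3, 4]]

blit(screen, sprite, dst_x=3, dst_y=1)
print(screen[1])   # -> [0, 0, 0, 1, 2, 0, 0, 0]
print(screen[2])   # -> [0, 0, 0, 3, 4, 0, 0, 0]
```

Real implementations add clipping and a transparent color key, but the core operation is exactly this row-by-row block copy.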
Textures in 2D/3D graphics
Now most graphics engines work with vertices and polygons, because video cards are designed specifically for them. Under these conditions, to display an image on a flat screen it has to be mapped onto a polygon as a texture.
Textures are 2D bitmaps placed on a 3D polygon.
Now back to 2D. If we stretch a texture over a flat rectangle, we get modern 2D graphics. On modern hardware, each 2D image (in this context usually called a sprite) is drawn on a rectangle made of two triangles; such a pair of triangles is called a quad. The quad is rendered with the sprite stretched over it, and the image lands in place.
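The quad setup above is just data. Here is a sketch of what such a quad looks like before it is handed to the GPU (the layout and names are illustrative, not any particular API):

```python
# A sprite quad: four corner vertices, each pairing a screen position (x, y)
# with a texture coordinate (u, v). UV (0, 0) is one corner of the texture,
# (1, 1) the opposite corner.
vertices = [
    (0.0, 0.0, 0.0, 0.0),   # bottom-left
    (1.0, 0.0, 1.0, 0.0),   # bottom-right
    (1.0, 1.0, 1.0, 1.0),   # top-right
    (0.0, 1.0, 0.0, 1.0),   # top-left
]

# Two triangles sharing the diagonal -- together they cover the whole quad.
triangles = [(0, 1, 2), (0, 2, 3)]

for tri in triangles:
    print([vertices[i][:2] for i in tri])   # the screen positions of each triangle
```

The GPU rasterizes each triangle and interpolates the UV coordinates across it, which is how every pixel of the quad knows which texel of the sprite to sample.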
We already know that pixel textures can be applied to low-poly 3D models without problems, even on high-resolution screens. Think of Minecraft again: its low-poly cubes render just fine on 1920×1080 displays.
The same can be done with polygons on a plane. Take pixel art, attach it to a 2D quad, and render the result on a high-resolution monitor. Each pixel of the original image then colors several pixels on the display.
This is called pixel art with big pixels. Each pixel on the sprite increases in size and becomes a large square in the image.
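This big-pixel effect is nearest-neighbor upscaling, and it is short enough to sketch directly (pure Python, illustrative names; in practice the GPU does this for free with nearest-neighbor texture filtering):

```python
# Nearest-neighbor upscaling: every source pixel becomes an NxN square
# of identical display pixels.

def upscale(image, factor):
    return [
        [image[y // factor][x // factor]
         for x in range(len(image[0]) * factor)]
        for y in range(len(image) * factor)
    ]

sprite = [[1, 2],
          [3, 4]]

for row in upscale(sprite, 2):
    print(row)
# Each original pixel now covers a 2x2 block:
# [1, 1, 2, 2]
# [1, 1, 2, 2]
# [3, 3, 4, 4]
# [3, 3, 4, 4]
```

Integer division by `factor` maps each display pixel back to exactly one source pixel, which is what keeps the squares crisp instead of blurred.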