How video game graphics work

Digital game graphics are all about pixels: how you store them, how you process them, and how you display them. More pixels per inch means more detail, but the more pixels you have, the more hardware you need to manage them.

The word “pixel” originated as an abbreviation of “picture element”, a term coined by computer researchers in the 1960s. A pixel is the smallest addressable element of a digital image, regardless of resolution. On modern displays, pixels are usually rendered as square blocks, but not always; it depends on the nature and aspect ratio of the display device.

In the abstract, most video game graphics work by storing a grid of pixels (known as a bitmap) in a region of video memory called the framebuffer. A special display circuit then reads that memory and converts it into an image on the screen.
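To make the idea concrete, here’s a minimal sketch in C of what a framebuffer amounts to in software: a two-dimensional array of pixel values that the display circuitry later scans out row by row. The dimensions and the one-byte-per-pixel palette format are illustrative assumptions, not any particular console’s layout.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative dimensions and format; real hardware varies widely. */
#define FB_WIDTH  320
#define FB_HEIGHT 240

/* The framebuffer: one byte per pixel, each value an index into a color palette. */
static uint8_t framebuffer[FB_HEIGHT][FB_WIDTH];

/* Fill the whole screen with one palette color. */
void clear_screen(uint8_t color)
{
    memset(framebuffer, color, sizeof framebuffer);
}

/* Set a single pixel. The display circuit would later read this memory
   row by row, converting each stored value into color on the screen. */
void plot_pixel(int x, int y, uint8_t color)
{
    if (x >= 0 && x < FB_WIDTH && y >= 0 && y < FB_HEIGHT)
        framebuffer[y][x] = color;
}
```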

The amount of detail (resolution) and the number of colors that you can store in this image are directly related to how much video memory is available in your computer or game console.
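The relationship is straightforward multiplication: width times height times bits per pixel gives the number of bits a single frame occupies. A quick sketch, with example resolutions and color depths chosen purely for illustration:

```c
#include <stdio.h>

/* Bytes of video memory needed for one frame:
   width x height x bits-per-pixel, divided by 8 bits per byte. */
unsigned long framebuffer_bytes(unsigned long w, unsigned long h, unsigned long bpp)
{
    return w * h * bpp / 8;
}

int main(void)
{
    printf("320x240,   8-bit color: %lu bytes\n", framebuffer_bytes(320, 240, 8));     /* 76,800    */
    printf("640x480,  16-bit color: %lu bytes\n", framebuffer_bytes(640, 480, 16));    /* 614,400   */
    printf("1920x1080, 32-bit color: %lu bytes\n", framebuffer_bytes(1920, 1080, 32)); /* 8,294,400 */
    return 0;
}
```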

Some console and arcade games did not use framebuffers at all. The Atari 2600 console, released in 1977, kept costs down by using special logic to generate the video signal on the fly as the TV’s scan line moved down the screen. “We were trying to be cheap, but that put the vertical [resolution] in the hands of programmers who were a lot smarter than the hardware designers intended,” Decuir says of the 2600.

In the case of pre-framebuffer games, graphical detail was limited by the cost of ancillary circuitry (as in early Atari arcade games with discrete logic) or by the size of the program code (as in the Atari 2600).

Exponential memory and resolution changes

Improvement in the technical capabilities of computers and game consoles has been exponential over the past 50 years, which means the cost of digital memory and processing power has fallen at a rate that defies intuition.

This is because improved chip fabrication technologies have let manufacturers cram exponentially more transistors into a given area of silicon, enabling vastly increased memory capacity, processor speed, and graphics chip complexity.

The falling cost of transistors affected every electronic component that used them, including RAM chips. In the early days of computerized game consoles, in 1976, digital memory was very expensive. The Fairchild Channel F used just 2 kilobytes of RAM to store its screen bitmap: 128×64 pixels (102×58 visible), with one of four colors per pixel. RAM chips of similar capacity to the four used in the Channel F retailed for about $80 at the time, roughly $373 adjusted for inflation.
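Those figures line up neatly: 128×64 pixels at 2 bits per pixel (enough to pick one of four colors) comes to exactly 2 kilobytes, as this quick check shows:

```c
#include <stdio.h>

int main(void)
{
    /* Fairchild Channel F screen bitmap: 128 x 64 pixels,
       2 bits per pixel (one of four colors). */
    unsigned bits  = 128 * 64 * 2;   /* 16,384 bits         */
    unsigned bytes = bits / 8;       /* 2,048 bytes = 2 KB  */

    printf("%u bits = %u bytes\n", bits, bytes);
    return 0;
}
```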

Fast forward to 2021: the Nintendo Switch includes 4 GB of RAM that is shared between working memory and video memory. Let’s say a game uses 2 GB (about 2,000,000 kilobytes) of it as video memory. At 1976 prices, that much RAM would have cost about $80 million, or over $373 million adjusted for inflation. Madness, right? That is the counterintuitive nature of exponential change.
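For the curious, here is the back-of-the-envelope math behind those figures, assuming the roughly $40 per kilobyte implied by the Channel F-era RAM pricing above:

```c
#include <stdio.h>

int main(void)
{
    /* 1976 pricing implied by the Channel F: about $80 for 2 KB of RAM. */
    double dollars_per_kb_1976     = 80.0 / 2.0;    /* $40 per KB             */
    double dollars_per_kb_adjusted = 373.0 / 2.0;   /* ~$186.50 per KB today  */

    double video_ram_kb = 2000000.0;                /* ~2 GB, as in the text  */

    printf("At 1976 prices:         $%.0f million\n",
           dollars_per_kb_1976 * video_ram_kb / 1e6);       /* ~$80 million  */
    printf("Adjusted for inflation: $%.0f million\n",
           dollars_per_kb_adjusted * video_ram_kb / 1e6);    /* ~$373 million */
    return 0;
}
```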

As the price of memory fell after 1976, console manufacturers were able to include more video memory in their consoles, allowing for much higher-resolution images. And as resolutions increased, individual pixels became smaller and harder to see.

Even if high-definition graphics could have been stored in memory in the 1980s, it would not have been possible to move those images out of memory and draw them on the screen 30 or 60 times per second. “Check out the wonderful Pixar animated short The Adventures of André and Wally B,” says Golson. “In 1984, it took a $15 million Cray supercomputer to make this movie.”

Low TV resolution limited detail

Of course, for a console to display a 4K picture the way today’s high-end consoles can, you need a display capable of showing it, and that simply didn’t exist in the 1970s and ’80s.

Before the HDTV era, most video game consoles relied on a relatively old-fashioned display technology developed in the 1950s, long before anyone expected to play high-definition video games at home. These TVs were designed to receive over-the-air broadcasts through an antenna connected to the back of the set.

At best, an analog NTSC television signal can carry about 486 visible interlaced lines, each roughly 640 pixels wide (although the exact figures are implementation-dependent due to the analog nature of the standard).

But game console developers discovered early on that they could save memory by using only one of NTSC’s two interlaced fields, producing a very stable image 240 pixels tall, now called “240p”. To maintain the 4:3 aspect ratio, they limited the horizontal resolution to around 320 pixels, although the exact number varied considerably between consoles.
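The 320-pixel figure falls out of the aspect ratio: with roughly square pixels, a 4:3 picture that is 240 lines tall should be about 240 × 4/3 = 320 pixels wide. A tiny sketch of that arithmetic:

```c
#include <stdio.h>

int main(void)
{
    /* Horizontal resolution implied by a 4:3 aspect ratio and
       roughly square pixels at a given vertical resolution. */
    int vertical   = 240;                 /* one NTSC field: "240p"   */
    int horizontal = vertical * 4 / 3;    /* 240 * 4 / 3 = 320 pixels */

    printf("%dx%d\n", horizontal, vertical);
    return 0;
}
```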

Some early arcade games, such as Nintendo’s Popeye (1982), took advantage of the much higher resolution (512×448) made possible by running arcade monitors in a non-standard interlaced video mode, but those games could not be reproduced at that resolution on home game consoles.

In addition, displays vary in sharpness and precision, which can exaggerate the pixelated look of some older games on today’s screens. What looks square and blocky on a modern LCD monitor was often softened and smoothed when displayed on a vintage CRT TV.