From the moment we turn on the computer and the Windows logo appears until we launch a game full of lights, shadows and textures, everything passes through the same component: the graphics card. It can be integrated into the processor or come as a dedicated card, and its mission is to transform data into images with fluidity, precision and, in recent years, a touch of artificial intelligence.
In this article we review, with a magnifying glass, the evolution of graphics cards "from VGA to GPU": the shift from early monochrome adapters to real-time ray tracing, and how all of this has shaped the experience in Windows and in video games. We'll cover history, key technologies, APIs, manufacturers, buses, memory, purchasing advice, and even how to check your graphics card in Windows in two clicks with dxdiag.
What is a graphics card and how does it work with the CPU?
A graphics card (or GPU in the strict sense) is a processor specialized in floating-point operations, designed to run thousands of calculations in parallel that shape pixels. The integrated version lives inside the CPU (iGPU/APU), while the dedicated one plugs into the motherboard via PCI Express and has its own VRAM, power delivery and cooling.
The typical flow on a Windows PC and in games is: the CPU prepares geometry (vertices), draw commands and physics; the GPU assembles the scene (vertex transforms and clipping) and then executes the pixel/fragment shaders that provide color, materials, effects and post-processing. The signal is then output via VGA, DVI, HDMI, USB-C or DisplayPort to the monitor, which displays it at a specific refresh rate (50/60/120/144 Hz...).
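To make that division of labor tangible, here is a minimal, self-contained sketch of the same three stages: a "CPU side" that prepares a triangle and issues a draw, a vertex step that maps it to screen space, and a per-pixel step that plays the role of a fragment shader. It is not any real graphics API, just an illustration of the flow.

```cpp
// Minimal software sketch of the CPU -> GPU pipeline described above.
// Illustrative only: real GPUs run these stages in parallel via Direct3D/Vulkan.
#include <array>
#include <iostream>

struct Vec2 { float x, y; };

// Edge function: positive if point p lies on the inner side of edge a->b.
static float edge(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

int main() {
    constexpr int W = 40, H = 20;

    // "CPU stage": the application prepares geometry (one triangle, in
    // normalized coordinates) and issues a draw call.
    std::array<Vec2, 3> tri = {Vec2{0.1f, 0.1f}, Vec2{0.9f, 0.3f}, Vec2{0.5f, 0.9f}};

    // "Vertex stage": transform normalized coordinates into screen space.
    std::array<Vec2, 3> screen;
    for (int i = 0; i < 3; ++i)
        screen[i] = {tri[i].x * W, tri[i].y * H};

    // "Pixel/fragment stage": decide coverage per pixel and produce a "color".
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            Vec2 p{x + 0.5f, y + 0.5f};
            bool inside = edge(screen[0], screen[1], p) >= 0 &&
                          edge(screen[1], screen[2], p) >= 0 &&
                          edge(screen[2], screen[0], p) >= 0;
            std::cout << (inside ? '#' : '.');   // the "framebuffer", printed as text
        }
        std::cout << '\n';
    }
}
```

Compiled with any C++ compiler (for example, g++ triangle.cpp), it prints a filled triangle of '#' characters; in a real game, that innermost per-pixel loop is what thousands of shader cores execute in parallel.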
Dedicated GPUs are usually far more powerful than integrated ones, so for editing, gaming or AI it pays today to prioritize models with fast VRAM and high bandwidth. In gaming laptops, Max-Q chips optimize power and temperature to approach desktop performance at fewer watts.

From MDA and CGA to HGC, EGA, VGA and SVGA: the foundations
The starting point on the PC was IBM's adapters in the early 80s. MDA (Monochrome Display Adapter) displayed only alphanumeric text (80x25) with 4 KB of memory on monochrome monitors. The controller read ASCII values, the character generator composed the raster pattern for each character, and the monitor refreshed it at about 50 Hz.
In 1981 the first color graphics arrived with CGA (Color Graphics Adapter), which inaugurated the world of RGB on PCs: up to 16 colors (8 with two intensities) and resolutions such as 320x200 (4 colors) or 640x200 (2 colors). It wasn't perfect, but it put color on the home-computing map.
In parallel, 1982 brought the Hercules Graphics Card (HGC): monochrome, yes, but capable of 720x348 and with 64 KB of memory. It allowed sharp text (9x14 character matrix) and a graphics mode that was remarkable for its time.
The next step was IBM's EGA (Enhanced Graphics Adapter), compatible with MDA and CGA, with 256 KB of memory and 16 colors at 640x350 chosen from a palette of 64. In addition, it handled screen redraws more smoothly, reducing the annoying flicker typical of CGA.
In 1987 the industry embraced VGA (Video Graphics Array): 640×480 in graphics mode, 720×400 in text, 256 colors chosen from a palette of 262,144, and the big difference: an analog signal to the monitor. That's why VGA cards incorporated the famous RAMDAC (a digital-to-analog converter with its own RAM), which operated at up to 450 MHz in later models. Successive extensions (SVGA) added 800×600 and 1024×768 modes and 50/60/70 Hz refresh rates.
From 2D to 3D: buses, chips and the first big revolution
The 90s brought two revolutions: the buses and the jump to 3D. The VESA Local Bus standard gave way to PCI in 1993, with more compact cards from brands such as Matrox, Creative or 3dfx (Voodoo). Shortly after came AGP (x2, x4, x8) to accelerate texture traffic, with theoretical peaks of up to 2.1 GB/s, a prelude to today's PCI Express x16.
On the chip side, manufacturers such as S3 (Trio, ViRGE), Rendition, Matrox, 3dfx, NEC/PowerVR, ATI and a then-fledgling NVIDIA (RIVA TNT/TNT2) emerged. The first 3D APIs on PCs were established with OpenGL (from Silicon Graphics), Glide (proprietary to 3dfx) and Direct3D within Microsoft DirectX, which on Windows would end up dominating PC gaming.
The real turning point was the GeForce 256 (1999), the first chip with hardware T&L (Transform & Lighting), unifying polygonal 3D acceleration and offloading geometric calculations from the CPU. From then on, ATI would name its family "Radeon", giving rise to the modern rivalry.
Unified shaders, hot clocks, and the jump to DirectX 11
NVIDIA experimented with so-called "hot clocks", running the shaders at higher frequencies than the rest of the GPU (e.g. 600 MHz for the GPU and 1,500 MHz for the shaders on the 8800 GT). Meanwhile, 1 GB of VRAM became common and bandwidth was boosted to avoid bottlenecks.
The GeForce GTX 400/500 and Radeon HD 5000/6000 era brought DirectX 11, higher shader counts, more bandwidth and editions that for the first time reached 3 GB of VRAM (in NVIDIA's high-end range, although to a limited extent). That was the rule: add compute units and memory to increase raw power.
GCN, asynchronous compute, and the end of hot clocks
AMD responded with GCN 1.0 (Radeon HD 7950/7970), an architecture ahead of its time that favored DirectX 12 and asynchronous compute. It also standardized more generous VRAM (3 GB versus 2 GB on NVIDIA equivalents), a decision that would be noticeable in heavier games.
NVIDIA, with Kepler (GTX 600/700), said goodbye to hot clocks, tripled the shader count between generations (GTX 580 to GTX 680) and gained momentum in DX11, although its early DX12 support was lukewarm. Still, with the GTX 780 Ti (2013) it nearly doubled shaders again and managed to run 4K games on Windows PCs with surprising ease.
The big leap came with Maxwell (GTX 900): much more efficient and performant per watt; a GTX 970 with 1,664 shaders surpassed the GTX 780 Ti with 2,880, in addition to adding VRAM (4 GB) and getting along better with DX12. It was one of the most beloved graphics cards for its balance and longevity.
With Pascal (GTX 10), NVIDIA fully embraced DX12 and Vulkan; a GTX 1070 (1,920 shaders, 8 GB) outperformed the GTX 980 Ti. The GTX 1080 Ti became a legend, with 4K performance that still holds up in classic rasterization.
Turing, RT Cores, and Tensor: Ray Tracing and AI in Windows and Gaming
The next big milestone came with Turing (RTX 20), which added two specialized blocks: RT cores (ray tracing) and Tensor cores (AI and inference). From then on, a GPU stopped being "just shaders + textures + raster" and could handle effects that were previously prohibitive in performance terms, integrating with DirectX Raytracing (DXR) in Windows.
To offset the impact of ray tracing, NVIDIA launched DLSS, an AI upscaler that stumbled in its first version but succeeded with DLSS 2 thanks to its temporal reconstruction of the image. AMD responded with FSR/FSR 2, which doesn't use AI but offers a good cross-platform balance.
Ampere and Ada (RTX 30/40), RDNA2 and RDNA3: efficiency, caches and chiplets
With Ampere (RTX 30), NVIDIA massively increased shaders per SM; an RTX 3060 (3,584 shaders) nearly doubled the 2060's core count, improved the RT/Tensor cores, and boosted clock speeds. Then Ada Lovelace (RTX 40) made a huge leap in efficiency: a 110 W RTX 4060 outperforms a 170 W 3060 by around 20% in raster.
The RTX 4090 is about 40% ahead of the 3090 in raster and is the only card that smoothly runs Cyberpunk 2077's Overdrive mode (path tracing). In addition, DLSS 3 introduces frame generation, alleviating CPU bottlenecks on Windows by "interleaving" frames synthesized on the GPU.
On the red side, RDNA 2 doubled shaders compared to RDNA, standardized 16 GB in the high end, boosted frequencies and added a large block of L3 cache ("Infinity Cache") to reduce dependence on external bandwidth. It also brought AMD's first ray tracing units for DXR gaming.
With RDNA 3, AMD refined efficiency, added second-generation RT cores, incorporated AI accelerators and, most importantly, introduced an innovative multi-chiplet design: it kept a monolithic die for the GPU and moved the L3 cache out to separate chiplets. This reduces silicon area, improves yields and lowers costs.
Looking ahead, everything points to MCM (multi-chip module) GPUs with several interconnected GPU dies. It's just a matter of time: the area of a single die no longer scales well in cost and complexity when we're talking about tens of thousands of shaders.
Essential components: GPU, VRAM, RAMDAC, VRM and cooling
On the board, the GPU (the compute core with its L1/L2 caches) coexists with the VRAM (textures, framebuffers, intermediate buffers), the VRM (power phases with MOSFETs, chokes and capacitors) and the cooling system. Dedicated cards usually use 6+2 pin connectors because the PCIe slot only delivers up to 75 W (each 6-pin adds 75 W and each 8-pin 150 W).
The historical RAMDAC converted digital data into an analog signal for VGA/CRT monitors. Although everything is digital today (HDMI/DP), it is key to understanding the transition: its frequency determined image stability, and advanced models reached 450 MHz.
For cooling, blower designs (turbines that expel hot air out of the case) coexist with axial-flow designs (several fans pushing air through a finned heatsink). Blowers are compact but noisy and less efficient; axial coolers are the standard on custom models thanks to their better thermal performance.
Video memories: from EDO/SGRAM/VRAM/WRAM to GDDR6 and HBM2
Before the modern era, graphics cards used EDO RAM and SDRAM, then SGRAM (graphics-optimized SDRAM), VRAM (dual-ported, for reading and writing at the same time) and WRAM (faster than VRAM, with block-acceleration functions ideal for moving windows around in Windows). That marked the transition from 300–800 Mbps to much larger bandwidths.
Today GDDR6 and GDDR6X dominate: "DDR" memories with very high effective data rates (14–21 Gbps per pin) and 128- to 384-bit buses, achieving massive bandwidths. AMD has also used HBM2 (buses up to 2,048 bits with 3D stacking): fewer MHz but brutal width, useful in extreme bandwidth scenarios.
Classic memory-to-resolution ratio (2D era): with 512 KB, 1024x768 at 16 colors; with 1 MB, 1280x1024 at 16 colors or 1024x768 at 256; with 2 MB, 1280x1024 at 256 colors and 1024x768 at 65,536; with 4 MB, 16.7 million colors at 800×600 and above became common. Today, for modern games at 1080p–1440p, 4–8 GB is a reasonable minimum.
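Both relationships, framebuffer size in the 2D era and bandwidth in the GDDR6/HBM2 era, come down to simple arithmetic. The sketch below reproduces the kind of figures quoted above; the data rates and bus widths it uses are illustrative values, not the specifications of any particular card.

```cpp
// Rough arithmetic behind the VRAM figures quoted above (illustrative values).
#include <cstdio>

int main() {
    // 2D era: framebuffer bytes = width * height * bits-per-pixel / 8.
    double fb = 1024.0 * 768 * 8 / 8;                  // 256 colors = 8 bpp
    std::printf("1024x768 @ 256 colors  : %.0f KB (fits in 1 MB)\n", fb / 1024);

    // Modern era: bandwidth (GB/s) = data rate per pin (Gbps) * bus width (bits) / 8.
    double gddr6 = 16.0 * 256 / 8;                     // e.g. 16 Gbps on a 256-bit bus
    std::printf("16 Gbps GDDR6, 256-bit : %.0f GB/s\n", gddr6);

    double hbm2 = 2.0 * 2048 / 8;                      // e.g. 2 Gbps on a 2048-bit HBM2 bus
    std::printf("2 Gbps HBM2, 2048-bit  : %.0f GB/s\n", hbm2);
}
```

The contrast is the point: HBM2 reaches similar throughput with far lower clocks because the bus is an order of magnitude wider.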
Video ports: VGA, DVI, HDMI, DisplayPort and USB‑C
The analog VGA signal is a thing of the past, but it is worth knowing the DVI variants: DVI‑D (digital only), DVI‑A (analog only) and DVI‑I (both). HDMI 2.1 handles up to 4K@120 and 8K@60; 2.0 stops at 4K@60 (8-bit). DisplayPort 1.4 enables 4K@120, and 8K@60 with DSC; DP is the preferred interface for high-refresh-rate monitors on PCs.
USB-C with DP Alt Mode/Thunderbolt 3 can output 4K@60 video and combine data and power. On modern Windows computers it's common to see DP and HDMI coexisting and, on laptops, USB‑C as a multipurpose output.
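As a rough sanity check on those port figures, the uncompressed pixel data rate of a mode is simply resolution × refresh rate × bits per pixel (blanking and protocol overhead ignored). The sketch below compares a few modes against approximate effective link rates; treat the numbers as illustrative, not as exact spec calculations.

```cpp
// Approximate uncompressed video data rates vs. link capacity (blanking and
// protocol overhead ignored, so real requirements are somewhat higher).
#include <cstdio>

// Raw pixel data in Gbit/s for a mode at 8 bits per color channel (24 bpp).
static double raw_gbps(double w, double h, double hz) {
    return w * h * hz * 24 / 1e9;
}

int main() {
    std::printf("4K@60  : %5.1f Gbps raw (within HDMI 2.0's ~14.4 Gbps effective)\n",
                raw_gbps(3840, 2160, 60));
    std::printf("4K@120 : %5.1f Gbps raw (beyond HDMI 2.0, within DP 1.4 / HDMI 2.1)\n",
                raw_gbps(3840, 2160, 120));
    std::printf("8K@60  : %5.1f Gbps raw (beyond DP 1.4 without DSC)\n",
                raw_gbps(7680, 4320, 60));
}
```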
3D APIs on Windows: OpenGL, Glide, DirectX and Vulkan
APIs are the "language" a game speaks to the GPU. OpenGL (industrial-grade and very capable) and Glide (a subset optimized for 3dfx) marked the 90s. Microsoft integrated the DirectX family (Direct3D) into Windows; it started off limping, but from DX8 onwards, and especially with DX11, it became the dominant standard on PC.
In addition there is Vulkan (Khronos), a low-level, cross-platform API with roots in AMD's Mantle. In practice, most major games on Windows use DirectX 11/12, with DXR for ray tracing; OpenGL and Vulkan live on in specific engines and ports.
Buses: PCI, AGP and PCI Express
The bus defines the data path between the GPU and the system. PCI was the bridge of the 90s. AGP increased bandwidth and allowed textures to be stored in system RAM (at the cost of latency). With PCIe x16 (3.0, 4.0 and 5.0), GPUs communicate with the CPU over 16 dedicated lanes. PCIe 3.0 x16 offers ~15.8 GB/s in each direction; 4.0 doubles that; 5.0 doubles it again. In today's games it rarely saturates.
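The ~15.8 GB/s figure for PCIe 3.0 x16 follows directly from the per-lane signaling rate and the 128b/130b line encoding used since PCIe 3.0; here is a quick sketch of that arithmetic (per direction, ignoring packet overhead).

```cpp
// Where the PCIe x16 bandwidth figures come from (per direction, packet
// overhead beyond line encoding ignored).
#include <cstdio>

static double x16_gbs(double gts, double enc) {
    // gts: transfers per second per lane (GT/s); enc: line-encoding efficiency.
    return gts * enc * 16 /* lanes */ / 8 /* bits per byte */;
}

int main() {
    std::printf("PCIe 3.0 x16: %.1f GB/s\n", x16_gbs(8.0,  128.0 / 130.0));
    std::printf("PCIe 4.0 x16: %.1f GB/s\n", x16_gbs(16.0, 128.0 / 130.0));
    std::printf("PCIe 5.0 x16: %.1f GB/s\n", x16_gbs(32.0, 128.0 / 130.0));
}
```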
Performance and metrics: FPS, TFLOPS, TMUs/ROPs and overclocking
FPS determines fluidity: the higher the FPS, the smoother the feel, capped by the monitor's refresh rate when vertical synchronization is on (V-Sync/G-Sync/FreeSync). To see your GPU's "ceiling", disable synchronization and look at frametimes in benchmarking tools.
TFLOPS measures floating-point operations per second, a reference for raw power but not a definitive one: architecture, caches, color compression, bandwidth and drivers all weigh heavily. TMUs (texture mapping/filtering) and ROPs (rasterization, blending, z-buffer, antialiasing) determine the real throughput of pixels and textures.
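For reference, the headline FP32 TFLOPS number is usually derived as shader count × 2 (a fused multiply-add counts as two operations) × clock speed. A small sketch with illustrative figures, not official specifications:

```cpp
// How peak FP32 TFLOPS is typically derived (illustrative figures, not specs).
#include <cstdio>

static double tflops(double shaders, double boost_ghz) {
    // One fused multiply-add per shader per clock counts as 2 operations.
    return shaders * 2.0 * boost_ghz / 1000.0;
}

int main() {
    // Shader counts roughly in line with the cards mentioned earlier in the article.
    std::printf("3584 shaders @ ~1.8 GHz -> %.1f TFLOPS\n", tflops(3584, 1.8));
    std::printf("1920 shaders @ ~1.7 GHz -> %.1f TFLOPS\n", tflops(1920, 1.7));
}
```

The gap in TFLOPS rarely translates one-to-one into FPS, which is exactly why the paragraph above lists caches, compression, bandwidth and drivers as co-factors.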
GPU core overclocks are usually around +100–150 MHz, and on GDDR6 VRAM even +900–1,000 MHz effective, with appreciable FPS gains if the game is not CPU-limited. Popular tools: MSI Afterburner, EVGA Precision X1 or AMD Adrenalin (WattMan).
Choosing the right graphics card, matching it with the CPU, and avoiding bottlenecks
For office work and multimedia, a modern iGPU (integrated Intel UHD/Arc or AMD Radeon Vega on APUs) is sufficient; investing in a dedicated card is not worth it. For 1080p gaming, a mid-range GPU with 6–8 GB and a 6-core CPU offer a great price/performance balance.
At 1440p/4K, think high-end (more shaders, better RT and VRAM headroom). Remember: the CPU determines how much geometry/physics feeds the GPU; lowering the resolution relieves the GPU's load but barely reduces the CPU's. Settings that "hit" the CPU: object density, NPCs, simulation, physics. Those that "hit" the GPU: resolution, textures, AA, ambient occlusion, tessellation and, of course, ray tracing.
Form factors and cooling matter: measure your case and choose between two- or three-fan designs, or even AIO liquid cooling on extreme models. Board partners (ASUS, MSI, Gigabyte, etc.) often raise the factory clocks and fit more robust VRMs.
Gaming and Max-Q Laptops
On laptops, the GPU is soldered and optimized (Max-Q variants of the RTX/GTX series) with lower power consumption and slightly lower performance than the desktop version. They share GDDR6 VRAM, unified Windows drivers and support for technologies such as DLSS and RT, prioritizing contained temperatures and battery life, plus options such as a MUX switch to improve performance.
How to find out which graphics card you have in Windows (dxdiag)
If you want to identify your card in seconds from Windows, use the DirectX Diagnostic Tool (a programmatic alternative is sketched after the steps):
- Click the Start button.
- Open "Run" from the Start menu.
- Type dxdiag and click OK.
- When the utility opens, go to the Display tab.
- Check under "Device" for the name of your GPU and the available memory.
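If you prefer to query the same information programmatically, the DXGI API that ships with Windows can enumerate adapters. A minimal C++ sketch using only DXGI 1.1 calls (CreateDXGIFactory1, EnumAdapters1, GetDesc1), with error handling kept to the bare minimum:

```cpp
// Enumerate GPUs via DXGI: the same data dxdiag shows on the Display tab.
// Build (MSVC): cl /EHsc gpus.cpp dxgi.lib
#include <dxgi.h>
#include <cwchar>

int main() {
    IDXGIFactory1* factory = nullptr;
    if (FAILED(CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)&factory)))
        return 1;

    IDXGIAdapter1* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        // Description is a wide string; dedicated VRAM is reported in bytes.
        // Note: the list may also include the software "Basic Render Driver".
        wprintf(L"GPU %u: %ls (%llu MB dedicated VRAM)\n", i, desc.Description,
                (unsigned long long)(desc.DedicatedVideoMemory / (1024 * 1024)));
        adapter->Release();
    }
    factory->Release();
}
```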
Manufacturers and ecosystem: NVIDIA, AMD, Intel and assemblers
Today the market is led by NVIDIA (GeForce RTX) and AMD (Radeon RX). Intel is back in the game with dedicated Arc cards and integrates graphics into most of its CPUs. In 2006, AMD acquired ATI; since then it has merged CPUs and GPUs (APUs) and competes head-to-head in gaming and desktop PCs.
Board partners (ASUS, MSI, Gigabyte, etc.) buy GPUs and memory and design their own PCBs, VRMs and heatsinks. Some add RGB, dual BIOS, sensors and "OC" profiles. In 2004, NVIDIA reintroduced SLI (multi-GPU) to add performance, and there have been cloud solutions such as NVIDIA GRID that render graphics on remote servers, a forerunner of today's game streaming.
Looking at the entire journey, from the analog signal of a VGA card to path tracing and AI frame generation, it is clear that graphics cards have evolved from simple adapters into massively parallel processors. With a direct impact on Windows, engines and games, the future points to more specialization (RT/AI), chiplet designs and efficiency, with the promise of greater fidelity and higher FPS without increasing power consumption.


