GPU

Bartosz Roguski
Machine Learning Engineer
July 4, 2025
Glossary Category
AI

A GPU (Graphics Processing Unit) is a specialized processor designed to accelerate parallel computing tasks, particularly the matrix operations and mathematical calculations at the heart of artificial intelligence and machine learning workloads. Unlike CPUs, which excel at sequential processing, GPUs contain thousands of smaller cores that execute many operations simultaneously, making them well suited to training neural networks and deep learning models and to running AI inference.

Modern GPUs feature dedicated tensor cores optimized for AI computations, high-bandwidth memory architectures, and support for programming frameworks such as CUDA and OpenCL that let developers harness this massive parallelism. Leading GPU manufacturers include NVIDIA, AMD, and Intel, with NVIDIA's A100, H100, and RTX series dominating AI applications.

GPUs have become the cornerstone of modern AI infrastructure, powering everything from computer vision and natural language processing to generative AI models such as large language models and diffusion models.
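To make this concrete, here is a minimal sketch of the kind of batched matrix multiplication GPUs parallelize during neural-network training and inference. It assumes PyTorch is installed; the tensor sizes are illustrative, and the code falls back to the CPU when no GPU is visible.

```python
import torch  # assumed available; other tensor libraries expose similar APIs

# Use the GPU if one is visible, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A batch of independent matrix multiplications -- the core workload that
# a GPU spreads across its thousands of cores, while a CPU would work
# through it a few elements at a time.
a = torch.randn(8, 64, 64, device=device)
b = torch.randn(8, 64, 64, device=device)
c = torch.bmm(a, b)  # 8 independent 64x64 matmuls, computed in parallel

print(c.shape)   # torch.Size([8, 64, 64])
print(c.device)  # cuda:0 on a GPU machine, cpu otherwise
```

The same code runs unchanged on either device; moving tensors with `device=` (or `.to(device)`) is what shifts the computation onto the GPU.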