Intel and Alphabet are strengthening their collaboration in a critical yet often overlooked area of artificial intelligence: the underlying infrastructure that supports AI operations.
The two companies have recently signed a multi-year agreement to build next-generation AI and cloud systems using Intel's Xeon chips and custom infrastructure processing units. In essence, they aim to make AI workloads run more smoothly, quickly, and cost-effectively when deployed at scale.
The approach works as follows: instead of burdening the main central processing unit with every task, the infrastructure processing units take over background duties such as networking, storage, and security. This frees the CPU to concentrate on computationally intensive AI work. The result is improved performance and greater energy efficiency—both increasingly vital as AI workloads surge.
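As a rough analogy only (not Intel or Google code; all names here are illustrative), the division of labor described above can be sketched as a main thread that stays on compute-heavy work while background I/O-style duties are handed to a separate worker, playing the role of the IPU:

```python
from concurrent.futures import ThreadPoolExecutor

def background_io(task):
    # Stand-in for the networking/storage/security duties an IPU absorbs.
    return f"handled:{task}"

def heavy_compute(n):
    # Stand-in for the compute-intensive AI workload left to the CPU.
    return sum(i * i for i in range(n))

def run_workload(io_tasks, n):
    # The worker pool plays the "IPU" role; the main thread plays the CPU.
    with ThreadPoolExecutor(max_workers=2) as ipu:
        io_futures = [ipu.submit(background_io, t) for t in io_tasks]
        result = heavy_compute(n)  # CPU stays focused on the main task
        io_results = [f.result() for f in io_futures]
    return result, io_results

if __name__ == "__main__":
    compute, io = run_workload(["net", "storage", "security"], 1000)
    print(compute, io)
```

The point of the sketch is the separation itself: background duties no longer compete with the main workload for the same processor's attention.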