Intel, in a bid to help enterprises scale their AI infrastructure cost-effectively, has launched the Intel Xeon 6 with Performance Cores (P-cores) and the Gaudi 3 AI accelerator. According to Intel, the combination enables powerful AI systems with strong performance per watt at a lower total cost of ownership.
“With our launch of Xeon 6 with P-cores and Gaudi 3 AI accelerators, Intel is enabling an open ecosystem that allows our customers to implement all of their workloads with greater performance, efficiency and security,” said Justin Hotard, Intel executive vice president and general manager of the Data Center and Artificial Intelligence Group.
Intel Xeon 6 with P-cores and Gaudi 3 AI Accelerators: All You Need To Know
Intel says its Xeon 6 processor offers double the performance of its predecessor, with a higher core count, double the memory bandwidth and AI acceleration built into every core. Simply put, the processor is designed to meet the compute demands of AI across data center and cloud infrastructure, as sketched below.
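Where that built-in acceleration shows up for developers is in low-precision math on the CPU itself. The following sketch is illustrative and not part of Intel’s announcement: it runs bfloat16 inference with stock PyTorch on a CPU, the kind of workload that Xeon’s on-chip AI units (Intel AMX) are typically used for via PyTorch’s oneDNN backend; the model and tensor shapes are placeholders.

# Minimal sketch (illustrative, not from the article): bfloat16 inference on a
# Xeon CPU with PyTorch. On Xeon parts with built-in AI acceleration (Intel AMX),
# the oneDNN backend can dispatch these matmuls to the AMX units.
import torch
import torch.nn as nn

# Placeholder model and input; real workloads would load an actual network.
model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)).eval()
x = torch.randn(32, 1024)

with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    y = model(x)

print(y.shape)  # torch.Size([32, 1024])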
As for the Gaudi 3 AI accelerator, Intel says it has been tuned specifically for large generative AI models, with 64 tensor processor cores and eight matrix multiplication engines to accelerate neural network computations. The accelerator also carries 128 gigabytes (GB) of HBM2e memory for training and inference, and 24 ports of 200 Gigabit Ethernet (GbE) for scalable networking. A key example of Gaudi 3 adoption is Intel’s collaboration with IBM, which is deploying Gaudi 3 AI accelerators as a service on IBM Cloud to lower the cost of scaling AI.
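For a sense of how developers target the accelerator, the sketch below is illustrative rather than part of Intel’s announcement: it assumes the Intel Gaudi software stack (the habana_frameworks PyTorch bridge) is installed, and it moves a toy model onto the “hpu” device that Gaudi exposes to PyTorch for a single inference step.

# Minimal sketch, assuming the Intel Gaudi software stack is installed
# (habana_frameworks PyTorch bridge); not described in the article itself.
import torch
import torch.nn as nn
import habana_frameworks.torch.core as htcore  # Gaudi PyTorch bridge (assumed installed)

device = torch.device("hpu")                   # device type exposed by Gaudi accelerators
model = nn.Linear(1024, 1024).to(device).eval()  # placeholder model
x = torch.randn(8, 1024, device=device)

with torch.no_grad():
    y = model(x)
    htcore.mark_step()                         # in lazy mode, flushes the accumulated graph to the device

print(y.to("cpu").shape)                       # torch.Size([8, 1024])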
How Are AI Systems Benefiting?
Intel claims competitive price-performance, arguing that its x86-based systems make AI more accessible while keeping total cost of ownership (TCO) and performance per watt in check. Dell and Supermicro are already developing AI solutions built on Intel’s Gaudi 3 AI accelerators and Xeon 6 processors.