Cisco has entered an increasingly competitive race to dominate AI data centre interconnect technology, becoming the latest major player to unveil purpose-built routing hardware for connecting distributed AI workloads across multiple facilities.
The networking giant announced its 8223 routing system on October 8, introducing what it claims is the industry's first 51.2 terabit per second (Tb/s) fixed router designed specifically to link data centres running AI workloads.
At its core sits the new Silicon One P200 chip, representing Cisco's answer to a challenge that's increasingly constraining the AI industry: what happens when you run out of room to grow.
A three-way battle for scale-across supremacy?
For context, Cisco isn't alone in recognising this opportunity. Broadcom fired the first salvo in mid-August when its "Jericho 4" StrataDNX switch/router chips began sampling, also offering 51.2 Tb/s of aggregate bandwidth, backed by HBM memory for deep packet buffering to manage congestion.
Two weeks after Broadcom's announcement, Nvidia unveiled its Spectrum-XGS scale-across network, a notably cheeky name given that Broadcom's "Trident" and "Tomahawk" switch ASICs belong to the StrataXGS family.
Nvidia secured CoreWeave as its anchor customer but provided limited technical details about the Spectrum-XGS ASICs. Now Cisco is rolling out its own components for the scale-across networking market, setting up a three-way competition among networking heavyweights.
The problem: AI is too big for one building
To understand why multiple vendors are rushing into this space, consider the scale of modern AI infrastructure. Training large language models or running complex AI systems requires thousands of high-powered processors working in concert, generating enormous amounts of heat and consuming massive amounts of electricity.
Data centres are hitting hard limits: not just on available space, but on how much power they can supply and how much heat they can dissipate.
"AI compute is outgrowing the capacity of even the largest data centre, driving the need for reliable, secure connection of data centres hundreds of miles apart," said Martin Lund, Executive Vice President of Cisco's Common Hardware Group.
The industry has traditionally addressed capacity challenges through two approaches: scaling up (adding more capability to individual systems) or scaling out (connecting more systems within the same facility).
But both strategies are reaching their limits. Data centres are running out of physical space, power grids can't supply enough electricity, and cooling systems can't dissipate the heat fast enough.
This forces a third approach: "scale-across", distributing AI workloads across multiple data centres that might be in different cities or even different states. However, this creates a new problem: the connections between these facilities become critical bottlenecks.
Why traditional routers fall short
AI workloads behave differently from typical data centre traffic. Training runs generate massive, bursty traffic patterns: periods of intense data movement followed by relative quiet. If the network connecting data centres can't absorb these surges, everything slows down, wasting expensive computing resources and, critically, time and money.
Traditional routing equipment wasn't designed for this. Most routers prioritise either raw speed or sophisticated traffic management, but struggle to deliver both simultaneously while maintaining reasonable power consumption. For AI data centre interconnect applications, organisations need all three: speed, intelligent buffering, and efficiency.
Cisco's answer: The 8223 system
Cisco's 8223 system represents a departure from general-purpose routing equipment. Housed in a compact three-rack-unit chassis, it delivers 64 ports of 800-gigabit connectivity, currently the highest density available in a fixed routing system. More importantly, it can process over 20 billion packets per second and scale to three exabytes per second of interconnect bandwidth.
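As a quick back-of-the-envelope check (our arithmetic, not an additional Cisco figure), the headline bandwidth follows directly from the port count:

```python
# 64 ports at 800 Gb/s each gives the quoted 51.2 Tb/s aggregate.
ports = 64
port_speed_gbps = 800

total_tbps = ports * port_speed_gbps / 1000  # convert Gb/s to Tb/s
print(total_tbps)  # 51.2
```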
The system's distinguishing feature is deep buffering, enabled by the P200 chip. Think of buffers as temporary holding areas for data, like a reservoir that catches water during heavy rain. When AI training generates traffic surges, the 8223's buffers absorb the spike, preventing the network congestion that would otherwise leave expensive GPU clusters sitting idle waiting for data.
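The effect of buffer depth on a traffic spike can be sketched with a toy queue model (a deliberately simplified illustration, not a representation of Cisco's actual queueing or scheduling logic):

```python
from collections import deque

def run_burst(buffer_limit, burst, drain_rate, ticks):
    """Feed a one-off burst of packets into a bounded queue that
    drains at a fixed rate; return how many packets were dropped."""
    queue, dropped = deque(), 0
    for t in range(ticks):
        arrivals = burst if t == 0 else 0  # a single traffic spike
        for _ in range(arrivals):
            if len(queue) < buffer_limit:
                queue.append(t)
            else:
                dropped += 1  # buffer overflow: congestion loss
        for _ in range(min(drain_rate, len(queue))):
            queue.popleft()  # link forwards packets at a fixed rate
    return dropped

# A shallow buffer loses most of the spike; a deep one absorbs it.
print(run_burst(buffer_limit=100, burst=500, drain_rate=50, ticks=20))   # 400
print(run_burst(buffer_limit=1000, burst=500, drain_rate=50, ticks=20))  # 0
```

Real routers add far more machinery (priority queues, congestion signalling, pacing), but the basic trade-off is the same: without enough buffer depth, bursts turn into drops and retransmissions.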
Power efficiency is another critical advantage. As a 3RU system, the 8223 achieves what Cisco describes as "switch-like power efficiency" while maintaining routing capabilities, crucial when data centres are already straining power budgets.
The system also supports 800G coherent optics, enabling connections spanning up to 1,000 kilometres between facilities, essential for the geographic distribution of AI infrastructure.
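That distance has a physics cost worth keeping in mind. A rough estimate (assuming light travels at roughly 200,000 km/s in optical fibre, about two-thirds of c, and ignoring routing detours and equipment latency) puts the propagation delay alone at:

```python
# One-way propagation delay over 1,000 km of fibre.
distance_km = 1_000
speed_in_fibre_km_per_s = 200_000  # ~2/3 the speed of light in vacuum

one_way_ms = distance_km / speed_in_fibre_km_per_s * 1000
print(round(one_way_ms, 1))      # 5.0 ms one way
print(round(2 * one_way_ms, 1))  # 10.0 ms round trip
```

Several milliseconds per round trip is negligible for many workloads but significant for tightly synchronised training steps, which is part of why scale-across traffic management matters.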
Industry adoption and real-world applications
Major hyperscalers are already deploying the technology. Microsoft, an early Silicon One adopter, has found the architecture valuable across multiple use cases.
Dave Maltz, technical fellow and corporate vice president of Azure Networking at Microsoft, noted that "the common ASIC architecture has made it easier for us to expand from our initial use cases to multiple roles in DC, WAN, and AI/ML environments."
Alibaba Cloud plans to use the P200 as a foundation for expanding its eCore architecture. Dennis Cai, vice president and head of network infrastructure at Alibaba Cloud, stated the chip "will enable us to extend into the Core network, replacing traditional chassis-based routers with a cluster of P200-powered devices."
Lumen is also exploring how the technology fits into its network infrastructure plans. Dave Ward, chief technology officer and product officer at Lumen, said the company is "exploring how the new Cisco 8223 technology may fit into our plans to enhance network performance and roll out superior services to our customers."
Programmability: Future-proofing the investment
One often-overlooked aspect of AI data centre interconnect infrastructure is adaptability. AI networking requirements are evolving rapidly, with new protocols and standards emerging regularly.
Traditional hardware typically requires replacement or expensive upgrades to support new capabilities. The P200's programmability addresses this challenge.
Organisations can update the silicon to support emerging protocols without replacing hardware, an important consideration when individual routing systems represent significant capital investments and AI networking standards remain in flux.
Security considerations
Connecting data centres hundreds of miles apart introduces security challenges. The 8223 includes line-rate encryption using post-quantum-resilient algorithms, addressing concerns about future threats from quantum computing. Integration with Ciscoâs observability platforms provides detailed network monitoring to identify and resolve issues quickly.
Can Cisco compete?
With Broadcom and Nvidia already staking their claims in the scale-across networking market, Cisco faces established competition. However, the company brings advantages: a long-standing presence in enterprise and service provider networks, the mature Silicon One portfolio launched in 2019, and relationships with major hyperscalers already using its technology.
The 8223 ships initially with open-source SONiC support, with IOS XR planned for future availability. The P200 will be available across multiple platform types, including modular systems and the Nexus portfolio.
This flexibility in deployment options could prove decisive as organisations seek to avoid vendor lock-in while building out distributed AI infrastructure.
Whether Cisco's approach becomes the industry standard for AI data centre interconnect remains to be seen, but the fundamental problem all three vendors are addressing (efficiently connecting distributed AI infrastructure) will only grow more pressing as AI systems continue scaling beyond single-facility limits.
The real winner may ultimately be determined not by technical specifications alone, but by which vendor can deliver the most complete ecosystem of software, support, and integration capabilities around their silicon.