Wiwynn, a leading provider of cloud IT infrastructure, has unveiled its next-generation AI solutions at Computex 2025. Showcasing innovation in servers, cooling systems, and networking, Wiwynn aims to accelerate AI performance while meeting rising demands for efficiency in data centers. Let’s explore the highlights from its latest showcase in Taipei.
Wiwynn Unveils Powerful AI Servers and Platforms at Computex 2025
At Computex 2025, Wiwynn introduced several next-generation AI servers that push the boundaries of computing power. In collaboration with industry leaders, the company launched:
- NVIDIA GB300 NVL72 – A fully liquid-cooled rack-level AI system featuring 72 NVIDIA Blackwell Ultra GPUs. This platform delivers a 50X increase in AI inference output compared to previous NVIDIA Hopper systems.
- NVIDIA HGX B300 – Built on NVIDIA Blackwell Ultra technology, this 10U system offers up to 7X more AI compute than earlier generations. It seamlessly integrates with NVIDIA ConnectX-8 SuperNICs for enhanced performance.
- AMD Instinct™ MI350 Series GPUs – A new AI server built around GPUs based on the AMD CDNA 4 architecture, delivering up to a 35X boost in inference performance over the MI300X and paired with the high-speed AMD Pensando Pollara 400 AI NIC.
These platforms address the computational needs of both modern and legacy data centers in the evolving AI landscape.
Advanced Liquid Cooling and Thermal Solutions for Data Centers
Wiwynn also highlighted major advancements in direct liquid cooling (DLC) and thermal management:
- Optimized Double-Sided Cold Plates – Designed for future high-power chips, these solutions enhance cooling for demanding workloads.
- 3.5 kW ECAM Cold Plates – Developed with Fabric8Labs using Electrochemical Additive Manufacturing (ECAM), these plates raise cooling capacity to 3.5 kW for efficient operation under demanding heat loads.
- 200 kW AALC Sidecar – Built with Shinwa Controls, this technology leverages air-assisted liquid cooling for legacy data center upgrades.
- In-rack CDU (coolant distribution unit) – Co-developed with nVent, this cooling unit is fully integrated with the NVIDIA GB300 NVL72 rack, delivering robust performance and thermal efficiency.
These solutions are engineered to support the increasing thermal demands of AI-driven infrastructure.
Next-Generation Networking Technologies Accelerate AI Workloads
To complement their new servers and cooling systems, Wiwynn showcased advanced AI networking capabilities:
- NVIDIA Spectrum-X Ethernet – Built on Spectrum-4 switch silicon, this platform supports both the SONiC and NVIDIA Cumulus Linux network operating systems, enabling flexible, high-performance Ethernet connectivity for hyperscale AI clouds.
- Broadcom Tomahawk 5 Switch – Using Near-Packaged Optics (NPO), this switch combines copper and optical connectivity for higher data transmission speeds.
- 102T Broadcom Switch – A 4RU air-cooled solution with 64 ports running at 1.6 Tbps each, enabling high-speed data center connections (the aggregate capacity behind the 102T name is worked out below).
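For readers doing the math, the "102T" label follows directly from the per-port figures quoted above:

\[
64 \text{ ports} \times 1.6 \text{ Tbps per port} = 102.4 \text{ Tbps of aggregate switching capacity}
\]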
Additionally, Wiwynn presented NVIDIA RTX PRO 6000 servers, providing universal acceleration for various enterprise and scientific AI workloads.
By unveiling these comprehensive innovations, Wiwynn sets a new standard for AI infrastructure and data center technologies. Its focus on advanced computing, efficient cooling, and next-generation networking ensures that both modern and legacy data centers are ready for the rapidly evolving AI era. As demand for AI computing grows, Wiwynn’s solutions will play a key role in shaping the industry’s future.