SuperX Unveils Multi-Model AI Server with OpenAI's GPT-OSS LLMs

Techpark

SuperX AI Technology Limited (NASDAQ: SUPX) has announced the official launch of its All-in-One Multi-Model Servers (MMS), positioning the line as pivotal enterprise-grade AI infrastructure. The new offering is designed to let multiple AI models collaborate dynamically, emphasizing immediate usability, integrated multi-model capabilities, and deep integration into diverse application scenarios. The company aims to provide businesses with secure, efficient, and comprehensive AI solutions, available in customized specifications to suit organizations of all sizes. The release follows closely on the heels of SuperX's XN9160-B200 AI Server debut in late July, further expanding the company's portfolio of enterprise AI infrastructure products.

The All-in-One MMS arrives pre-configured with OpenAI's recently released, high-performance open-source large language models (LLMs), GPT-OSS-120B and GPT-OSS-20B. According to OpenAI's own benchmarks, the GPT-OSS-120B model not only matches but, in crucial tests such as Massive Multitask Language Understanding (MMLU) and the American Invitational Mathematics Examination (AIME), even surpasses several leading closed-source models. For SuperX's clientele, this translates directly into world-class AI inference and knowledge-processing capabilities at superior cost efficiency.
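Shipping both checkpoints lets a deployment trade cost against capability per request. As a minimal sketch of what that could look like, the snippet below routes short prompts to the 20B model and heavier ones to the 120B model, then builds an OpenAI-style chat request for a locally hosted endpoint. The endpoint URL, routing threshold, and word-count heuristic are illustrative assumptions, not documented SuperX interfaces.

```python
# Hypothetical local OpenAI-compatible endpoint on the appliance (assumption).
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"


def pick_model(prompt: str, heavy_threshold: int = 200) -> str:
    """Route long prompts to the 120B checkpoint, short ones to the 20B.

    The word-count threshold is a stand-in for a real complexity signal.
    """
    return "gpt-oss-120b" if len(prompt.split()) > heavy_threshold else "gpt-oss-20b"


def build_request(prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload for the chosen model."""
    return {
        "model": pick_model(prompt),
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }


req = build_request("Summarize last quarter's incident reports.")
print(req["model"])  # → gpt-oss-20b (short prompt routes to the smaller model)
```

In practice the routing signal would come from the agent layer rather than raw prompt length, but the shape of the request stays the same for either checkpoint.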

For enterprises, the introduction of this server represents a shift from complex, time-consuming AI deployments to a streamlined, “turnkey” experience. Businesses can bypass the months traditionally spent on intricate model integration, hardware adaptation, and performance tuning. SuperX’s solution provides an out-of-the-box, secure, and fully optimized generative AI platform, enabling immediate deployment of advanced applications and intelligent agents, and allowing for rapid responses to evolving market demands. This integrated approach, transforming the server from mere hardware into a complete enterprise-grade generative AI solution, is SuperX’s core differentiator, designed to accelerate business innovation and intelligent decision-making.

The MMS boasts a sophisticated multi-model fusion architecture, supporting the pre-configuration, invocation, acceleration, management, and iteration of a wide array of AI models. This includes inference models, general-purpose models, multi-modal models, speech synthesis and recognition models, embedding models, reranking models, and text-to-image models. This deep integration with end application scenarios unlocks significant functional advancements. For instance, the collaborative power of multiple intelligent agents can handle more complex business scenarios, such as precisely locating specific video clips from a text description by identifying people, actions, and objects within the footage.

A built-in portal assistant and knowledge base system further empower users with over 60 pre-configured scenario-based agents, ranging from official document drafting to legal consultation and policy comparison, fostering a seamless, intuitive business process. The system also features cloud-coordinated model caching, linking local and cloud-based model repositories to provide instant access to the world's latest AI models without delay. Crucially, the MMS offers an all-in-one integration, unifying the entire technology stack from the chip level to model service delivery, thereby abstracting complex technical architectures and allowing users to focus purely on application development.
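The multi-model fusion idea can be pictured as a registry that maps task types to pre-configured model slots, which agents then chain together. The sketch below, under stated assumptions, shows such a dispatch layer; the task labels and model names (`bge-m3`, `whisper-large`, etc.) are illustrative placeholders, not SuperX's actual catalogue.

```python
from dataclasses import dataclass


@dataclass
class ModelSlot:
    """One pre-configured model behind the dispatch layer."""
    name: str
    modality: str  # "text", "audio", "image", ...


# Hypothetical registry of pre-configured slots (names are assumptions).
REGISTRY = {
    "chat":           ModelSlot("gpt-oss-120b", "text"),
    "embedding":      ModelSlot("bge-m3", "text"),
    "rerank":         ModelSlot("bge-reranker", "text"),
    "speech_to_text": ModelSlot("whisper-large", "audio"),
    "text_to_image":  ModelSlot("sdxl", "image"),
}


def dispatch(task: str) -> ModelSlot:
    """Resolve a task label to its pre-configured model, failing loudly."""
    try:
        return REGISTRY[task]
    except KeyError:
        raise ValueError(f"no model pre-configured for task {task!r}") from None


# A video-search agent like the one described above could chain slots:
# transcribe the footage, embed the transcript, then rerank candidate clips.
pipeline = [dispatch(t).name for t in ("speech_to_text", "embedding", "rerank")]
print(pipeline)  # → ['whisper-large', 'bge-m3', 'bge-reranker']
```

Centralizing the task-to-model mapping is what lets the platform swap or update a model behind a slot without touching the agents that call it.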

Designed with real-world enterprise needs in mind, the MMS directly addresses common obstacles in AI adoption, including data privacy, deployment complexity, and operational scalability. For AI data security, the server incorporates NVIDIA Confidential Computing technology on the NVIDIA Blackwell platform, featuring a Trusted Execution Environment (TEE). This secure enclave protects AI intellectual property and enables confidential AI training, inference, and federated learning, all while maintaining high performance. Deployment is simplified and cost-efficient, with full-stack hardware-software integration allowing the server to be deployed in minutes, requiring minimal additional infrastructure or IT resources. While optimized for small and medium-sized enterprises, it is also scalable for large organizations through clustering, offering a high-performance alternative to traditional cloud-based Model-as-a-Service (MaaS) API offerings. Workflow efficiency is further enhanced by pre-configured templates and operational guides, enabling business users to rapidly build intelligent agents via simplified no-code or low-code interfaces for a diverse range of enterprise application scenarios.

Kenny Sng, CTO of SuperX, articulated the company’s vision, stating, “A single model cannot solve the problems of a complex world. Multi-model collaboration is a vital step in the evolution of AI towards Artificial General Intelligence (AGI) to serve people.” He emphasized SuperX’s commitment to building a collaborative ecosystem with enterprise partners and AI agent developers, pushing the boundaries of AI capabilities with this All-in-One MMS. The All-in-One MMS series is now available for order, with pricing options ranging from the AI Workstation Standard at $50,000 for individual enterprise use, up to the Cluster Edition AI Server starting at $4,000,000 for comprehensive application scenarios.