OpenAI and NVIDIA Launch Global Open-Weight Reasoning Models, Ushering in Scalable AI Era

August 6, 2025
3 min read
Author: Editorial Team

Two new open-weight AI reasoning models released by OpenAI bring cutting-edge AI development directly into the hands of developers, enthusiasts, enterprises, startups and governments everywhere — across every industry and at every scale.

NVIDIA’s collaboration with OpenAI on these open models — gpt-oss-120b and gpt-oss-20b — is a testament to the power of community-driven innovation and highlights NVIDIA’s foundational role in making AI accessible worldwide.

Anyone can use the models to develop breakthrough applications in generative, reasoning and physical AI, healthcare and manufacturing — or even unlock new industries as the next industrial revolution driven by AI continues to unfold.

OpenAI’s new flexible, open-weight text-reasoning large language models (LLMs) were trained on NVIDIA H100 GPUs and run inference best on the hundreds of millions of GPUs running the NVIDIA CUDA platform across the globe.

With software optimizations for the NVIDIA Blackwell platform, the models offer optimal inference on NVIDIA GB200 NVL72 systems, achieving 1.5 million tokens per second — driving massive efficiency for inference.
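
To put the 1.5-million-tokens-per-second figure in perspective, the short Python sketch below breaks it down per GPU. The 72-GPU count is the published GB200 NVL72 rack configuration; the even split across GPUs and the steady-state assumption are simplifications for illustration only.

```python
# Back-of-the-envelope view of the quoted GB200 NVL72 throughput.
# Assumes the published 72-GPU rack configuration and an even load split.
aggregate_tokens_per_sec = 1_500_000   # figure quoted for gpt-oss on GB200 NVL72
gpus_per_nvl72 = 72                    # Blackwell GPUs in one NVL72 rack

per_gpu = aggregate_tokens_per_sec / gpus_per_nvl72
print(f"~{per_gpu:,.0f} tokens/s per GPU")                 # ~20,833 tokens/s

# Sustained for a full day, a single rack would emit roughly:
tokens_per_day = aggregate_tokens_per_sec * 60 * 60 * 24
print(f"~{tokens_per_day / 1e9:.0f} billion tokens per day")  # ~130 billion
```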

“OpenAI showed the world what could be built on NVIDIA AI — and now they’re advancing innovation in open-source software. The gpt-oss models let developers everywhere build on that state-of-the-art open-source foundation, strengthening U.S. technology leadership in AI — all on the world’s largest AI compute infrastructure.”

– Jensen Huang, Founder and CEO, NVIDIA 

NVIDIA Blackwell Delivers Advanced Reasoning

As advanced reasoning models like gpt-oss generate exponentially more tokens, the demand on compute infrastructure increases dramatically. Meeting this demand calls for purpose-built AI factories powered by NVIDIA Blackwell, an architecture designed to deliver the scale, efficiency and return on investment required to run inference at the highest level.

NVIDIA Blackwell includes innovations such as NVFP4 4-bit precision, which enables ultra-efficient, high-accuracy inference while significantly reducing power and memory requirements. This makes it possible to deploy trillion-parameter LLMs in real time, which can unlock billions of dollars in value for organizations.
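
For a sense of what that precision drop means in memory terms, the sketch below estimates the weight-only footprint of a trillion-parameter model at 16-, 8- and 4-bit precision. It deliberately ignores KV cache, activations and quantization scale factors, so treat it as a rough illustration rather than a sizing guide.

```python
# Weight-only memory footprint of a trillion-parameter model at different precisions.
# Simplified illustration: ignores KV cache, activations and quantization scale factors.
PARAMS = 1_000_000_000_000  # 1 trillion parameters

def weight_memory_gb(bits_per_param: int) -> float:
    """Gigabytes needed just to store the weights at the given precision."""
    return PARAMS * bits_per_param / 8 / 1e9

for name, bits in [("FP16", 16), ("FP8", 8), ("NVFP4", 4)]:
    print(f"{name:>5}: {weight_memory_gb(bits):>7,.0f} GB")
# FP16: ~2,000 GB, FP8: ~1,000 GB, NVFP4: ~500 GB
```

Halving the bytes stored per weight also roughly halves the memory traffic needed per generated token, which is where much of the inference efficiency gain comes from.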

Open Development for Millions of AI Builders Worldwide

NVIDIA CUDA is the world’s most widely available computing infrastructure, letting users deploy and run AI models anywhere, from the powerful NVIDIA DGX Cloud platform to NVIDIA GeForce RTX- and NVIDIA RTX PRO-powered PCs and workstations.

There are over 450 million NVIDIA CUDA downloads to date, and starting today, the massive community of CUDA developers gains access to these latest models, optimized to run on the NVIDIA technology stack they already use.

Demonstrating their commitment to open-source software, OpenAI and NVIDIA have collaborated with top open framework providers to provide model optimizations for FlashInfer, Hugging Face, llama.cpp, Ollama and vLLM, in addition to NVIDIA TensorRT-LLM and other libraries, so developers can build with their framework of choice.
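
As one illustration of that framework choice, here is a minimal offline-inference sketch using vLLM. It assumes the smaller checkpoint is published as openai/gpt-oss-20b on Hugging Face and that a CUDA GPU with enough memory is available; the model ID and sampling settings are assumptions for illustration, not details confirmed in this article.

```python
# Minimal vLLM offline-inference sketch for the smaller gpt-oss model.
# Assumption: the checkpoint is available as "openai/gpt-oss-20b" on Hugging Face
# and a CUDA-capable GPU with sufficient memory is present.
from vllm import LLM, SamplingParams

llm = LLM(model="openai/gpt-oss-20b")  # downloads and loads the open-weight model
sampling = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(
    ["Summarize why open-weight reasoning models matter for developers."],
    sampling,
)
print(outputs[0].outputs[0].text)
```

On a local RTX workstation, the Ollama route would be a single command such as `ollama run gpt-oss:20b`, assuming that model tag is available in the Ollama library.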

A History of Collaboration, Building on Open Source

The model releases underscore how NVIDIA’s full-stack approach helps bring the world’s most ambitious AI projects to the broadest user base possible.

It’s a story that goes back to the earliest days of NVIDIA’s collaboration with OpenAI, which began in 2016 when Huang hand-delivered the first NVIDIA DGX-1 AI supercomputer to OpenAI’s headquarters in San Francisco.

Since then, the companies have been working together to push the boundaries of what’s possible with AI, providing the core technologies and expertise needed for massive-scale training runs.

And by optimizing OpenAI’s gpt-oss models for NVIDIA Blackwell and RTX GPUs, along with NVIDIA’s extensive software stack, NVIDIA is enabling faster, more cost-effective AI advancements for its 6.5 million developers across 250 countries using 900+ NVIDIA software development kits and AI models — and counting.
