For the week ending April 5, Chinese large language models swept the top six positions on OpenRouter, a platform that tracks real-world usage from millions of users. Alibaba's Qwen3.6 Plus (free version) led with 4.6 trillion tokens processed, while its preview version took third at 1.64 trillion. The new Qwen3.6-Plus even set a single-day record, passing 1.4 trillion tokens shortly after launch. Overall, global LLM usage reached 27 trillion tokens, up nearly 19 percent from the prior week. Chinese models handled 13 trillion of those, a 31 percent jump, versus just 3 trillion for US models, which barely grew. That makes five straight weeks in which Chinese models have outpaced US models in weekly token volume.
How did this happen? China's edge comes from embedding AI deeply into everyday apps like e-commerce and social media, plus aggressive free tiers that draw in large user bases. Experts point to strong computing capacity, cheap energy, and infrastructure tuned for high-volume workloads such as AI agents, which chain many model calls within a single task. Models like Qwen excel at math, coding, multilingual work, and long-context handling, often matching or beating Western rivals despite US chip export curbs.
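The agent point is worth unpacking, because it explains how token counts can outrun user counts: a chained agent typically re-sends its growing transcript at every step, so total tokens grow much faster than the number of steps. A minimal back-of-the-envelope sketch (the function, token sizes, and step counts here are illustrative assumptions, not OpenRouter billing figures):

```python
# Illustrative sketch: why multi-step agents inflate token volume.
# Assumption: each step re-submits the full transcript (original prompt
# plus all earlier replies) and generates a fixed-size reply.

def tokens_used(prompt_tokens: int, steps: int, reply_tokens: int = 200) -> int:
    """Rough total tokens consumed across a chained agent run."""
    total = 0
    transcript = prompt_tokens
    for _ in range(steps):
        total += transcript + reply_tokens  # input + output for this step
        transcript += reply_tokens          # the reply joins the context
    return total

# A single-shot chat vs. a 5-step agent chain on the same 1,000-token prompt:
print(tokens_used(1000, steps=1))  # 1200
print(tokens_used(1000, steps=5))  # 8000
```

Under these assumptions, five steps consume nearly seven times the tokens of one, not five times, which is why agentic workloads are a natural driver of the volume growth the rankings show.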
In the US-China AI race, this shows China's strength in open-source models and mass adoption. While US leaders like OpenAI's GPT series still dominate the proprietary high end, Chinese models surge in volume thanks to faster release cycles and ecosystem integration. It is not total dominance, but it is a sign of gaps closing through scale and affordability.