Researchers at Tsinghua University and Microsoft Research Asia trained a full AI model using only synthetic data. No real-world samples at all.
The entire dataset was artificially generated through a new pipeline called SynthSmith, and the system ran on Nvidia chips from start to finish. The team didn’t just pull off a novelty test. They built a working model with 7 billion parameters that beat much bigger models trained on human data.
Their paper, posted January 11 on arXiv, claims that the X-Coder they trained outperformed coding models with 14 billion parameters, even though it never saw real-world text.
“In-depth analysis reveals that scaling laws hold on our synthetic dataset,” the researchers wrote. The team included researchers from Tsinghua University, Microsoft Research Asia, and Wuhan University.
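For context, the “scaling laws” the researchers refer to are the familiar power-law fits between model size and loss. A standard Kaplan-style form is sketched below; this is the generic formula for illustration, not the specific fit reported in the paper, and N_c and alpha_N are fitted constants.

```latex
% Generic parameter-count scaling law (Kaplan et al. style), shown for context only.
% L = loss, N = parameter count; N_c and \alpha_N are empirically fitted constants.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}
```

Their claim is that loss on SynthSmith-generated data keeps falling along a curve of this shape as models scale, just as it does on human-written corpora.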
Researchers use Nvidia chips to skip real-world data entirely
The training setup leaned hard on Nvidia hardware. For supervised fine-tuning, they used 128 Nvidia H20 chips for 220 hours straight. After that, they switched to 32 H200 chips for another seven full days to handle the reinforcement learning phase. These weren’t random choices. The H20 is tuned for inference, and the H200 is built for high-end training. These are the most powerful chips available to Chinese firms right now, thanks to export control exemptions the Trump administration approved after Nvidia lobbied hard to make them available in China.
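A quick back-of-the-envelope from those figures puts the compute bill in perspective (illustrative arithmetic only, using the numbers above; the paper may account for compute differently):

```python
# GPU-hours implied by the training setup described in the article.
# Illustrative arithmetic only, based on the reported figures.

sft_gpu_hours = 128 * 220    # 128 H20 chips x 220 hours of supervised fine-tuning
rl_gpu_hours = 32 * 7 * 24   # 32 H200 chips x seven full days of reinforcement learning

print(f"SFT: {sft_gpu_hours:,} GPU-hours")  # SFT: 28,160 GPU-hours
print(f"RL:  {rl_gpu_hours:,} GPU-hours")   # RL:  5,376 GPU-hours
```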
The researchers said the pipeline itself wasn’t the problem when it came to scaling. It was all about compute power.
Wu Jie, the lead author and a master’s student at Tsinghua, said the real reason they hadn’t taken the pipeline to 100-billion- or trillion-parameter models was simply “computational constraints, rather than limitations of the pipeline itself.”
By releasing the code publicly, they hope others can build on the project without needing to pay massive training costs. The paper also points out a trend in AI.
Models are now expected to “think” over longer timeframes and handle complex reasoning, which has pushed the need for way more compute during inference, not just training.
Chinese team builds faster chip using old fabrication tech
Separately, Chinese scientists built a new chip called ACCEL (short for All-Analogue Chip Combining Electronics and Light) that computes with light particles rather than electricity. In lab tests it hit 4.6 PFLOPS.
That’s 3,000 times faster than Nvidia’s A100, and the Chinese chip used 4 million times less energy. This makes it one of the most efficient AI chips ever made for specific tasks like image recognition or autonomous driving.
It won’t replace CPUs or smartphone chips yet, but the team thinks it could work in wearables, electric vehicles, or smart factories.
The chip was fabricated by Semiconductor Manufacturing International Corporation using a 20-year-old process, avoiding the need for the advanced lithography machines that China still can’t access.
“Deployment of photonic computing systems used to be a challenge due to complicated structural design and vulnerability to noise and system errors,” Tsinghua said in an article.
The chip avoids this by combining photonic and analog electronics in a new framework. It doesn’t handle general computing tasks like file compression, but it’s great for AI vision and low-light sensing.
One crazy detail: the energy it takes to run modern chips for an hour could keep ACCEL running for 500 years. That low power demand also makes it easier to deal with heat issues, which limit how small chips can get.
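That figure lines up with the earlier “4 million times less energy” claim (simple arithmetic on the article’s own numbers):

```python
# Sanity-check: "one hour of a modern chip's energy runs ACCEL for 500 years"
# implies roughly the "4 million times less energy" figure quoted earlier.

hours_per_year = 365.25 * 24                # hours in an average year
accel_runtime_hours = 500 * hours_per_year  # 500 years expressed in hours

ratio = accel_runtime_hours / 1.0           # versus one hour on a modern chip
print(f"~{ratio:,.0f}x less energy")        # ~4,383,000x, about 4 million
```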
The chip’s functions include traffic identification, low-light imaging, and real-time vision, using ambient light directly in the sensing process. The team said it’s not a general-purpose chip, but it fills a very specific need.
Funding came from the National Key R&D Programme and the National Natural Science Foundation of China. A Beijing chip company called MakeSens, co-founded by one of the researchers, was involved and recently launched a low-power analog chip too.
Tsinghua’s Dai Qionghai, one of the project leads, said building a new computing architecture was just the first step.
“The more important challenge is to bring this new architecture to practical applications, solving major national and public needs, which is our responsibility.”
The team hasn’t said anything about when this chip might hit the market.