A Ten-Billion-Yuan Valuation Built on "Touch"? Meta, BYD, and JD.com Back This Embodied AI Company
(Source: Huashang Strategy)
Text | Huashang Strategy, Fang Ledi
The market’s nose is honest. In the white-hot embodied intelligence sector, a company called Pasini earned over 1 billion RMB in Series B funding, pushing its valuation past the ten-billion-yuan mark, not with flashy robot moves or backflips but by doing the unglamorous "hard work."
Behind this round of funding are heavyweight players: Huangpu River Capital leading the investment, along with Meta affiliates, BYD, JD.com, and other industry giants.
This isn’t just another capital-driven concept hype. The big players’ real money reflects collective recognition of an "invisible treasure": multimodal tactile data from the real world. What they are buying into is Pasini’s core competitive advantage, a closed data loop.
The future market for embodied intelligence is vast. McKinsey predicts it will exceed one trillion dollars by 2030, spanning manufacturing, logistics, and service industries. However, the entire industry suffers from a “data hunger.” Gartner data shows that up to 80% of AI projects are delayed due to data quality issues. While data is abundant, high-quality tactile data that truly enables robots to “get hands-on” is extremely scarce.
Imagine you pick up a paper cup, a sponge, and a piece of iron with your hand—you can effortlessly apply three completely different forces. This is instinctive for humans, a muscle memory built from countless life experiences.
But for robots, this simple action is as hard as scaling a mountain. They don’t know how much force to apply: too much crushes the paper cup; too little and the iron block slips. The subtle force and torque control behind this must be "fed" and trained with massive amounts of high-quality data.
Pasini stands out because what it provides is not merely ordinary data but a "right of definition": a standard for how robots perceive and interact with the physical world. That standard rests on three solid pillars.
【Pillar 1: From “seeing” to “touching,” filling the sensory gap for robots】
For a long time, mainstream AI models relied heavily on visual data—like giving robots eyes but forgetting to equip them with a sensing hand. This results in mediocre physical interaction capabilities. For example, Boston Dynamics’ Atlas robot excels in running, jumping, and balancing, but due to lack of tactile feedback, it struggles with fine manipulation tasks.
Pasini takes a different approach: it develops its own hardware to collect multimodal data beyond vision, such as tactile information, at massive scale, building a real-world database of billions of samples called OmniSharing DB. This globally unique resource redefines "full-modal data" to include not only visual information but also physical-contact data.
IDC estimates that such multidimensional data can improve model generalization by 20% to 30%. Figure AI, after raising roughly $680 million, likewise emphasized multimodal training and partnered with BMW on automotive assembly, validating the critical role of data dimensionality in application efficiency.
Pasini’s exploration is driving the industry from “vision-first” to “full-sensory” mode, unlocking huge potential for automation in manufacturing and service sectors.
【Pillar 2: Moving from manual workshops to “super factories” for industrial-scale data production】
Core tactile data cannot be collected at scale with a few robotic arms in a lab workshop. To solve this, Pasini invested heavily in "Super EID," a mega data-collection factory in Tianjin.
Here, thousands of grasping, assembling, and contact operations happen daily in the real physical world, with machines and humans working together, producing nearly 10 billion high-quality real-world data points annually. This approach upgrades data collection from scattered projects to standardized assembly line production, creating a strong scale barrier.
Traditional collection methods struggle with scale, quality, and consistency; even OpenAI’s robotics projects saw iteration cycles stretch out for lack of data. McKinsey’s analysis shows that large-scale data can cut AI deployment costs by 15% to 40%. Tesla’s Dojo system is an analogous example: by collecting massive driving data itself, Tesla built a commanding lead in autonomous driving and substantial market value. Pasini’s strategy is to replicate that playbook in embodied intelligence.
【Pillar 3: Based on reality, rejecting “living in simulators” AI】
Other data approaches have limitations, such as the “gap” between simulation and reality. Many models perform perfectly in simulators but fail in real environments. Google’s PaLM-E faced such generalization issues.
Pasini has always adhered to a “human-centered” real-world data collection approach, sourcing all data from real interaction tasks. This ensures “authenticity” of the data and establishes clear causal links between data and models, greatly improving the efficiency of model performance conversion. According to Forrester, real-world data can shorten robot training cycles by 30%. Amazon’s warehouse robots are a prime example—after early simulation failures, the team shifted decisively to real data, boosting sorting efficiency by 50%.
Today, Pasini’s data advantage is helping partners like BYD solve practical assembly pain points, driving profound changes in logistics and manufacturing value chains.
Holding the "right to define data," Pasini’s hardware, data, and models reinforce one another, keeping its self-developed OmniVTLA large model ahead in fine manipulation. Moreover, the "right to define data" has evolved into a "right to define the technology": Pasini is not just selling data, it is setting industry standards.
Business collaborations with giants like Meta and JD.com validate Pasini’s data value in key scenarios like automotive and logistics, further consolidating its leadership. The new round of funding will expand data factories and accelerate model iteration, making the momentum unstoppable.
Pasini’s story signals that embodied intelligence has entered a new era of “data assets + closed-loop efficiency.” Whoever masters the deepest, most authentic data will define the industry’s future. Pasini not only opens the door to higher valuation but also provides a new paradigm for all hard-tech companies to build deep moats.