Physical Intelligence is a technology company working in artificial intelligence (AI) and robotics that focuses on developing general-purpose physical intelligence. Founded in 2024 in San Francisco, California, the company aims to build foundation models and learning algorithms for robots and other physical devices. Its work is directed at enabling AI systems to function effectively not only in digital environments but also in the physical world. Physical Intelligence develops systems that allow robots to learn new tasks and adapt across diverse environments.
Foundation and General Information
Physical Intelligence (abbreviated as Pi or π) was established in 2024 in San Francisco, California, United States. Its founders are Karol Hausman, Sergey Levine, Chelsea Finn, Brian Ichter, Lachy Groom, Adnan Esmail, and Quan Vuong. The company has between 11 and 50 employees.
Shortly after its founding, the company raised $70 million in seed funding from investors such as Thrive Capital, Khosla Ventures, Lux Capital, OpenAI, and Sequoia Capital. In November 2024, it raised an additional $400 million in a Series A funding round that included participation from Jeff Bezos, reaching a valuation of $2 billion. As of 2025, the company's estimated valuation stands at $2.4 billion.
Objectives and Areas of Focus
Physical Intelligence seeks to integrate AI models into the physical world. By adapting methods used in language models to physical data, the company aims to equip robots with environmental perception, flexible motor control, and general task-solving capabilities. Its efforts focus on enabling robots to recognize new objects, learn unfamiliar tasks in real-world environments, and adapt to changing conditions.
Technological Developments
In October 2024, Physical Intelligence introduced its first general-purpose robotic foundation model, π0. Trained on multi-task, multi-platform data, π0 is a Vision-Language-Action (VLA) model capable of performing physical tasks such as folding laundry, cleaning tables, and picking up coffee beans.
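A VLA model of this kind can be viewed as a policy that maps camera images and a natural-language instruction to a short sequence (a "chunk") of continuous robot actions. The following minimal sketch illustrates only that interface; the class, method, and parameter names are hypothetical and do not reflect Physical Intelligence's published code.

```python
import numpy as np

class VLAPolicy:
    """Illustrative stand-in for a Vision-Language-Action policy such as pi-0.

    The real model is a large neural network; here the interface is mocked
    to show the shape of its inputs and outputs.
    """

    def __init__(self, action_dim: int = 7, chunk_len: int = 50):
        self.action_dim = action_dim  # e.g. joint targets plus a gripper command
        self.chunk_len = chunk_len    # number of actions predicted per inference call

    def infer(self, images: dict[str, np.ndarray], instruction: str) -> np.ndarray:
        # A trained VLA model would encode the images and the instruction and
        # decode a chunk of continuous actions; zeros stand in for that output here.
        return np.zeros((self.chunk_len, self.action_dim))

# Example call: one camera frame plus a task described in plain language.
policy = VLAPolicy()
obs = {"wrist_cam": np.zeros((224, 224, 3), dtype=np.uint8)}
actions = policy.infer(obs, "fold the shirt on the table")
print(actions.shape)  # (50, 7): a chunk of low-level robot actions
```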
In 2025, the π0.5 model was introduced, expanding on π0 by incorporating open-world generalization capabilities. It can carry out daily tasks such as cleaning and organizing in never-before-seen households. π0.5 was developed using multi-source co-training methods, enabling broader generalization at both physical and conceptual levels.
The company also developed a system called Hi Robot, which enables robots to follow open-ended, multi-step natural language commands and to incorporate user feedback as they work. Hi Robot integrates high-level and low-level planning through Vision-Language-Action models.
Products and Publications
Key projects and technologies developed by Physical Intelligence include:
- π0 (October 2024): The company's first general-purpose Vision-Language-Action policy.
- FAST (January 2025): A method that accelerates action tokenization for more efficient training (the general idea is sketched below).
- π0-FAST (February 2025): An autoregressive and accelerated version of the π0 model.
- π0.5 (April 2025): The second-generation VLA model with open-world generalization capabilities.
- Hi Robot (February 2025): A hierarchical control system capable of interpreting complex instructions and user feedback.
The π0 and π0-FAST models have been released as open-source, allowing researchers to explore and improve upon them.
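As published, FAST compresses continuous action chunks into short sequences of discrete tokens so that an autoregressive transformer can be trained on them efficiently. The sketch below shows the general idea of frequency-space action tokenization (a discrete cosine transform followed by coarse quantization); it is a simplified illustration and not the released FAST tokenizer, which additionally applies byte-pair encoding to shorten the token sequence.

```python
import numpy as np
from scipy.fft import dct, idct

def tokenize_actions(chunk: np.ndarray, scale: float = 10.0) -> np.ndarray:
    """Compress a (T, D) chunk of continuous actions into integer tokens.

    A DCT along the time axis concentrates smooth motion into a few
    coefficients, which are then rounded to integers.
    """
    coeffs = dct(chunk, axis=0, norm="ortho")          # time -> frequency
    return np.round(coeffs * scale).astype(np.int32)   # coarse quantization

def detokenize_actions(tokens: np.ndarray, scale: float = 10.0) -> np.ndarray:
    """Invert the quantization and the DCT to recover an action chunk."""
    return idct(tokens.astype(np.float64) / scale, axis=0, norm="ortho")

# Round trip on a smooth, synthetic action chunk (50 timesteps, 7 DoF).
t = np.linspace(0, 1, 50)[:, None]
chunk = np.sin(2 * np.pi * t * np.arange(1, 8))
tokens = tokenize_actions(chunk)
recovered = detokenize_actions(tokens)
print(np.abs(recovered - chunk).max())  # reconstruction error from quantization (small)
```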
[Image: autonomous laundry folding demonstration by Physical Intelligence]
Research and Development
Physical Intelligence’s R&D focuses on enabling AI-based robotic systems to operate flexibly and reliably in open-world environments. The company aims to develop general-purpose models capable of functioning in new environments and handling previously unseen tasks.
The π0 model was trained on datasets gathered from various robotic platforms and is capable of zero-shot task performance. Built on this foundation, π0.5 introduces open-world capabilities and has completed long-duration, multi-step tasks in entirely new households. Its training involved multi-source co-training with data collected from mobile robots, fixed robot arms, and bimanual systems, incorporating demonstrations, high-level task descriptions, object recognition tasks, and tasks issued through verbal commands.
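Co-training of this kind is commonly implemented as weighted sampling over heterogeneous datasets, so that every training batch mixes data from different robot platforms and annotation types. The snippet below is a generic sketch of such a mixture with made-up source names and weights; it is not Physical Intelligence's training code, and the real data composition is not reproduced here.

```python
import random

# Hypothetical data sources and mixture weights, for illustration only.
sources = {
    "mobile_manipulator_demos": 0.4,
    "static_arm_demos": 0.3,
    "web_vision_language_data": 0.2,
    "high_level_subtask_labels": 0.1,
}

def sample_batch(datasets: dict[str, list], weights: dict[str, float], batch_size: int):
    """Draw a batch whose examples are sampled across sources in proportion
    to the mixture weights, so no single platform dominates training."""
    names = list(weights)
    probs = [weights[n] for n in names]
    batch = []
    for _ in range(batch_size):
        source = random.choices(names, weights=probs, k=1)[0]
        batch.append((source, random.choice(datasets[source])))
    return batch

# Toy datasets standing in for real trajectories and annotations.
datasets = {name: [f"{name}_example_{i}" for i in range(100)] for name in sources}
print(sample_batch(datasets, sources, batch_size=4))
```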
In addition to physical manipulation, the company emphasizes conceptual understanding. Robots are trained to understand objects not only by physical traits but also by their functional context.
Vision-Language-Action models developed by Physical Intelligence integrate both low-level (motor control) and high-level (planning and comprehension) decision-making processes. Hi Robot, for example, can interpret open-ended commands, break them into sub-tasks, and convert them into appropriate motor actions. It can also process real-time user corrections and update its plans accordingly.
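Hierarchical operation of this kind is often structured as a high-level policy that rewrites an open-ended command, together with any user correction, into short language subtasks, which a low-level VLA policy then turns into motor commands. The sketch below outlines that control flow with placeholder components; it is an illustrative assumption, not the Hi Robot implementation.

```python
def plan_subtasks(command: str, feedback: str | None = None) -> list[str]:
    """Stand-in for a high-level vision-language planner: it would look at the
    scene and the user's command (plus any correction) and emit the next
    language subtasks. The decomposition is hard-coded here for illustration."""
    subtasks = ["pick up the plate", "place the plate in the sink"]
    if feedback:
        subtasks.insert(0, f"first, {feedback}")
    return subtasks

def execute_subtask(subtask: str) -> None:
    """Stand-in for the low-level VLA policy, which would convert the subtask
    and current camera images into a stream of motor actions."""
    print(f"executing: {subtask}")

def run(command: str, user_feedback: str | None = None) -> None:
    # High level: break the open-ended command into subtasks, folding in any
    # real-time correction from the user; low level: execute each subtask.
    for subtask in plan_subtasks(command, user_feedback):
        execute_subtask(subtask)

run("clean up the table", user_feedback="leave the mug where it is")
```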
The company also explores scaling data and designing model architectures to bring the success of large language models into the physical domain. Experiments with π0.5 using data from roughly 100 household environments achieved near-parity performance with models trained directly on target test conditions.
Physical Intelligence aims to expand data collection using low-cost, flexible robotic platforms. The company conducts large-scale experiments with robots such as ALOHA and DROID, collecting data through direct operation, natural language interactions, user feedback, and environmental observation.
Future goals include enabling robots to learn independently from their experiences, infer new information from feedback, and autonomously learn new tasks with minimal human supervision. Further objectives include the development of capabilities such as asking for help in ambiguous situations or dynamically altering strategy during tasks.
Investors
Backers of Physical Intelligence include prominent names in technology and AI such as Bond, Jeff Bezos, Khosla Ventures, Lux Capital, OpenAI, Redpoint Ventures, Sequoia Capital, and Thrive Capital. OpenAI participated in both the seed and Series A rounds.
Future Plans
The company plans to continue large-scale data collection and model development to enable robots to operate more autonomously, reliably, and flexibly in the physical world. It seeks to develop next-generation Vision-Language-Action models capable of executing increasingly complex tasks with less human intervention. Other research areas include enabling robots to learn from experience and further advancing their physical interaction skills.