Humanoid robot maker Figure has unveiled Helix, a Vision-Language-Action (VLA) AI model that brings practical household robots a step closer to reality. The system lets robots understand voice commands and manipulate items they have never encountered before, a significant milestone in robotics.
Helix combines a 7-billion-parameter model for high-level scene and language understanding with an 80-million-parameter model for precise movement control. This dual-model architecture lets robots both comprehend natural-language instructions and execute tasks with high accuracy.
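The split described above can be sketched in code. The following is a minimal, hypothetical illustration of the general pattern (a large slow model producing intent that a small fast model consumes each tick); the class names, interfaces, and the 1:10 update ratio are illustrative assumptions, not Figure's published design.

```python
# Hypothetical sketch of a dual-model control split: a large, slow
# "understanding" model refreshes a latent plan infrequently, while a
# small, fast policy acts on every tick. Names and rates are assumed.

class SlowVLM:
    """Stand-in for the large vision-language model: turns an
    instruction and an observation into a latent plan."""
    def plan(self, instruction, observation):
        return (instruction, observation)

class FastPolicy:
    """Stand-in for the small visuomotor policy: maps the latest
    latent plan plus a fresh observation to a low-level action."""
    def act(self, latent, observation):
        instruction, _ = latent
        return f"act:{instruction}:{observation}"

def control_loop(slow, fast, instruction, observations, slow_every=10):
    """Run the fast policy on every tick; refresh the slow plan only
    every `slow_every` ticks, so the large model guides the small one
    without blocking the control rate."""
    actions, latent = [], None
    for t, obs in enumerate(observations):
        if t % slow_every == 0:            # infrequent slow-model update
            latent = slow.plan(instruction, obs)
        actions.append(fast.act(latent, obs))  # every-tick fast action
    return actions

actions = control_loop(SlowVLM(), FastPolicy(), "put away the apple",
                       [f"frame{t}" for t in range(20)])
```

The key design point this sketch captures is asynchrony: the fast policy never waits on the slow model, so control stays responsive even though comprehension is expensive.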
In a recent demonstration, Figure showcased two robots working together to put away groceries they had never seen before, guided solely by voice commands. This highlights the system's ability to adapt to unfamiliar objects and scenarios, a critical requirement for household applications.
One of Helix's standout features is its efficiency. The model runs entirely on onboard GPUs and required roughly 500 hours of training data, a fraction of what previous approaches demanded, which makes it more practical to scale for widespread adoption.
The launch of Helix comes just weeks after Figure ended its partnership with OpenAI, signaling the company's confidence in its in-house technology. This move underscores Figure's commitment to pushing the boundaries of robotics innovation.
While robots have already proven their worth in industrial settings, the question is no longer if but when humanoid robots will become integral to household tasks. Helix's ability to handle the unpredictability of home environments brings us one step closer to this future.
With Helix, Figure is paving the way for robots that can reliably navigate the chaos of everyday life, making the dream of practical household robots a tangible reality.