Can robots learn from machine dreams?
MIT CSAIL researchers (left to right) Alan Yu, an undergraduate in electrical engineering and computer science (EECS); Phillip Isola, associate professor of EECS; and Ge Yang, a postdoctoral associate, developed an AI-powered simulator that generates unlimited, diverse, and realistic training data for robots. Robots trained in this virtual environment can seamlessly transfer their skills to the real world, performing at expert levels without additional fine-tuning. Photo credit: Michael Grimmett/MIT CSAIL
by Rachel Gordon | MIT CSAIL
Boston MA (SPX) Nov 20, 2024

For roboticists, one challenge towers above all others: generalization - the ability to create machines that can adapt to any environment or condition. Since the 1970s, the field has evolved from writing sophisticated programs to using deep learning, teaching robots to learn directly from human behavior. But a critical bottleneck remains: data quality. To improve, robots need to encounter scenarios that push the boundaries of their capabilities, operating at the edge of their mastery. This process traditionally requires human oversight, with operators carefully challenging robots to expand their abilities. As robots become more sophisticated, this hands-on approach hits a scaling problem: the demand for high-quality training data far outpaces humans' ability to provide it.

Now, a team of MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers has developed a novel approach to robot training that could significantly accelerate the deployment of adaptable, intelligent machines in real-world environments. The new system, called "LucidSim," uses recent advances in generative AI and physics simulators to create diverse and realistic virtual training environments, helping robots achieve expert-level performance in difficult tasks without any real-world data.

LucidSim combines physics simulation with generative AI models, addressing one of the most persistent challenges in robotics: transferring skills learned in simulation to the real world. "A fundamental challenge in robot learning has long been the 'sim-to-real gap' - the disparity between simulated training environments and the complex, unpredictable real world," says MIT CSAIL postdoc Ge Yang, a lead researcher on LucidSim. "Previous approaches often relied on depth sensors, which simplified the problem but missed crucial real-world complexities."

The multipronged system is a blend of different technologies. At its core, LucidSim uses large language models to generate various structured descriptions of environments. These descriptions are then transformed into images using generative models. To ensure that these images reflect real-world physics, an underlying physics simulator is used to guide the generation process.
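To make that pipeline concrete, here is a minimal Python sketch of the three stages. It is illustrative scaffolding only: the objects `llm`, `image_model`, and `simulator` and all of their method names are hypothetical placeholders, not LucidSim's actual API.

```python
# A minimal sketch of the three-stage pipeline described above. The objects
# and method names (describe_environment, render_geometry, generate) are
# hypothetical placeholders, not LucidSim's actual interfaces.

def build_training_images(llm, image_model, simulator, n_scenes):
    """Generate physics-grounded, realistic training images."""
    images = []
    for _ in range(n_scenes):
        # Stage 1: a large language model writes a structured description
        # of a training environment.
        prompt = llm.describe_environment("an obstacle course for a legged robot")
        # Stage 2: the physics simulator supplies the scene's geometry,
        # which anchors the generated image to real-world physics.
        geometry = simulator.render_geometry()  # e.g., depth map + semantic mask
        # Stage 3: a generative image model renders a realistic image
        # conditioned on both the description and the simulated geometry.
        images.append(image_model.generate(prompt=prompt, condition=geometry))
    return images
```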

The birth of an idea: From burritos to breakthroughs
The inspiration for LucidSim came from an unexpected place: a conversation outside Beantown Taqueria in Cambridge, Massachusetts. "We wanted to teach vision-equipped robots how to improve using human feedback. But then, we realized we didn't have a pure vision-based policy to begin with," says Alan Yu, an undergraduate student in electrical engineering and computer science (EECS) at MIT and co-lead author on LucidSim. "We kept talking about it as we walked down the street, and then we stopped outside the taqueria for about half an hour. That's where we had our moment."

To cook up their data, the team generated realistic images by extracting depth maps, which provide geometric information, and semantic masks, which label different parts of an image, from the simulated scene. They quickly realized, however, that with such tight control over the image's composition, the same text prompt would yield near-identical images. So they devised a way to source diverse text prompts from ChatGPT.
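A hedged sketch of that prompt-sourcing step, assuming an OpenAI-style chat client; the model name and the meta-prompt wording below are illustrative choices, not taken from the paper.

```python
# Illustrative only: ask a chat model for many varied scene descriptions up
# front, so that identical geometry conditioning still yields diverse images.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

def sample_scene_prompts(n_prompts: int) -> list[str]:
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; the team's model may differ
        messages=[{
            "role": "user",
            "content": (
                f"Write {n_prompts} short, varied descriptions of outdoor "
                "obstacle courses: change the weather, lighting, materials, "
                "and surroundings each time. One description per line."
            ),
        }],
    )
    return response.choices[0].message.content.splitlines()
```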

This approach, however, only produced single still images. To make short, coherent videos that serve as little "experiences" for the robot, the scientists layered some clever image warping into a second technique they created, called "Dreams In Motion." The system computes how each pixel moves between frames and uses that motion to warp a single generated image into a short, multi-frame video. Dreams In Motion does this by considering the 3D geometry of the scene and the relative changes in the robot's perspective.
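A simplified sketch of the geometric idea, assuming the simulator provides per-pixel depth and camera poses; the function name and signature are illustrative, and real systems also handle occlusions and disocclusion holes, which this omits.

```python
import cv2
import numpy as np

def synthesize_next_frame(image, depth_next, K, T_next_to_cur):
    """Backward-warp one generated frame to the next viewpoint.

    image:         (H, W, 3) generated frame at the current viewpoint
    depth_next:    (H, W) depth of the next viewpoint, from the simulator
    K:             (3, 3) camera intrinsics
    T_next_to_cur: (4, 4) pose of the next camera in the current camera frame
    """
    h, w = depth_next.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(np.float64)
    # Back-project every next-frame pixel into 3D using the simulated depth.
    pts = np.linalg.inv(K) @ (pix * depth_next.reshape(1, -1))
    # Express those 3D points in the current camera's coordinate frame.
    pts = T_next_to_cur[:3, :3] @ pts + T_next_to_cur[:3, 3:4]
    # Project them into the current image: this is the per-pixel motion.
    proj = K @ pts
    uv = (proj[:2] / proj[2:]).T.reshape(h, w, 2).astype(np.float32)
    # Sample the generated image at those coordinates to form the new frame.
    return cv2.remap(image, uv[..., 0], uv[..., 1], cv2.INTER_LINEAR)
```

Calling a routine like this repeatedly along the simulated camera trajectory strings single generated images into the short "experience" videos the robot trains on.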

"We outperform domain randomization, a method developed in 2017 that applies random colors and patterns to objects in the environment, which is still considered the go-to method these days," says Yu. "While this technique generates diverse data, it lacks realism. LucidSim addresses both diversity and realism problems. It's exciting that even without seeing the real world during training, the robot can recognize and navigate obstacles in real environments."

The team is particularly excited about the potential of applying LucidSim to domains outside quadruped locomotion and parkour, their main test bed. One example is mobile manipulation, where a mobile robot is tasked to handle objects in an open area; also, color perception is critical. "Today, these robots still learn from real-world demonstrations," says Yang. "Although collecting demonstrations is easy, scaling a real-world robot teleoperation setup to thousands of skills is challenging because a human has to physically set up each scene. We hope to make this easier, thus qualitatively more scalable, by moving data collection into a virtual environment."

Who's the real expert?
The team put LucidSim to the test against an alternative, where an expert teacher demonstrates the skill for the robot to learn from. The results were surprising: Robots trained by the expert struggled, succeeding only 15 percent of the time - and even quadrupling the amount of expert training data barely moved the needle. But when robots collected their own training data through LucidSim, the story changed dramatically. Just doubling the dataset size catapulted success rates to 88 percent. "And giving our robot more data monotonically improves its performance - eventually, the student becomes the expert," says Yang.

"One of the main challenges in sim-to-real transfer for robotics is achieving visual realism in simulated environments," says Stanford University assistant professor of electrical engineering Shuran Song, who wasn't involved in the research. "The LucidSim framework provides an elegant solution by using generative models to create diverse, highly realistic visual data for any simulation. This work could significantly accelerate the deployment of robots trained in virtual environments to real-world tasks."

From the streets of Cambridge to the cutting edge of robotics research, LucidSim is paving the way toward a new generation of intelligent, adaptable machines - ones that learn to navigate our complex world without ever setting foot in it.

Research Report: Learning Visual Parkour from Generated Images

Related Links
Computer Science and Artificial Intelligence Laboratory (CSAIL)
All about the robots on Earth and beyond!
