Google DeepMind, Google’s AI research lab, on Wednesday announced new AI models called Gemini Robotics designed to enable real-world machines to interact with objects, navigate environments, and more.
DeepMind published a series of demo videos showing robots equipped with Gemini Robotics folding paper, putting a pair of glasses into a case, and other tasks in response to voice commands. According to the lab, Gemini Robotics was trained to generalize behavior across a range of different robotics hardware, and to connect items robots can “see” with actions they might take.