Anthropic's Claude takes control of a robot dog


With robots popping up in warehouses, offices, and even people's homes, the idea of hooking large language models into complex physical systems sounds like the stuff of science fiction nightmares. So, naturally, Anthropic researchers were eager to see what would happen if Claude tried to take control of a robot (in this case, a robot dog).

In a new study, Anthropic researchers found that Claude can automate much of the work involved in programming a robot and getting it to perform physical tasks. On one level, their findings demonstrate the agentic coding abilities of modern AI models. On another, they point to how these systems may begin to extend into the physical realm as models master more aspects of coding and get better at interacting with software as well as physical objects.

Logan Graham, a member of Anthropic's red team, which studies models for potential risks, told WIRED: "We suspect that the next step for AI models is to start reaching into the world and influencing the world more broadly. That will really require models interfacing more with robots."

Courtesy of Anthropic


Anthropic was founded in 2021 by former OpenAI employees who believed that artificial intelligence could become problematic, even dangerous, as it advances. Today's models aren't smart enough to take full control of a robot, Graham says, but future models may be. Studying how people use LLMs to program robots could help the industry prepare for models that eventually "self-embody," he says, referring to the idea that artificial intelligence might one day take over physical systems.

It's not yet clear why an AI model would decide to take control of a robot, let alone do something malicious with it. But speculating about worst-case scenarios is part of the Anthropic brand, and it helps position the company as a key player in the responsible AI movement.

In the experiment, dubbed Project Fetch, Anthropic asked two groups of researchers with no prior robotics experience to take control of a quadrupedal robot dog, the Unitree Go2, and program it to perform specific activities. The teams were first given access to a controller, then asked to complete progressively more complex tasks. One group used Claude's coding model; the other wrote code without AI assistance. The group using Claude completed some tasks faster than the human-only group. For example, it got the robot to walk around and find a beach ball, something the human-only group could not figure out.

Anthropic also studied the collaboration dynamics within both teams by recording and analyzing their interactions. The group without access to Claude showed more negative emotions and confusion. This may be because Claude made it faster to connect to the robot and coded up a simpler user interface.

Courtesy of Anthropic

The Go2 robot used in Anthropic's experiments costs $16,900, relatively cheap by robot standards. It is commonly deployed in industries such as construction and manufacturing for remote inspections and security patrols. The robot can walk autonomously but generally relies on high-level software commands or a human operating a controller. The Go2 is made by Unitree, a company based in Hangzhou, China. According to a recent SemiAnalysis report, Unitree is currently the most popular robot maker on the market.

The large language models that power ChatGPT and other clever chatbots typically generate text or images in response to a prompt. More recently, these systems have become adept at generating code and operating software, turning them into agents rather than mere text generators.
