Our research on humanoid robot security made it to the U.S. Senate

Our security research on the Unitree humanoid robot was cited before the U.S. Senate. Here is what we found.

Last week, Damion Shelton, co-founder and Chairman of Agility Robotics, testified before the U.S. Senate Subcommittee on Science, Manufacturing, and Competitiveness. His statement covered the state of humanoid robotics in America, the labor market, and the competitive pressure from China.
In the closing section of his testimony, Shelton cited our work directly:

"Two recent papers by security researchers have identified a critical security vulnerability in a Chinese humanoid robot allowing for remote takeover, as well as a 'phone home' data logging mechanism that sends continuous operational data to a remote server."


What the research found

Our paper documents a critical security vulnerability in the Unitree humanoid robot, one of the most widely distributed Chinese humanoid platforms now available for sale in the United States for under $20,000.

The findings include:

  • A remote takeover vulnerability that allows an attacker to gain unauthorized control of the robot.
  • A data exfiltration mechanism that continuously sends operational data to a remote server without the user's explicit knowledge.

These are not theoretical risks. They are documented, reproducible vulnerabilities with real-world implications for any organization — or home — operating one of these systems.
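As a generic illustration of what a "phone home" mechanism looks like on the wire (a minimal sketch, not the specific method documented in our paper; the destination addresses and intervals below are hypothetical), telemetry beacons tend to stand out because they recur at near-constant intervals. Given a log of outbound connections as `(timestamp, destination)` pairs, a few lines of Python can flag destinations contacted on a regular cadence:

```python
from collections import defaultdict
from statistics import mean, pstdev

def find_beacons(events, min_count=5, max_jitter=2.0):
    """Flag destinations contacted at near-regular intervals.

    events: iterable of (timestamp_seconds, destination) tuples.
    Returns a list of (destination, mean_interval) for hosts whose
    inter-connection gaps are nearly constant (population std dev
    below max_jitter seconds), a common signature of periodic
    'phone home' telemetry rather than user-driven traffic.
    """
    by_dst = defaultdict(list)
    for ts, dst in events:
        by_dst[dst].append(ts)

    beacons = []
    for dst, times in by_dst.items():
        if len(times) < min_count:
            continue  # too few samples to judge regularity
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        if pstdev(gaps) <= max_jitter:
            beacons.append((dst, mean(gaps)))
    return beacons

# Hypothetical log: one host beaconing every 300 s amid normal traffic.
log = [(i * 300, "203.0.113.9") for i in range(6)]
log += [(17, "198.51.100.4"), (90, "192.0.2.55")]
print(find_beacons(log))  # → [('203.0.113.9', 300.0)]
```

In practice the input would come from a packet capture or firewall log rather than a hardcoded list; the point is that continuous exfiltration is observable and detectable with modest tooling, which is exactly why its presence in a shipping product is a design decision, not an accident.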


Why this matters beyond the lab

Shelton's citation came in the context of a broader argument: that China's rapid progress in humanoid robotics is creating a compelling value proposition for early adopters, but one that comes with security risks the market is not yet fully pricing in.

His point was direct: "This is a siren's call that we would be well served to take seriously."

We agree. And we would go further.

As humanoid robots move from controlled industrial environments into general-purpose deployments (warehouses, logistics, care settings, and eventually homes), the attack surface expands dramatically. The risks we documented in a research context will become operational risks at scale.

Security cannot be an afterthought in robotics. It needs to be designed in from the start, validated continuously, and treated with the same rigor applied to safety in physical systems.


Robot cybersecurity as a policy issue

The fact that this research was cited in a U.S. Senate hearing signals where the conversation is heading.

Policymakers are beginning to connect the dots between the rapid commercialization of humanoid robots, the concentration of supply chains, and the cybersecurity posture of these systems. That is the right conversation to be having.

At Alias Robotics, we have been part of that conversation for years. We were born with the mission to secure robots. Today, we've evolved into world leaders in AI for cybersecurity, bringing the same automation principles from robotics to security operations and extending them to protect all automated complex systems.

The Unitree research is one example of what happens when security is not built in from the start. As humanoid robots scale into the real world, the risks stop being theoretical and become operational. The question is not whether vulnerabilities will exist, but whether organizations have the capability to find them, validate them, and respond continuously.


From research to continuous security operation

The Unitree research reaching the U.S. Senate is a signal. Humanoid robots are no longer a future technology. They are here. They are affordable. And as we have documented, they come with serious security gaps that the market is not yet equipped to handle.

As these systems scale globally, the organizations that will stay ahead are not those that react to vulnerabilities after the fact. They are the ones that build continuous security capability into their operations from the start.

That is what Cybersecurity AI (CAI) enables. CAI gives security teams the capability to continuously discover real exposure, validate security assumptions, and respond across IT, OT, and robotic environments, spanning defensive and offensive security, all embedded in our agents. It is not a one-time assessment but an ongoing operational capability that grows with every iteration.

The future of security in an automated world will not be defined by who finds the most vulnerabilities. It will be defined by who can prove, continuously, that their systems remain secure.


Additional Resources