At the time of writing, RSF has reportedly been adopted by several groups and applied in a number of commercial projects. Our work has been cited in results that other companies delivered to clients, and the template we made available back in 2018 has received more than 50 stars. The repository containing this template remains available here. This article presents a summary of the original report; for more information, refer to the original paper.

Robots are going mainstream: from assistance and entertainment robots used in homes, to those working on assembly lines in industry, all the way to those deployed in medical and professional facilities. For many, robotics is poised to be the next technological revolution. Yet, similar to what happened at the dawn of the computer and mobile phone industries, there is evidence suggesting that security in robotics is being underestimated. Even though the first human death caused by a robot occurred back in 1979 [1], the consequences of using these cyber-physical systems in industrial manufacturing [2], [3], professional [4], [5] or commercial [6] environments have yet to trigger further research in the robotics security field. Robot security is being underestimated. To address this issue, we present the Robot Security Framework (RSF), a methodology for performing systematic security assessments on robots. We propose, adapt and develop specific terminology and provide guidelines that enable a holistic security assessment across four main layers (Physical, Network, Firmware and Application). We argue that modern robotics should regard internal and external communication security as equally relevant. Finally, we advocate against “security by obscurity”. We conclude that the field of security in robotics deserves further research efforts.


Over the last 10 years, the domains of security and cybersecurity have been substantially democratized, attracting individuals to many sub-areas of security assessment. According to recent technical reports summarizing hacker activity per sector [7], [8], most security researchers currently work on assessing vulnerabilities in websites (70.8%), mobile phones (smartphones, 5.6%) and Internet of Things (IoT) devices (2.6%), amongst others. Notwithstanding the relevance of robot vulnerabilities for most sectors of application, no formal study has yet published relevant data about robotics, nor does it appear to be an active area of research when compared to other fields. We believe the main reasons for this gap are twofold. First, security for robots is a complex subject from a technological perspective: it requires an interdisciplinary mix of profiles, including security researchers, roboticists, software engineers and hardware engineers. Second, and to the best of our knowledge, there are few guidelines, tools or formal documents for assessing robot security. Overall, robot security is an emerging challenge that needs to be addressed immediately.

In an attempt to address the second problem, this paper presents the Robot Security Framework (RSF), a systematic methodology for performing security assessments in robotics. We argue that security, privacy and safety in robotic systems should clearly be recognized as major issues in the field. Our framework proposes a standardized methodology to identify, classify and report vulnerabilities in robots within a formal operational protocol. Throughout the description of the RSF, we present exemplary scenarios in which robots are subject to the security issues discussed here.

Previous work

Robot security is a rapidly growing concern. However, to date, and as briefly noted in the Introduction, there are few laudable efforts that elaborate methodologies for analyzing robot security or cybersecurity. The most relevant of these pioneering contributions is Shyvakov’s work [9]. He and his collaborators aimed to develop a preliminary security framework for robots, described from a penetration tester’s perspective. The cited work is, to the best of our knowledge, the best piece of literature addressing robot security concerns, and its content and structure largely motivated the present work. We found it extremely relevant to review, discuss and complete it, and to motivate a full-picture assessment from a robotics standpoint.

The author’s classification [9:1] proposed four levels of security: a) physical security, b) network security, c) operating system security and d) application security. However, we find that the author lacks, to some extent, background knowledge of the robotics field, particularly regarding the internal organization of these systems. For instance, he states that robots have “internal networks for wiring together internal components (nodes)”, yet this view misses the fact that each such network is a security-critical element which can potentially influence the overall robot security. Shyvakov even includes a brief category devoted to internal networks within his proposed framework. However, under the assumption that a “normal user is usually not supposed to connect to the internal network”, he advises that in cases where “it is not possible to implement full network monitoring due to hardware limitations”, “at least there should be a capability to detect new unauthorized devices on the network”. This suggests that dedicated robot network security is needed, but he provides no further details on the rationale. Moreover, the author states that “thresholds on IDS of the internal network should be lower than on the external network” but provides no additional foundation for this claim. We argue that such an approach would lead to an incomplete security framework that relies on obscurity. We believe instead that modern robotics should converge towards enforcing identical security levels on both internal and external communication interfaces. Therefore, we advocate for a holistic approach to robot security at the communications level, which we elaborate on below.
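To make the “detect new unauthorized devices” idea discussed above concrete, the following minimal Python sketch diffs the devices visible in a host’s ARP table against an allowlist of known internal components. This is our own illustration, not part of Shyvakov’s framework or RSF: the allowlist contents are hypothetical, and `/proc/net/arp` is a Linux-specific source; a production robot would rely on dedicated IDS tooling.

```python
import os

# Minimal sketch: flag unknown MAC addresses on a robot's internal network
# by diffing the kernel ARP table against an allowlist of known components.
# The MACs below are hypothetical, for illustration only.

KNOWN_COMPONENTS = {
    "02:42:ac:11:00:02",  # e.g. motor controller (hypothetical)
    "02:42:ac:11:00:03",  # e.g. lidar unit (hypothetical)
}

def parse_arp_table(text):
    """Extract MAC addresses from a textual ARP table (header line skipped)."""
    macs = set()
    for line in text.splitlines()[1:]:
        fields = line.split()
        # /proc/net/arp columns: IP, HW type, Flags, HW address, Mask, Device
        if len(fields) >= 4 and fields[3] != "00:00:00:00:00:00":
            macs.add(fields[3].lower())
    return macs

def unauthorized_devices(observed_macs, known=KNOWN_COMPONENTS):
    """Return MACs seen on the network that are not in the allowlist."""
    return observed_macs - known

if __name__ == "__main__" and os.path.exists("/proc/net/arp"):
    with open("/proc/net/arp") as f:
        observed = parse_arp_table(f.read())
    for mac in sorted(unauthorized_devices(observed)):
        print(f"ALERT: unknown device on internal network: {mac}")
```

Even such a simple allowlist check illustrates why identical scrutiny on internal and external networks is feasible: the monitoring logic is the same; only the interface being watched changes.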

In an attempt to provide real use-case scenarios, the author [9:2] recommends a preliminary implementation of the framework and exemplifies it on real robots, yet this particular part of his work remains hidden or sanitized. Even if the reasons for keeping it confidential may include the interests of robot manufacturers or stakeholders, this does little to favour the actual enforcement of any security framework. Therefore, we find it necessary to provide illustrative, public, real-world cases to which any framework may be applied.

Other contributions to robot security have primarily been partial, e.g. hardening particular aspects of robots such as the middleware [10], or elaborating on the application aspect [11] or the lower communication aspects [12].

Recently, some pieces of research [13] have brought focus onto the necessity of a framework for the evaluation of IoT device security. Such existing frameworks were examined by Shyvakov [9:3] and duly criticized as unsuitable due to their incompleteness. We share the view that IoT frameworks are neither applicable nor valid for guiding security assessment in the robotics landscape. It is a common misconception that robots are a particular subset of IoT devices. Because robots are often orders of magnitude more complex than common IoT devices, robots are to be considered, if anything, a sophistication of a “network of computers”: a distributed logic operating over an array of sensors, actuators, power mechanisms, user interfaces and other modules with particular connectivity and modularity requirements. Other recent research [14] claims to perform a structured security assessment of a particular IoT robot. Yet all of the aforementioned work remains, in our opinion, very partial and does not establish a common reference. Therefore, we find it necessary to systematize assessment by further elaborating a common and universal reference procedure for robotic systems.
Our contributions

Inspired by the current state of the art, inter alia [11:1], [12:1], [10:1], [9:4], we propose the following Robot Security Framework (RSF). We extend the initial ideas presented in prior art and add our contributions from a roboticist’s perspective. Our main contributions on top of previous work are:

  • Reformulation of the categorization terms. In particular, the term component becomes aspect. Component is a rather generic term in robotics: it typically refers to a discrete and identifiable unit that may be combined with other parts to form a larger entity [15]. Components can be either software or hardware, and even a component that is mainly software or hardware can be referred to as a software or hardware component, respectively. To avoid confusion, the term aspect, rather than component, will be used to categorize each layer within RSF.
  • Overall restructuring of the content. The original structure of the work presented by Shyvakov [9:5] hinders its comprehension, especially for those more familiar with robots and their components. Therefore, we propose a layer-aspect-criteria structure where each criterion is analyzed in terms of its objective, its rationale or relevance, and its systematics of assessment or method.
  • Formalized firmware layer. We adopt a commonly accepted definition of firmware suitable for the context of robotics: software that is embedded in robots. We apply this definition to the previous ‘Firmware and Operating System layer’ and generalize it simply as ‘Firmware layer’. Besides the operating system, we include robot middleware as a relevant topic of assessment and group them both into Firmware, according to the adopted definition.
  • Adoption of the generic “component” and “module” terms. As an alternative to the proposed “internal component” and “external component” terminology, we suggest the generic terms “component”, as defined above, and “module”, commonly accepted as a component with special characteristics that facilitate system design, integration, interoperability and re-use. This simplifies the message when speaking about components. In light of the above, we elaborate on the following notion: robots are composed of components and modules. Some of them are physically exposed and some are not. Among the modules and components, some are part of the “internal network”, thereby hidden from the outside from a network perspective, whereas others are freely accessible from the outside and thereby part of the external network.
  • Improved internal networking security model. As pointed out above, in our vision modern robotics should converge towards the enforcement of identically strict security levels on both internal and external communication interfaces. Therefore, we propose changes to the assessment of internal network security and justify them by presenting existing case studies.
  • Improved model for physical tampering attacks. We include a series of aspects and criteria to detect physical attacks on robots. We highlight the use of logging mechanisms, already present in most robots, in order to monitor suspicious physical changes therein.
  • Added exemplary scenarios. Throughout the framework content we add exemplary scenarios to illustrate how our methodology helps to assess the security of existing robots.
  • We open-source our work and provide a variety of user-friendly representations to simplify its adoption. This work is freely available under the GPLv3 license.

  1. Jan. 25, 1979: Robot kills human  link
    Kravets, D., 2010. Wired. ↩︎

  2. Robot kills factory worker  link
    Whymant, R., 2014. The Guardian. ↩︎

  3. Robot kills man at Volkswagen plant in Germany  link
    Huggler, J., 2015. Telegraph. ↩︎

  4. Dallas deployment of robot bomb to kill suspect is “without precedent”  link
    Farivar, C., 2016. Ars Technica. ↩︎

  5. Robotic surgery linked to 144 deaths in the US
    BBC, 2015. ↩︎

  6. Mall security bot knocks down toddler, breaks Asimov first law of robotics  link
    Vincent, J., 2016. The Verge. ↩︎

  7. The hacker-powered security report  PDF
    HackerOne, 2017. ↩︎

  8. The 2018 hacker report  PDF
    HackerOne, 2018. ↩︎

  9. Master thesis: Developing a security framework for robots  PDF
    Shyvakov, O., 2017. University of Twente. ↩︎ ↩︎ ↩︎ ↩︎ ↩︎ ↩︎

  10. Security for the Robot Operating System  link
    Dieber, B., Breiling, B., Taurer, S., Kacianka, S., Rass, S. and Schartner, P., 2017. Robot. Auton. Syst., Vol 98(C), pp. 192—203. North-Holland Publishing Co. DOI: 10.1016/j.robot.2017.09.017 ↩︎ ↩︎

  11. Application-level security for ROS-based applications
    Dieber, B., Kacianka, S., Rass, S. and Schartner, P., 2016. 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol (), pp. 4477-4482. DOI: 10.1109/IROS.2016.7759659 ↩︎ ↩︎

  12. Secure communication for the robot operating system
    Breiling, B., Dieber, B. and Schartner, P., 2017. 2017 Annual IEEE International Systems Conference (SysCon), Vol (), pp. 1-6. DOI: 10.1109/SYSCON.2017.7934755 ↩︎ ↩︎

  13. Security assessment framework for IoT service
    Park, K.C. and Shin, D., 2017. Telecommunication Systems, Vol 64(1), pp. 193—209. Springer. ↩︎

  14. Adding Salt to Pepper: A Structured Security Assessment over a Humanoid Robot  PDF
    Giaretta, A., De Donno, M. and Dragoni, N., 2018. ArXiv e-prints. ↩︎

  15. Systems and software engineering—Vocabulary ISO/IEC/IEEE 24765: 2017
    IEEE Standards Association, 2017. ISO/IEC/IEEE, Vol 24765. ↩︎

The Robot Security Framework (RSF)

A template for security analysts is available at

We hereby propose a framework based on four layers relevant to robotic systems, which we subsequently divide into the aspects we consider relevant to cover. Likewise, we provide criteria applicable to security assessment. For each of these criteria we identify what needs to be assessed (objective), why it should be addressed (rationale) and how to systematize the evaluation (method). The previous image depicts our framework.
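As a purely illustrative sketch of this layer-aspect-criteria structure, the hierarchy could be modeled as plain Python data. The example entries below are our own, not official RSF content, and the field names simply mirror the objective/rationale/method triad described above.

```python
# Sketch of RSF's layer -> aspect -> criteria hierarchy. Each criterion
# records what to assess (objective), why (rationale) and how (method).
# The entries below are illustrative, not the official RSF template.

rsf = {
    "physical": {
        "ports": [{
            "name": "Ports security",
            "objective": "Verify that unused physical ports are disabled",
            "rationale": "Exposed ports let attackers plug in rogue devices",
            "method": "Inspect the chassis and enumerate accessible ports",
        }],
    },
    "network": {},      # aspects elided in this sketch
    "firmware": {},
    "application": {},
}

def criteria(framework, layer):
    """Yield (aspect, criterion) pairs for one layer of the framework."""
    for aspect, items in framework.get(layer, {}).items():
        for criterion in items:
            yield aspect, criterion

for aspect, c in criteria(rsf, "physical"):
    print(f"[physical/{aspect}] {c['name']}: {c['objective']}")
```

Modeling the framework as data like this is one way an analyst could iterate over all criteria of a layer and record assessment results programmatically, rather than working from a flat document.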