Presentations
Monday, March 11th
Welcome and introduction – Arrangements and whitepaper
Modeling and assessing risks of autonomous systems: Challenges and perspectives on solutions
The Norwegian Maritime Authority's approval process of autonomous ships – Our challenges and guideline
When it comes to new technology and alternative design, the Norwegian Maritime Authority (NMA), as the flag authority, is the one to evaluate projects with a view to certification. As a basis for evaluating alternative designs, the NMA uses IMO circ. 1455, which gives the general process. For projects involving autonomy, where people are removed from functions on board ships, the NMA is working on guidance for the evaluation process, based on IMO circ. 1455. Any alternative design must be shown to be as safe as, or safer than, a conventional design, and the burden of proof lies with the project. The presentation describes this process and what needs to be evaluated and demonstrated along the way. It concludes by listing some of the challenges facing projects involving new technology and autonomy.
Qualification of autonomy for risk and regulation – A behavioural approach
Trusted operations of Autonomous Systems (AS) with increased levels of autonomy, moving from remotely operated systems to higher levels of system decision making and autonomy to act, are far from being enabled today. Enabling these systems and their operations will require developing trust across the combined technological, regulatory, and social environments. In this presentation, I discuss some technological challenges associated with autonomous system capability and a potential framework to assess system behaviours in relation to risk assessment and certification. In our approach, we take a behavioural viewpoint of mathematical system theory, link it to an epistemic view of uncertainty quantification, and finally connect it to the decision-making processes of different stakeholders. I review parts of our previous and current work at Boeing in these areas and discuss the challenges and extensions that will be needed to assess systems with high levels of autonomy.
Industry perspective on the development of autonomous buses – Robustness development
Unmanned aerial systems and risk
Cybersecurity for autonomous systems – Vulnerabilities and threats
To assess safety, reliability and security for autonomous systems, we primarily consider three factors: software, hardware and the human-in-the-loop. To address cybersecurity properly for such systems, we should also take into account possible cyberattack agents, which act as ghosts-in-the-loop. In the presentation this is exemplified by the Trisis attack on a Saudi Arabian petrochemical plant in 2017. At the plant there were several repeated malfunctions of a specific type of safety controller. The safety controllers are part of the most critical automation systems at the plant, and they are in place solely to detect unsafe conditions in the production process; when such a condition is detected, the controllers automatically run a production shutdown or emergency shutdown. After weeks of troubleshooting, including malfunctions of several replacement units, cybersecurity analysts detected an advanced, previously unknown malware in the safety networks. This malware exploited vulnerabilities in the controllers and replaced their firmware once they were installed in the safety network. All autonomous systems have central controllers that act as the hardware- and software-based brains of the system. These take input from sensors and provide output to effect generators such as actuators and motors. The controllers also provide an interface into the system for the Human-Machine Interface (HMI). All parts of these systems increasingly consist of programmable components, which introduces new, unknown vulnerabilities and new, unknown ways of attacking the systems. This problem is accentuated in Operational Technology (OT) and autonomous systems because of their higher focus on availability compared to systems in Information Technology. Security upgrades of the former follow a rigid process that cannot be performed often, so on most days large parts of these systems carry more security vulnerabilities than they would in a fully upgraded state. In the presentation several known attack vectors are presented, some of the major cyberattacks on industrial systems are listed, and trends and possible future threats are discussed. The presentation concludes with an open challenge to industry: establish barriers whose safety rests on the laws of mechanics, physics and electronics. This is already done with opto-isolators, rupture discs and spring-return valves, and it can be developed further for industrial and autonomous systems, for example by using data diodes to segregate safety-critical components from the rest of the system.
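The data-diode idea in the closing challenge can be illustrated with a small software analogy. The class below is purely illustrative and is not from the talk: real data diodes enforce one-way traffic in hardware (typically optically), whereas this sketch only mimics the interface such segregation presents to the two networks.

```python
# Illustrative sketch of a data diode: traffic may flow from the
# safety-critical network outward (e.g. status telemetry), but nothing
# can be written back toward the safety side. A software analogy only;
# real data diodes enforce this physically.

class DataDiode:
    def __init__(self):
        self._outbound = []   # messages leaving the safety network

    def send_out(self, message):
        """Safety side publishes status to the business network."""
        self._outbound.append(message)

    def read_outbound(self):
        """Business side reads what the safety side has published."""
        return list(self._outbound)

    def send_in(self, message):
        """Any write toward the safety side is rejected, blocking
        e.g. firmware-replacement traffic of the kind Trisis used."""
        raise PermissionError("inbound traffic is physically impossible")
```

In this analogy, monitoring still works across the barrier, but the attack path used against the safety controllers (pushing new firmware into the safety network) has no channel to travel on.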
Intelligent machinery systems for autonomous ships
Trust in autonomy: Cyber-human learning loops
This presentation addresses the ethical and societal implications of autonomous systems. It argues that these are far more complex than the mere aspiration to embed ethical reasoning into algorithms. Morality is a characteristic of human beings and cannot be transported into machines, and it is important to distinguish between the explainability of autonomous systems and their trustworthiness. Trust is underpinned by shared ethical and societal values, and the conditions for trusting technologies are similar to those for trusting other people or institutions. In the case of autonomy this means both an assessment of the goals and purpose of the technology and an assessment of its technical robustness. The core ethical and societal issues associated with autonomous systems emerge from the complex interactions between software, hardware and human beings, alongside the context in which the system operates and the consequences it may have, directly or indirectly, on people and the environment. Even when systems are autonomous, human beings are part of their design, construction, deployment, operation, maintenance, evaluation and verification. A potentially normative approach to aim for is the generation of cyber (physical)-human (social) learning loops, which requires true interdisciplinarity, in particular with the social sciences and the humanities.
Tuesday, March 12th
Some recent advances in human-automation interaction design methods and future research directions for safety
This presentation reviews recent advances in human-automation interaction modeling approaches, including new ideas for accounting for how tasks are interactively managed and traded between humans and machines. Another aspect of the work focuses on how these new design methods may be synergized and applied throughout the systems design and engineering cycle to better support human-machine system design. An additional section focuses on the development of advanced vehicle automation based on current practices of automation design and the implications for system safety. This research reveals a paradox of automation for safety: operator reliance on low-level automation for low-severity hazard exposures may lead to decay of the manual skills necessary for negotiating complex and high-severity hazards.
Game theoretic simulation for verification and validation of autonomous vehicles
Autonomous vehicles have been the subject of increased interest in recent years in defense, industry and academia. Serious efforts are being pursued to address the legal, technical and logistical problems and make autonomous vehicles a viable option for a broad range of applications. One significant challenge is the time and effort required for the verification and validation of the decision and control algorithms employed in these vehicles to ensure a safe and reliable experience. For driving, for example, hundreds of thousands of miles of tests are required to achieve a well-calibrated control system that is capable of operating an autonomous vehicle in an uncertain traffic environment where interactions among multiple drivers and vehicles occur simultaneously. Traffic simulators in which these interactions can be modelled and represented with reasonable fidelity can decrease the time and effort needed to develop autonomous driving control algorithms by providing a venue where acceptable initial control calibrations can be achieved quickly and safely before actual road tests. In this talk, we present a game theoretic traffic model that can be used to model human-driven and autonomous vehicles and their interactions, to test and compare various autonomous vehicle decision and control systems, and to calibrate the parameters of an existing control system. Our simulator is highly scalable and can handle several dozen interacting vehicles in near real time. We demonstrate applications to highway driving and intersections, and discuss extensions to other transportation domains.
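One common way to build game-theoretic models of driver interaction is level-k reasoning, where a level-k driver best-responds to an opponent assumed to reason at level k-1. The sketch below is a toy illustration of that idea, not the authors' simulator: the two-action model ("go"/"yield"), the payoff values, and the `level_k_action` helper are all assumptions chosen for clarity.

```python
# Toy level-k interaction between two vehicles at an intersection.
# All payoffs and the two-action model are illustrative assumptions.

ACTIONS = ("go", "yield")

def payoff(my_action, other_action):
    """Toy payoff: collisions are heavily penalised, yielding costs time."""
    if my_action == "go" and other_action == "go":
        return -100.0          # collision
    if my_action == "go":
        return 1.0             # pass through unimpeded
    if other_action == "go":
        return -1.0            # wait while the other vehicle passes
    return -2.0                # deadlock: both yield

def level_k_action(k):
    """A level-0 driver naively goes; a level-k driver best-responds
    to an opponent assumed to reason at level k-1."""
    if k == 0:
        return "go"
    opponent = level_k_action(k - 1)
    return max(ACTIONS, key=lambda a: payoff(a, opponent))
```

Under these toy payoffs, a level-1 driver yields to an aggressive level-0 driver, while a level-2 driver goes, expecting a level-1 opponent to yield; calibrating a controller then amounts to testing it against populations of such modelled drivers before road tests.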
Organizers
Department of Marine Technology
