Addressing uncertainty in the safety assurance of machine-learning
[Abstract] There is increasing interest in the application of machine learning (ML) technologies to safety-critical cyber-physical systems, with the promise of increased levels of autonomy due to their potential for solving complex perception and planning tasks. However, demonstrating the safety of ML is seen as one of the most challenging hurdles to its widespread deployment for such applications. In this paper we explore the factors that make the safety assurance of ML such a challenging task. In particular, we address the impact of uncertainty on the confidence in ML safety assurance arguments. We show how this uncertainty is related to the complexity of the ML models as well as the inherent complexity of the tasks they are designed to implement. Based on definitions of uncertainty and an exemplary assurance argument structure, we examine typical weaknesses in the argument and how these can be addressed. The analysis combines an understanding of the causes of insufficiencies in ML models with a systematic analysis of the types of asserted context, asserted evidence, and asserted inference within the assurance argument. This leads to a systematic identification of requirements on the assurance argument structure as well as on the supporting evidence. We conclude that qualitative arguments combined with quantitative evidence are required to build a robust argument for the safety-related properties of ML functions, and that this argument must be continuously refined to reduce residual and emerging uncertainties after the function has been deployed into the target environment.
[Publication Date] 2023-04-06
[Keywords] machine learning; safety; assurance arguments; cyber-physical systems; uncertainty; complexity