RMF: A Risk Measurement Framework for Machine Learning Models
2024
Online
report
Machine learning (ML) models are used in many safety- and security-critical applications today, so it is important to measure the security of systems that use ML as a component. This paper focuses on ML security, with autonomous vehicles as the application domain. For this purpose, a technical framework is described, implemented, and evaluated in a case study. Based on ISO/IEC 27004:2016, risk indicators are used to measure and evaluate the extent of damage and the effort required by an attacker. It is not possible, however, to aggregate these indicators into a single risk value representing the attacker's effort; instead, four separate values must be interpreted individually.
Comment: Accepted at CSA@ARES 2024
Authors: Schröder, Jan; Breier, Jakub