content/ai_exchange/content/docs/ai_security_overview.md (+1, −1)
```diff
@@ -63,7 +63,7 @@ In AI we distinguish 6 types of impacts, for three types of attacker goals (disc
 3. disclose: hurt confidentiality of input data
 4. deceive: hurt integrity of model behaviour (the model is manipulated to behave in an unwanted way to deceive)
 5. disrupt: hurt availability of the model (the model either doesn't work or behaves in an unwanted way - not to deceive but to disrupt)
-6. confidentiality, integrity, and availability of non AI-specific assets
+6.disrupt: confidentiality, integrity, and availability of non AI-specific assets

 The threats that create these impacts use different attack surfaces. For example: the confidentiality of train data can be compromised by hacking into the database during development-time, but it can also leak by a _membership inference attack_ that can find out whether a certain individual was in the train data, simply by feeding that person's data into the model and looking at the details of the model output.
```
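The membership inference attack mentioned in the context paragraph above can be illustrated with a minimal sketch. Everything here is hypothetical and not from the OWASP text: a deliberately overfit 1-nearest-neighbour "model" stands in for a trained model, and the attack simply thresholds the model's confidence, exploiting the tendency of overfit models to be far more confident on records they memorised during training.

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 3))   # records the model was trained on ("members")
X_out = rng.normal(size=(50, 3))     # records never seen in training ("non-members")

def confidence(x):
    """Overfit 1-NN stand-in for a model: confidence decays with the
    distance to the nearest training record, so memorised records
    score exactly 1.0."""
    d = np.min(np.linalg.norm(X_train - x, axis=1))
    return np.exp(-d)

def infer_membership(x, threshold=0.99):
    # The attacker's rule: very high confidence => guess that x
    # was part of the training data.
    return confidence(x) >= threshold

member_hits = sum(infer_membership(x) for x in X_train)   # all 50 flagged
nonmember_hits = sum(infer_membership(x) for x in X_out)  # close to none
print(member_hits, nonmember_hits)
```

The gap between the two hit rates is exactly the signal the attack relies on; real attacks replace the toy threshold with a classifier trained on shadow models, but the principle is the same.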