While AI offers powerful performance boosts, it also increases the attack surface.

### Threat model
We distinguish three types of threats: during development time (when data is obtained and prepared, and the model is trained or obtained), through using the model (providing input and reading the output), and by attacking the system during runtime (in production).

The diagram shows the threats in these three groups as arrows. Each threat has a specific impact, indicated by the letters, which refer to the Impact legend. The control overview section contains this diagram with groups of controls added.

### AI Security Matrix

The AI security matrix below shows all threats and risks, ordered by type and impact.
## Controls overview

### Threat model with controls

The diagram below puts the controls of the AI Exchange into groups and places these groups at the right point in the lifecycle, together with the corresponding threats:

- **Data science development controls**: many things data scientists can do, such as adding noise to training data, federated learning, and data quality control (see the first sketch after this list)
- **Conventional security of the development environment**, plus new attention to the **supply chain of data and models** obtained from third parties
- **Governance** of AI projects and risks, information security, and the software lifecycle
- **Minimizing data** in development (e.g. anonymizing training data) and at runtime (e.g. not storing user details with prompts)
- Applying controls on the input of the model (**monitoring, rate limiting and access control**): conventional controls, but with AI-specific attention points, for example: which usage patterns are suspect? (see the rate-limiting sketch after this list)
- **Data science input controls** require data scientists to develop mechanisms to detect and filter malicious use
- **Filter sensitive output**: this can help reduce data leaking through model output (see the redaction sketch after this list)
- **Behaviour-limiting controls** are very important in AI, as the model can behave in unwanted ways when it hasn't been trained perfectly, or when it has been manipulated. Examples: oversight, guard rails, model privilege control, and continuous validation.
- **Conventional runtime security**: last but not least, an AI system is an IT system with an application and an infrastructure, so it requires 'regular' security controls, taking into account the AI-specific assets and threats, e.g. sensitive model I/O, sensitive model parameters, plugin security, and output that may contain injection attacks.
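
To make the first control group concrete, here is a minimal sketch of one data science development control mentioned above: adding noise to training data. The function name, the `noise_scale` parameter, and the toy dataset are illustrative assumptions rather than part of the AI Exchange; a real implementation would tune the noise level against model accuracy, or use a formal differential privacy mechanism.

```python
import numpy as np

def add_gaussian_noise(features, noise_scale=0.1, seed=None):
    """Return a noisy copy of numeric training features.

    noise_scale is a fraction of each feature's standard deviation,
    so features on different scales are perturbed proportionally.
    """
    rng = np.random.default_rng(seed)
    feature_std = features.std(axis=0, keepdims=True)
    noise = rng.normal(0.0, noise_scale * feature_std, size=features.shape)
    return features + noise

# Example: perturb a small synthetic dataset before training.
X_train = np.array([[1.0, 200.0], [2.0, 180.0], [3.0, 220.0]])
X_noisy = add_gaussian_noise(X_train, noise_scale=0.1, seed=42)
```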
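
As an illustration of the conventional input controls (monitoring, rate limiting and access control), below is a minimal sliding-window rate limiter that could sit in front of a model endpoint. The `RateLimiter` class, the limits, and the `model_fn` callable are hypothetical names; a production system would additionally log rejected calls as potentially suspect usage patterns.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most max_calls per user within a sliding time window."""

    def __init__(self, max_calls, window_seconds):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = defaultdict(deque)  # user_id -> recent call timestamps

    def allow(self, user_id):
        now = time.monotonic()
        recent = self.calls[user_id]
        # Drop timestamps that have fallen out of the sliding window.
        while recent and now - recent[0] > self.window:
            recent.popleft()
        if len(recent) >= self.max_calls:
            return False  # candidate for monitoring/alerting
        recent.append(now)
        return True

limiter = RateLimiter(max_calls=10, window_seconds=60.0)

def guarded_call(user_id, model_fn, model_input):
    """Apply the rate limit before forwarding input to the model."""
    if not limiter.allow(user_id):
        raise PermissionError(f"rate limit exceeded for user {user_id}")
    return model_fn(model_input)
```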
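
Finally, a very basic sketch of filtering sensitive output: redacting strings that look like email addresses or payment card numbers before model output reaches the user. The regular expressions are illustrative assumptions; real deployments would use broader detection such as PII classifiers and secret scanners.

```python
import re

SENSITIVE_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED NUMBER]"),
]

def filter_sensitive_output(text):
    """Replace likely-sensitive substrings in model output."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(filter_sensitive_output("Contact alice@example.com, card 4111 1111 1111 1111"))
# -> Contact [REDACTED EMAIL], card [REDACTED NUMBER]
```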
All these threats and controls are discussed in the further content of the AI Exchange.

The diagram below restricts the threats and controls to Generative AI only, for situations in which **training or fine-tuning** is done by the organization (note: this is not very common, given the high cost and the expertise required).

The diagram below restricts the threats and controls to Generative AI only, for situations in which the model is used **as-is** by the organization. Several threats still exist, but they are the responsibility of the model provider. Nevertheless, the organization using the model should take the risks into account and gain assurance about them from the provider.
### Navigator diagram
The navigator diagram below shows all threats, controls and how they relate, including risks and the types of controls.