Contributions/Linux_Memory_Management_Essentials.md
6 additions & 1 deletion
@@ -123,7 +123,12 @@ The following considerations are of a more deductive nature.
1. Because of the way pages, and fractions and multiples of them, are allocated, freed, cached, and reclaimed, there is a complex interaction between system components at various layers.
2. Even using cgroups, it is not possible to eliminate indirect low-level interaction between components with different safety integrity levels (e.g. the recycling of pages related to critical processes in one group might be affected by less critical processes in another group).
3. Because of the nature of memory management, we cannot rule out the possibility that memory management mechanisms will interfere with safe processes, either due to a bug or due to interference with the metadata they rely on. For example, memory management might hand a requesting entity a memory page that is already in use, either by a device driver or by a userspace process playing a role in a safety use case.
-4. Still due to the complex interaction between processes, kernel drivers and other kernel code, it is practically impossible to qualify the kernel as safe through positive testing alone, because it is impossible to validate all the possible combinations, and it is equally impossible to assess the overall test coverage and the risk associated with not reaching 100%. The only reliable way to test is to use negative testing (simulating a certain type of interference) and confirming that the system response is consistent with expectations (e.g. detect the interference, in the case of ASIL B requirements). And even then, the only credible claim that can be made is that, given the simulated type of interference, on the type of target employed, the reaction is aligned with the requirements. Other types of targets will require further ad-hoc negative testing.
+4. These complex interactions between processes, kernel drivers and other kernel code mean that it is practically impossible to qualify the kernel through positive testing alone.
+   1. Specifying requirements and implementing a credible set of tests to cover all of the kernel's functions for the general case is certainly infeasible (and arguably impossible), because the range of potential applications and integrations for Linux is too broad.
+   2. We can constrain this scope by specifying a kernel version and configuration for a given set of target systems and software integrations, and specifying the set of functions it is intended to provide. However, this would still not be sufficient to assert that the kernel is free of certain classes of bugs (e.g. bugs caused by interference).
+   3. Negative tests derived from credible analysis of the kernel could be used to address this, by verifying its behaviour (and/or mitigations provided by other components of a target system) for a documented set of failure modes.
+   4. This might be achieved, for example, by simulating an identified type of interference for a range of positive test cases, and confirming that the overall integrated system's response is consistent with a specified set of expectations (e.g. the interference is detected and a kernel- or system-level mitigation is triggered).
+   5. This, in combination with requirements-based functional testing, could be a viable approach for qualifying a specific integration and configuration of Linux, for a given set of target systems and use cases.
5. Linux Kernel mechanisms like SELinux and cgroups/containers do not offer any protection against interference originating from the kernel itself.
6. The Linux Kernel must be assumed not to be safe (QM at best), unless specific subsystems are qualified through both positive and negative testing.
7. Claims about kernel integrity (or detection of its loss) do not guarantee system availability; safety arguments for a Linux-based system that rely upon a level of availability must separately show that this is supported.