The core parts of the Kernel

Philipp Ahmann edited this page May 29, 2024 · 6 revisions

Motivation - Why is ELISA defining this core set?

  • The goal is a higher overall quality for the kernel.

How do we define the way to find the core kernel?

A guide for others on how to define a core of their own

How does ELISA define "core"?

  • It should be able to run a process. Not a complete feature set, but the smallest, minimal thing; this is a starting point for analysis.
  • Give priority to subsystems and drivers most common to most use cases.

Agreed Features (one by one)

  • In the beginning it is not about a preferred feature such as a specific filesystem, but about having a filesystem at all.

Appendix

TSC Material

  • What is the definition of core?
    • minimal config
    • what is required in any configuration
      • What everybody needs differs from the minimum for a given use case
    • What is common to everyone may cover about 90%.
  • Pick use cases and what constitutes core for them.
  • Select and remove configs
  • What are the base definitions used by integrators like SUSE, Canonical, Red Hat, Wind River, Linutronix?
    • Can we get some configs from Linutronix and/or Wind River showing what a core looks like?
    • Create an initial framing first, then reach out to them.
  • Two phases
      1. common subsystems used by almost every use case (things that are always there)
  • A config is not analogous to "the thing" itself.
  • A filesystem layer like VFS may always be present, but at higher criticality levels no additional filesystem may be needed, while at other levels it is.
    • We cannot say which filesystem, but one filesystem should be there, e.g. an initramfs. So it may be part of the core.
  • The core definition helps to set priorities: what is most important to focus on, and what are the relevant configs and properties?
    • RT was one example, which people request. (Even if many use cases may not require it.)
  • This can also be an input for the plumbers session.
  • What do we want to achieve with the core? What demands do we place on it (requirements, design, testing, documentation)?
  • We need to get acceptance by the kernel community that the core part is relevant.
  • Why are we defining this core set? The goal is as important as the analysis.
    • The goal is a higher overall quality for the kernel.
    • Is defining requirements and increasing coverage the right approach?
    • Or is it better to check on "what can go wrong"? What is the right approach?
    • Traceability for changes: will a change impact the original functionality, and will everything still work as expected?
    • Give priority to subsystems and drivers most common to most use cases.
  • Agreeing on features (one by one) is also a way to define the core. (Even if it is not your preferred feature, like a specific filesystem.)
  • ELISA can show "this is how you could do it".
  • Focus for now is really on kernel functionality.
    • Discussion about packages will take place at a later stage, e.g. by consulting the CIP project on their definition of core packages and extended packages
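Several of the bullets above circle around what a "minimal config" might contain (it should run a process, one filesystem such as an initramfs should be there, networking may be out of scope). As a purely illustrative sketch, not an agreed ELISA core set, such a config fragment could look like the following; the option names are real kbuild symbols, but the particular selection is an assumption:

```kconfig
# Hypothetical "core" kernel config fragment, for illustration only.
CONFIG_MULTIUSER=y        # process creation under multiple users
CONFIG_PRINTK=y           # basic kernel logging
CONFIG_FUTEX=y            # synchronization primitives for userspace
CONFIG_EPOLL=y            # event notification, an IPC building block
CONFIG_BLK_DEV_INITRD=y   # initramfs support: "one filesystem should be there"
CONFIG_TMPFS=y            # a minimal RAM-backed filesystem
# CONFIG_NET is not set   # networking treated as not directly safety relevant
```

A fragment like this would only be a starting point for analysis; each symbol pulls in further dependencies that a real core definition would have to enumerate.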

Workshop Material

Manchester Workshop meeting notes (access restricted to workshop participants)

Key takeaways:

  • What are fundamental things of Linux, regardless of the use case?
    • Assumption: use cases bring different points of view, but do not deal with different kinds of problems
    • Valid to almost any use case:
      • Managing hardware resources
      • Synchronization (timing)
      • Exchange of information / Communication between processes (IPC)
      • System initialization
      • Dynamic memory allocation (there are accepted methods for safety qualification)
      • Process creation
      • Context switching
      • Interrupt management, exception management
      • DAC, MAC, capabilities, namespaces, cgroups (resource access management)
    • What will we have in the system that is not directly safety relevant?
      • Networking
      • Filesystems
      • Graphics
      • Power
      • Thermal

Munich Workshop meeting notes (access restricted to workshop participants)

Key takeaways:

  • What is core? Is it minimal, minimal-minimal or is it common?
  • Is it the core or a core?
  • What about “simple” as the MVP rather than minimal or common part?
  • common versus minimal (in the sense of memory footprint)
  • existing configs from distro providers may be far too large for safety-critical systems
  • Functions may not be used at all, but are still configured in the kernel.
  • A core part should be maintainable for 20+ years (as this is required by critical infrastructure products)
  • When you configure a kernel today you are asked thousands of questions, with no context for why they are even asked. If you know what your system looks like, you can make choices such as whether you prefer "plug & play" or something specific. Currently it comes down to which feature you may want for a SCSI device.
  • Remove the drivers completely. Can Linux be treated as a "microkernel"? It would still need to be able to communicate with the outside world.
  • What could a Linux microkernel look like, from which integrators could derive their own? Have a heat map of what is used.
  • For minimal: it should be able to run a process. Not a complete feature set, but the smallest, minimal thing; this is a starting point for analysis.
  • If we provide the methodology, you can select your own config. But we need to show feasibility, and this needs agreement on a common set of realistic configs.
  • Strong encapsulation and limited interfaces are the idea of isolation, but this will be very complicated for the kernel. This also plays into FFI (freedom from interference).
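The complaint above about thousands of configuration questions suggests one possible workflow: instead of answering every prompt, start from the smallest config the kernel build system can generate and layer a small fragment on top. A sketch, assuming it is run inside a kernel source tree and using a hypothetical fragment file name `core.config`; `tinyconfig`, `olddefconfig`, and `merge_config.sh` are real kbuild facilities:

```shell
# Run from the top of a kernel source tree.
make tinyconfig                       # smallest config kbuild can generate
./scripts/kconfig/merge_config.sh \
    .config core.config               # layer the core fragment on top
make olddefconfig                     # resolve remaining options to defaults
grep -c '^CONFIG_' .config            # rough measure of the resulting footprint
```

Comparing that count against a distro provider's config would make the "too large for safety-critical systems" observation concrete.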

Further resources and links
