
The core parts of the Kernel

Philipp Ahmann edited this page Jun 5, 2024 · 6 revisions

Motivation - Why is ELISA defining this core set?

  • It should result in higher quality for the kernel.
    • Improved documentation is one element on the path to higher quality; testing is another.
  • A core definition helps to set priorities: what is most important to focus on, and what the relevant configs and properties are.
  • Ingredients needed to define the safety case.
  • We cannot look at the whole kernel at once.
  • The goal is not merely a booting Linux, but to identify the subsystems needed for many use cases.
  • It is important to have the core to set the priorities where available engineering resources are best spent.
  • Extending tiny with further components demonstrates a methodology for enhancing your own system.

What is not part of the motivation?

  • Safety goals.
  • Focus on safety of the core parts initially.

How do we define the way to find the core kernel?

  • Start from the tiny config and incrementally add parts.
  • Check whether anything can still be removed/disabled from tiny.
    • If you remove more, the system stops working at all.
    • Consider that tiny is meant for desktop use; other devices may require different modules.
  • The first starting point is most likely not enjoyable from a product perspective, but it is the foundation.
  • Collect the core set of Linux subsystems common to different use cases
  • Priorities need to be set based on the common use cases.
  • We do not want to focus on hardware and arch specific code in the beginning.
    • Do not focus on ARCH.
  • Find a root filesystem (rootfs) to have a fully bootable system.
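The "start from tiny and incrementally add parts" step above can be made visible by diffing a product config against the tinyconfig baseline. The following is an illustrative sketch only, not an ELISA tool; the function names and file contents are assumptions (the kernel's own `scripts/diffconfig` does a more complete job of this):

```python
# Illustrative sketch: compare a product kernel config against the
# tinyconfig baseline to see which options were added on top of it.
# Function names are made up for this example.

def enabled_options(config_text):
    """Return the set of CONFIG_... option names that are set to a value."""
    options = set()
    for line in config_text.splitlines():
        line = line.strip()
        # Skip comment lines such as "# CONFIG_NET is not set".
        if line.startswith("CONFIG_") and "=" in line:
            options.add(line.split("=", 1)[0])
    return options

def added_on_top_of_tiny(tiny_text, full_text):
    """Options enabled in the full config but absent from the tiny baseline."""
    return sorted(enabled_options(full_text) - enabled_options(tiny_text))
```

Run against a saved `make tinyconfig` result and a product `.config`, the output is exactly the incremental part that needs to be prioritised.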

Guide for others on how to define a core on your own

How does ELISA define "core"?

  • Engineering approach: It should run a process. Not a complete set, but the smallest, minimal thing; this is a starting point for analysis.
  • Democratic approach: Common subsystems/drivers used by almost every use case. (things that are always there)
  • Minimum: If you remove this feature/component/subsystem, the system will not work at all (e.g. will not boot).
  • It may be necessary to select drivers that are never safety relevant.
  • Reusability
  • Hardware agnostic components

Type of system

  • Tiny is a homogeneous system, so we start with it and look only into the kernel.

Agreed Features (one by one)

  • In the beginning it is not about a favoured feature such as a specific filesystem, but about having a filesystem at all.

What is the safety perspective?

  • Safety depends on the use case; this will result in different features.
  • Apply proper tagging to the different parts: integrity-relevant, safety-relevant, or not safety related.
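The tagging idea could look like the following sketch. The tag vocabulary and the helper are assumptions for illustration, not an agreed ELISA scheme:

```python
# Illustrative sketch: group kernel parts by a safety tag so that
# priorities can be derived per use case. The tag names are made up.
TAGS = {"safety", "integrity", "not-safety-related"}

def group_by_tag(parts):
    """parts: iterable of (subsystem, tag) pairs -> dict of tag -> sorted names."""
    grouped = {tag: [] for tag in TAGS}
    for subsystem, tag in parts:
        if tag not in TAGS:
            raise ValueError(f"unknown tag: {tag!r}")
        grouped[tag].append(subsystem)
    return {tag: sorted(names) for tag, names in grouped.items()}
```

Because the safety relevance of a part depends on the use case, such a mapping would be maintained per use case rather than once for the whole core.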

The role of the hardware

  • Define a reference "hardware" (this could also be a VM such as QEMU).
  • QEMU is also used in industry for developing new features where physical hardware is not yet available.
  • An architecture also needs to be set when selecting QEMU; the focus will be ARMv8 and x86.

The role of the boot process as part of the core.

  • What do we need to boot? What is needed most of the time?

The role of the kernel community

Appendix

Definitions

  • What do we call a driver?
  • What is a subsystem?

TSC Material

  • What is the definition of core?
    • minimal config
    • what is required in any configuration
      • What everybody needs differs from the minimum for a given use case.
    • "Everyone" may in practice mean 90% of use cases.
  • Pick use cases and what constitutes core for them.
  • Select and remove configs
  • What are the base definitions of integrators like SUSE, Canonical, Red Hat, Wind River, Linutronix?
    • Can we get some configs from Linutronix and/or Wind River showing what a core looks like?
    • Create a first framing, then reach out to them.
  • 2 phases
      1. common subsystems used by almost every use case. (things that are always there)
  • A config is not analogous to "the thing".
  • A filesystem layer like VFS may always be there, but at higher criticality levels no further filesystem may be needed, while at other levels one is.
    • We cannot say which filesystem, but one filesystem should be there, e.g. an initramfs; so it may be core.
  • Core definition helps to set priorities: what is most important to focus on, and what the configs and properties are.
    • RT was one example that people request (even if many use cases may not require it).
  • This can also be an input for the plumbers session.
  • What do we want to reach with the core? Which demands do we put on the core (like requirements, design, testing, documentation).
  • We need to get acceptance by the kernel community that the core part is relevant.
  • Why are we defining this core set? The goal is as important as the analysis.
    • It should end in a higher quality for the kernel.
    • Is defining requirements and increasing coverage the right approach?
    • Is it better to check on "what can go wrong"? What is the right approach?
    • Traceability for changes? Will the change impact my original functionality and will everything work as expected?
    • Give priority to subsystems and drivers most common to most use cases.
  • Agreeing on features (one by one) is also a way to set the core (even if a given feature, like a specific filesystem, is not your favourite).
  • ELISA can show "this is how you could do it".
  • Focus for now is really on kernel functionality.
    • Discussion about packages will take place at a later stage, e.g. by consulting the CIP project on their definition of core packages and extended packages.

Workshop Material

Manchester Workshop meeting notes (access restricted to workshop participants)

Key takeaways:

  • What are fundamental things of Linux, regardless of the use case?
    • Assumption: use cases bring a different point of view, but do not deal with different kinds of problems.
    • Valid to almost any use case:
      • Managing hardware resources
      • Synchronization (timing)
      • Exchange of information / Communication between processes (IPC)
      • System initialization
      • Dynamic memory allocation (there are accepted methods for safety qualification)
      • Process creation
      • Context switching
      • Interrupt management, exception management
      • DAC, MAC, capabilities, namespaces, cgroups (resource access management)
    • What will we have in the system but not directly safety relevant?
      • Networking
      • Filesystems
      • Graphics
      • Power
      • Thermal

Munich Workshop meeting notes (access restricted to workshop participants)

Key takeaways:

  • What is core? Is it minimal, minimal-minimal or is it common?
  • Is it the core or a core?
  • What about “simple” as the MVP rather than minimal or common part?
  • common versus minimal (in the sense of memory footprint)
  • existing configs from distro providers may be too large for safety-critical systems
  • Functions may not be used at all, but are still configured in the kernel.
  • A core part should be maintainable for 20+ years (as this is required by critical infrastructure products)
  • When you configure a kernel today, you get thousands of questions without any context as to why they are even asked. If you know what your system looks like, you can make choices such as whether you prefer "plug & play" or something specific. Currently it comes down to which feature you may want for a SCSI device.
  • Remove the drivers completely: can Linux be treated as a "microkernel"? It still needs to be able to communicate with the outside world.
  • What could a Linux microkernel look like, letting the integrator derive from it? Have a heat map of what is used.
  • For minimal: it should run a process. Not a complete set, but the smallest, minimal thing; this is a starting point for analysis.
  • If we provide the methodology, you can select your own config. But we need to show feasibility, and this needs agreement on a common set of realistic configs.
  • Strong encapsulation and limited interfaces are the idea of isolation, but this will be very complicated for the kernel. This also plays into FFI (freedom from interference).

Further resources and links
