Minutes 07 Aug 2025

Paul Albertella edited this page Aug 7, 2025

Host: Paul Albertella

Participants: Florian Wuehr, Victor Lu, Igor Stoppa, Daniel Krippner

Agenda

  • Breaking things
    • Can an understanding of how things break help to inform safety?
    • 'Practical' approach as an adjunct / alternative to pure analysis
    • Fault induction concept from RAFIA [1]
  • Quality: what does this mean in the context of safety
    • What might a baseline of this for FOSS look like?
  • Continue discussions / Q&A on Trustable Software Framework (TSF)
    • How does this relate to the Quality and Fault Induction topics?

Notes

  • Paul: Is there a topic anyone would particularly like to discuss today?
  • Victor: What are the differences between safety and security from a process perspective, in relation to open source?
    • Security adds the complication of a possible human ‘bad actor’
    • Does the introduction of AI mean this applies for safety too?
  • Igor: Security concerns have a different focus: bad actors with a defined purpose in defeating security
    • What is the attack surface?
    • What are the opportunities for attacks?
      • May have a vulnerability without opportunity to exploit
  • Safety does not always need to consider these, but it has to consider a potentially broader scope, including very remote possibilities / probabilities
  • Are there differences in processes for safety? Physical environment, mechanical aspects may apply - not always relevant for (digital) security
  • Some aspects may be common from a process perspective - e.g. threat / hazard analysis techniques - but the nature of the threat / hazard may be very different
    • e.g. Security often assumes that a threat is external, but the ‘bad actor’ in safety may be a fault within the kernel, or a dependency, or the hardware
  • Paul: A lot of the underlying processes are the same from a software perspective: How do I record the results of my hazard / threat analysis? How do I identify how we are addressing these, and verifying that the mitigations / protections are effective?
  • Igor: With the kernel, part of the challenge is getting the kernel community to pay attention to a problem / threat.
    • Attention is determined by business drivers - e.g. power management and security became increasingly important when mobile devices came into play.
    • The business drivers have not been as important for safety-related concerns, but this is changing.
  • As critical systems become more complex (and more connected), the opportunity for new exploits and hazards increases, but so does the possibility of changing the software
    • A safety issue may also be exposed to exploitation by a bad actor
  • The scale of the problem is growing in both safety and security because of the complexity and ubiquity of software
  • Igor: Functional safety is (or should be! PA) a formalisation of good engineering practice
    • Should be familiar, not ‘arcane’
    • Always has a cost, so it must either add value or be justified by other motivations
  • Daniel: The connection between the engineering practices required by safety standards and the processes that apply to an open source component is often very hard to see
  • Paul: If you don’t know the ‘safety context’, the only things you can consider are what the software is intended to do, how it can fail, how we can verify that, and how we can be confident that these things are still true for the current iteration
  • In a commercial context, the drivers for these things are clear, but in an open source context the relationship between such drivers and how the software is developed may be less obvious