Artificial Intelligence in safety-critical systems: Are hopes “real” or “artificial”?

Engineers love acronyms but normally need the words spelled out … not so with A.I. Any mention of “A.I.” elicits excitement, along with myriad hopes and fears for the future. Are those hopes and fears founded? What is A.I., really? Does it, or will it, have a home in our safety-critical world? Let’s go …

You woke up this morning and checked your feeds while sipping the automatically prepared cappuccino from the Italian (is there any other?) fresh-roast machine in your kitchen – clearly you are using A.I., right? Your news feed says you are, but that news feed also carries daily articles about space aliens, muscles without exercise, and the flat earth … is it news or entertainment? More importantly, is it FACTUAL? True A.I. means exactly that: “Artificial Intelligence.” So what is true intelligence? An ability to learn, such that identical inputs yield a subsequently different output. (This is AFuzion’s definition, but 100 experts will provide 101 different definitions; A.I. is not The Calculus: there is no equation or deterministically provable answer.)

What is true Artificial Intelligence?

Let’s revisit that last phrase: “identical inputs yielding a subsequently different output.” Exactly: that is true A.I., because the computer program has learned from, and thus modified, its prior outputs for the same set of inputs. Your news feed may have “learned” which news you like and served you more of it (thus revealing why most humans are increasingly mentally vertical while losing horizontality in our quickly evolving world); but that news feed is probably not A.I.: the program didn’t change, but rather provided you different news in a formulaic, pre-programmed fashion. It’s 100% deterministic: any software tester could show exactly what results would occur given a specific sequence of your actions. Fake A.I., but “A.I.” for lightweight folks who rely upon news feeds for news. You are better than that … 😉
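To make that distinction concrete, here is a minimal, hypothetical Python sketch (all names are invented for illustration): the pre-programmed feed maps identical inputs to identical outputs forever, while the learning system updates internal state, so the same query can yield a different answer next time.

```python
# Hypothetical illustration only: deterministic feed vs. learning system.

def deterministic_feed(clicked_topics):
    """Formulaic, pre-programmed ranking: no internal state changes,
    so identical inputs always yield identical outputs (fully testable)."""
    return sorted(clicked_topics)

class LearningRecommender:
    """Toy stand-in for 'true A.I.': feedback modifies internal weights,
    so the output for identical inputs can subsequently differ."""
    def __init__(self):
        self.weights = {}

    def recommend(self, topics):
        # Rank by learned weight; the ranking drifts as weights change.
        return sorted(topics, key=lambda t: -self.weights.get(t, 0.0))

    def learn(self, topic, reward):
        # Learning step: the state change alters future outputs.
        self.weights[topic] = self.weights.get(topic, 0.0) + reward

r = LearningRecommender()
topics = ["weather", "sports", "avionics"]
print(r.recommend(topics))  # first answer for this input
r.learn("avionics", 1.0)    # the system learns from feedback
print(r.recommend(topics))  # identical input, different output
```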

Facebook CEO Mark Zuckerberg is obviously an intelligent and capable person, as is Elon Musk. But Musk’s 2017 quotation about Zuckerberg’s A.I. knowledge is telling: “I’ve talked to Mark about this (A.I.). His understanding of the subject is limited.” So let’s expand our perhaps limited understanding of A.I. for safety-critical systems. True A.I. typically uses a deep neural network to enable real learning. This learning enables the program (or programs) to provide continually evolving (and hopefully improving) responses to real-time scenarios. However, a fundamental aspect of safety-critical systems is determinism: explicitly proving that the same inputs produce the same outputs, every time. Safety-critical standards then require proof of this determinism via requirements-based test cases covering real-world scenarios. One of the better-known safety-critical guidelines is DO-178C for avionics (a good tutorial on DO-178C is available here: https://afuzion.com/do-178-introduction/).
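To see what such a determinism proof looks like in practice, here is a minimal, hypothetical requirements-based test in the spirit of DO-178C verification (the requirement ID, function, and control law are all invented for illustration): the same input vector must produce an identical output on every execution.

```python
# Hypothetical requirements-based determinism test (illustrative only).

def compute_pitch_command(altitude_ft, airspeed_kts):
    # Stand-in for a certified, deterministic avionics control law.
    return 0.01 * (10000.0 - altitude_ft) - 0.005 * (airspeed_kts - 250.0)

def test_determinism_req_042():
    """REQ-042 (invented): identical inputs shall yield identical
    outputs across repeated executions."""
    reference = compute_pitch_command(9500.0, 260.0)
    for _ in range(1000):
        assert compute_pitch_command(9500.0, 260.0) == reference

test_determinism_req_042()
print("Determinism requirement holds for the sampled scenario.")
```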

Deterministic A.I. – are you kidding?

Analyzing the above, it is clear we have a paradox: A.I. seemingly embodies non-determinism, whereas safety-critical standards REQUIRE determinism. Therefore you’ll never see true A.I. in safety-critical systems during your lifetime, right? Folks, modern engineering is about creating what was previously uncreated, and often what was thought impossible. While we have yet to certify an airborne system utilizing true A.I. (per my definition above), many engineers, including those at AFuzion, are working on deterministic A.I. solutions now. Those solutions include (a combined sketch follows the list):

  • Installing an external monitor, which itself deterministically (no A.I. involved here) checks the decisions of the Artificial Intelligence engine against in-bounds safety ranges
  • Using redundancy with dissimilar deep neural nets that monitor one another (triple redundancy is too complex here: there is no real way to “vote,” because the decisions are not binary gates)
  • Reverting to a “safe-mode” default state (the pre-learning baseline, discarding the learned delta) whenever potentially unknown or unsafe decisions are exhibited
  • Programming special rules that permit only limited, deterministically pre-defined learning patterns
  • As a last resort, reverting to a fully static program (sans A.I.)
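Here is a minimal sketch combining the first and last ideas above (all names, bounds, and control laws are invented for illustration; this is not an actual certified design): a deterministic monitor bounds-checks each A.I. decision and reverts to a static fallback whenever the decision leaves the proven-safe envelope.

```python
# Hypothetical sketch: deterministic external monitor around an A.I. engine.

SAFE_MIN, SAFE_MAX = -5.0, 5.0  # assumed proven-safe command range

def static_fallback(sensor_value):
    # Fully static, pre-certified control law (sans A.I.).
    return max(SAFE_MIN, min(SAFE_MAX, -0.1 * sensor_value))

def monitored_command(ai_engine, sensor_value):
    """Deterministic monitor (no A.I. here): accept the A.I. decision
    only if it stays within the safe envelope; otherwise revert."""
    candidate = ai_engine(sensor_value)
    if SAFE_MIN <= candidate <= SAFE_MAX:
        return candidate                  # A.I. decision is in bounds
    return static_fallback(sensor_value)  # reversion to safe mode

# Usage with a toy 'A.I.' that can misbehave:
def toy_ai(sensor_value):
    return sensor_value * 2.0  # may exceed the safe envelope

print(monitored_command(toy_ai, 1.5))   # 3.0: in bounds, A.I. decision used
print(monitored_command(toy_ai, 40.0))  # 80.0 is out of bounds: fallback -4.0
```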

There you have it: the above frameworks are in development, as are likely many more. Yes, you will see A.I. in your safety-critical systems in your lifetime, presuming you stay healthy and safe and live another decade. Cheers to the next decade!
