“The lesson here, digging beyond the flashy headline, is that “AI” in general and machine learning in particular breaks current approaches to safety engineering. Those approaches are based on traceability from requirements through implementation and into validation. We don’t test to assure safety – we test to assure that the safety engineering process turned out with the quality we expected it to have. The difference is as night and day. (This 3 minute video explains: https://lnkd.in/eJVdZVp2 )
Any exercise that gives a simplistic objective function (“destroy enemy” or “don’t hit other road users”) will have issues scaling up. Even the newly invented and still-arcane area of prompt engineering is unlikely to solve that fully. It might get things to mostly work, at best it will create a good set of requirements. But to get safety, requirements are only the starting point. You still need reviewable traceability through implementation and test.
Part of good engineering is that the engineers working on the design and implementation push back on requirements that seem ambiguous or prone to likely undesirable outcomes. (Many dysfunctional projects are ones in which such pushback is discouraged.) Automating the detailed design and implementation via a training process takes away that check and balance. You’re testing to see how it turned out, not to check that the engineering process was executed in a robust way.
If we want safe AI we’re going to need a rigorous engineering approach to get it. Lots of smart people are working on that, but we’re closer to the beginning of the journey than the end.”
Source: Phil Koopman’s post on LinkedIn related to the US Airforce denying running a simulation in which an AI drone ‘killed’ an operator.