Can innovative AI-based training techniques prove autonomous vehicle safety?

Terence Broderick


A new report into UK autonomous vehicle regulations recommends how to define the legal actors in play when fully autonomous vehicles (AVs) are operated, in the latest step towards AV proliferation. The recommendation poses key questions for AV driving system operators, who need to ensure that their software is sufficiently safe for widespread public use. Yet with hundreds of billions of road miles required to adequately train AV driving systems and test for reliability, innovative alternatives are needed. Here is what such alternatives could look like and how patent protection can help to drive and protect their development.

No user in charge

The report in question (published jointly by the Scottish Law Commission and the Law Commissions of England and Wales) recommends definitions for legal actors in ‘user in charge’ (UIC) and ‘no user in charge’ (NUIC) AV driving scenarios.

A UIC entity is an individual in a vehicle with the ability to operate the controls while an automated driving system (ADS) is engaged. An NUIC entity is an organisation which operates a vehicle during the engagement of an ADS feature that is authorised for use when no driver is in charge.

When an NUIC ADS feature is engaged, the responsibility for the journey will lie with the NUIC operator — a licensed entity which, among other things, is professionally competent to run the service.

The report defines an AV as a vehicle, equipped with an ADS, able to conduct the entire driving task in one or more operational design domains (ODDs), including different road types (like a motorway) or weather conditions (such as rain).

Hundreds of billions of miles

The key technical challenge in configuring NUIC features lies in training the ADS and its underlying model, which together determine how the system responds to driving scenarios, and in the ongoing monitoring and improvement of that model.

The volume of data needed to realise fully autonomous driving was outlined in a study by RAND Corporation, which set out that hundreds of billions of driving miles may be needed to demonstrate AV reliability. Therefore, the study called for the development of new, alternative techniques to train AVs effectively.

Alternative training techniques

The history of applied mathematics and statistics is littered with examples of techniques developed to solve a problem in one field being borrowed to address problems in another. The RAND study extrapolates from known accident statistics, using statistical inference techniques based on an assumption of normally distributed accident fatalities, to predict the number of miles needed to achieve the levels of statistical confidence that would be required of an AV.
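To give a sense of the scale involved, the sketch below estimates how many failure-free miles would be needed to bound a failure rate with a given confidence. It is not the RAND methodology itself; it assumes a simple Poisson failure model and uses an illustrative benchmark of roughly one fatality per 100 million miles.

```python
import math

# Illustrative sketch only -- not the RAND calculation. It shows the kind of
# statistical inference involved: how many failure-free miles are needed to
# claim, with a given confidence, that the true failure rate is below a
# target, assuming failures follow a Poisson process (an assumption made
# purely for this example).

def miles_to_demonstrate(max_rate_per_mile: float, confidence: float) -> float:
    """Failure-free miles needed to bound the failure rate at the given confidence."""
    return -math.log(1.0 - confidence) / max_rate_per_mile

# Hypothetical figure for illustration: ~1 fatality per 100 million miles,
# demonstrated to 95% confidence.
human_fatality_rate = 1.0 / 100_000_000
print(f"{miles_to_demonstrate(human_fatality_rate, 0.95):,.0f} miles")
# -> roughly 300 million miles just to match the benchmark; demonstrating a
#    statistically significant *improvement* over it pushes the requirement
#    towards the billions of miles the RAND study describes.
```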

It is likely that the development of new training techniques will look at alternative assumptions, especially in those ODDs which are more difficult to simulate, enabling reliability statistics to be extrapolated from less (but more specific) data. Challenging the assumption of normally distributed data has yielded useful results in fields such as telecommunications and finance. Alternatively, where datasets are large and contain many variables, manifold theory could perhaps be applied to simplify the data for analysis, as sketched below.
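The following minimal sketch illustrates the manifold idea on synthetic stand-in data: high-dimensional "scenario" feature vectors are embedded into a low-dimensional space using Isomap, one off-the-shelf manifold-learning technique. The features, dimensions and choice of method are assumptions for illustration only; the right choices for real driving data remain an open question.

```python
import numpy as np
from sklearn.manifold import Isomap

# Synthetic stand-in data: 1,000 driving "scenarios", each described by
# 50 variables (speeds, distances, weather measures, etc. in a real system).
rng = np.random.default_rng(0)
scenarios = rng.normal(size=(1_000, 50))

# Embed the scenarios into a 3-dimensional manifold coordinate system,
# making the dataset easier to cluster, visualise or sample from.
embedding = Isomap(n_components=3).fit_transform(scenarios)
print(embedding.shape)  # (1000, 3): each scenario reduced to 3 coordinates
```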

Any techniques which provide insight into how a smaller amount of data could be useful will surely be welcome as the industry moves forward with the proliferation of AVs.

Technical challenges

As technical challenges such as these present themselves, investment and effort drive the emergence of solutions. In these situations, it is natural to examine whether patent protection is viable.

In the area of AV training, artificial intelligence (AI) will inevitably be involved, given the emphasis on data and the need for vehicles to respond autonomously to situations which may not have formed part of the testing or training of the underlying model. AI techniques will doubtless be combined, at least in part, with the enhanced statistical inference techniques developed to address the problems associated with the required volume of data. The European Patent Office (EPO) indicates that AI-based innovations are patentable if they contribute to the solution of a technical problem.

A method of ADS training could be determined to be non-technical if the output of the claimed subject matter is the model itself — i.e., the model on which the ADS is based, rather than a method which has any material effect on a vehicle. The EPO’s guidance on this indicates that such subject matter is seen as abstract.

However, there is another way to look at this. A method of ADS training is ultimately tantamount to a method of configuring that ADS to provide specific outputs which respond to specific inputs — and since those outputs are used in controlling a vehicle, this must be considered a technical effect.

The likelihood of success in pursuing patent protection for a method of ADS training will depend on how it is described in the patent application. It will be important to link the training and testing of the underlying model clearly to a feature of an autonomous vehicle, and to provide a clear description of the training data and testing methods used.

If you are innovating in this space, get in touch with me for a free initial chat about your technology and how our automotive and software experts can help.
