Sunday, May 20, 2018

Failsafe & Fail Secure: Update for Improved System for Self-Driving or Autonomous Cars

Auto-pilot and autonomous (self-driving) vehicles are poised to become the de facto technology of the next decade, but not before working through some considerable technological challenges which, if unaddressed, could eventually delay or derail the entire venture.

As of this writing, these vehicles, in their current state of development and premature deployment, constitute a serious threat to life and property: the technology has not yet earned the maturity required to operate in live traffic, and until it does, these cars remain both dangerous and, as recent incidents show, deadly.

Before any more tragic accidents occur, the vehicles must be pulled from live traffic and entered into a long stage of beta development on closed roads and training facilities, with specially trained human drivers serving as Instructors or coaches. These human mentors, teamed with the most sophisticated A.I. technology, will together refine the code behind the decision-making capabilities of these autonomous systems. Further, the cars should not be returned to the road until the current bugs have been expunged from their systems, making them safe to operate around human drivers and pedestrians.

Until this is accomplished, a self-driving car is nothing more than a novelty, and nothing less than a lethal weapon.

Final Caveat: Until they are perfectly safe, self-driving, autonomous, or auto-piloted vehicles should sport warning lights (similar to police or taxicab lights) and issue steady, periodic warning sounds alerting all pedestrians that such vehicles are under autonomous control, and potentially dangerous.




Accidents Involving Self-Driving Vehicles


Incident: Vehicle plowed into a concrete highway lane divider, and then burst into flames.
Maker/Operator: Tesla
Failure/Cause: Vision system failed to recognize certain stationary objects, particularly objects whose color blends into the background
Fatalities/Casualties: One, driver Wei Huang

Incident: Autopilot slammed into stopped fire truck
Maker/Operator: Tesla
Failure/Cause: The system cannot reliably detect stationary objects when the vehicle is traveling above roughly 50 mph (80 km/h) and the car ahead moves aside, leaving a stopped vehicle directly in its path. (Systems such as Autopilot are designed to ignore static, non-moving objects at speed.)
Fatalities/Casualties: None

Incident: Self-driving vehicle killed a pedestrian
Maker/Operator: Uber
Failure/Cause: Software flaw; the pedestrian was detected by the sensors, but the software ‘decided’ to ignore her, classifying her as a ‘false positive’.
Fatalities/Casualties: One; the pedestrian was struck as she crossed the street in front of the vehicle, which did not slow as it approached her.

Incident: Self-driving car abruptly aborted a lane change, knocking over a motorcyclist who was moving into the spot the car had just vacated.
Maker/Operator: GM (General Motors)
Failure/Cause: The self-driving system ‘misjudged’ when it was safe to switch lanes.
Fatalities/Casualties: One injured (no fatalities); the motorcyclist was knocked from his bike when the car abruptly moved back into the lane it had just left.

Note: In the Arizona case, where the Uber vehicle ran into the pedestrian without slowing (regarding her as a ‘false positive’ to be ignored), the software was reportedly operating as designed. A similar conclusion was reached in an earlier investigation of Tesla’s Autopilot, in which “The National Highway Traffic Safety Administration concluded that the system was operating as intended, wasn’t defective, and that Tesla didn’t need to recall any cars.” Even when a vehicle’s systems operate exactly as intended, a system that kills a person is still in error, and the vehicle, along with every model running the same driving software, should have been removed from the road.




1.    Self-driving cars are potentially deadly weapons and should be treated as such.
2.    Default to Halt: During the beta stages of development (which should be measured in years, not months or weeks), certain safety protocols should be in place which immediately halt the vehicle (as soon as it is safely possible to do so) until it has been determined (by a human driver or coach) that it is safe to proceed. (A rough sketch of this halt-by-default logic appears after this list.)
3.    Human Instructors: Specially licensed and trained human drivers need to be present in the vehicle throughout the beta stage to make decisions, keep notes, and pass this information on to the programmers and hardware developers for the continuous and safe evolution of the car’s navigation systems.
4.    Specialized Training: Instructors should also receive regular training along with the vehicles, all with an eye toward improving the car’s autonomous driving ability by uploading or ‘imprinting’ the decision-making skills of the coach into the car’s A.I. systems, and by having those systems analyze the driving practices of all coaches and instructors to find aggregate, repeatable practices to emulate.
5.    Extended Beta Stage: While in the beta stage of development, autonomous vehicles should be designed and built with systems that prohibit the car from starting, moving, or even turning on unless a coach is in the passenger seat, logged in to the car’s navigation system, with his or her own steering and control system.
6.    Driving and Training Range: Self-driving cars should also be routinely run through a driving range designed especially for them, to shake out unforeseen bugs in the system without endangering human lives or property. This debugging stage should not come at the cost of any more human lives.
7.    License Revoked: With pedestrian safety in mind, all autonomous vehicles now on the road should be suspended from driving in ‘live traffic’ until the end of the beta period.
8.    Failsafe & Secure: The proposed systems, as previously defined in the initial document, Improved System for Self-Driving or Autonomous Cars, have several layers of built-in redundancy. At least three of these different subsystems (out of the dozen or so) must agree on a course of action before the car proceeds; if that quorum is not met, the vehicle should stop instead of plowing ahead. (A simplified sketch of this quorum voting appears after this list.) The traffic situation that causes the vehicle to halt should be fully documented by the car’s own systems and by the on-board human coach. This information should in turn be added to a comprehensive database, which will become part of the ‘experience’ set of all autonomous vehicles; such a database should be monitored and updated regularly. Eventually, the A.I. learning systems will parse the data submitted by all drivers and vehicles, using that data to learn repeatable ‘best driving practices’, which can in turn be passed on to the vehicles’ programming.
9.    Mimic Human Behavior: In time, autonomous cars can become better drivers than their human counterparts, but until that time, those vehicles can still ‘learn’ from human drivers, even the worst of them. Human drivers, on encountering something ‘unknown’ in the road, will come to a sudden stop, if only to preserve their own lives and property. Autonomous or robotic vehicles should mimic this behavior, especially since human lives, property, and expensive prototype systems (the car itself) may be endangered or compromised.
10.  Self-Diagnostics: On startup (with coach present), the car should run through a series of self-diagnostics for all its navigation and communication systems. If any of these systems fails, a second diagnostic should be attempted before the car reverts to non-autonomous mode, secures its doors and windows, and, if it can, calls in its status to the nearest Control Center. (A sketch of this startup check appears after this list.) During this time, only a Coach or a systems developer should be able to unlock the vehicle and drive it manually; otherwise, it should be towed to the nearest certified service facility for a full systems analysis.
11.  The Obstacle Driving Course: Police and other armed safety and security officers often train on a specially designed course called ‘Shoot / Don’t Shoot’, in which they are presented with several ‘targets’ and must decide, at a moment’s notice, whether or not to shoot. Such a course, modified to present various traffic situations, needs to be developed for the A.I. systems that control autonomous vehicles. This exercise would allow the software and hardware developers (and the A.I. itself) to gain useful practice and insight into which conditions on the ‘real road’ require the vehicle to stop and which do not, without endangering human lives. When there is even the slightest doubt, the vehicle should be trained or programmed to stop. (A toy version of such a test harness appears after this list.)
12.  Hardware Sensors are the car’s eyes and ears and, as such, must be near flawless. Because no hardware systems are flawless, redundancy has been prescribed for these systems, as described in the parent document (Improved System for Self-Driving or Autonomous Cars). These vehicles should not work as human drivers do, peering ahead into darkness (or foul weather) and then reacting too slowly, too late, or, worse, not at all when something unexpected appears on the road ahead. The purpose of having self-driving or autonomous vehicles is to improve on human drivers, not to emulate their bad habits; if that is not the case, then why have them? These vehicles should have an area awareness far superior to that of human beings. They should also react faster, and have access to a constant data stream of road and weather conditions, which would preclude them from ever being ‘surprised’. For this to be possible, they must have superior detection systems with multi-layered redundancy and a ‘voting system’ in which a significant majority of the subsystems decides (again, see the quorum sketch after this list). Any less, and the vehicle stops; it does not ‘plow ahead’. A human driver would not, unless he or she were somehow incapacitated.
13.  Digital Rear-View Mirrors should be able to capture and store video to the diagnostic system or ‘black box’, and also aid the car or its owner/Instructor in operating the vehicle should the need arise.
14.  Laser-enhanced Headlights can be used to measure the distance between one vehicle and another, and serve as an additional way to detect other obstacles in the road, such as human beings or animals - another layer of redundancy in the protection of human life and property. (The rear taillights can also have this feature; a minimal time-of-flight calculation is sketched after this list.)
15.  Proximity Awareness is something that every car on the road should have, whether it is a self-driving vehicle or not. (Older cars can be retrofitted with a cellular-based device.) Autonomous cars should always know where other vehicles are in relation to their own current position, given the variables of speed, distance, and a predetermined AOC (‘area of concern’); a rough way to derive that AOC from speed is sketched after this list. This information, along with an awareness of construction, roadblocks, bottlenecks, and weather conditions, will determine how fast the car travels, and how close it stays to the cars immediately around it.
16.  A governing body should be established by each jurisdiction (state, city, county, or country). This body will determine how long vehicles stay in the beta stage of development, when Instructors and owners (and their vehicles) must retrain and re-certify, and when cars must be pulled off the road for maintenance and checkups. Divisions of this governing body would also exercise authority over other types of autonomous devices, such as drones and aircraft. (See below.)
17.  A designated and certified owner/operator should be in the vehicles at all times during the beta stage, and should have the ability to either stop the vehicle, or seize control of it whenever necessary. Interface with the car can include voice, control panel, smartphone, and, of course, the traditional steering/stopping mechanisms.
18.  Remote Retrieval and Operation will allow an unattended or disabled vehicle to be remotely operated and driven (by PC) into a service and diagnostic facility, preferably (at first) by a human operator.
19.  A.I. Systems and Diagnostics are essential to safely operating autonomous or robotic vehicles. The inner workings of current A.I. technology are quickly becoming a mystery, even to its developers; this state of affairs must end. If these vehicles are to be safely re-released to the road, then every decision made by the A.I. system controlling the car must be understood if it is to be successfully debugged. If a vehicle, or class of vehicles, continually makes the same poor decisions on the road (or, preferably, on the driving and training range), then the reasons it makes these mistakes must be discovered and, more importantly, understood. That means knowing what the A.I. is ‘thinking and doing’ at all times. To assist in this, a series of self-diagnostic routines must be built into the software. These routines should record variable dumps and changes, routine and module calls, test states, the multiple options considered (and why they were chosen or discarded), and the status of the vehicle when flawed or fatal decisions were made. In essence, the A.I. needs its own built-in ‘black box’, working in conjunction with its programming and decision-making routines and the processor(s) running that code. There should also be several levels of diagnostics (similar to those in the Star Trek: The Next Generation series). A sketch of one possible decision record appears after this list.
20.  System Health will be constantly monitored by sensors built into the vehicle, which will report the status of the car’s major systems (propulsion, brakes, combustion, electronics, environmental, steering, etc.) to the owner upon starting the car. This information will be gathered during the diagnostic phase and relayed to the owner visually (onscreen) and, if the owner desires, by email. It can also be immediately uploaded to the nearest Control Center or service technician at the owner’s request. (This, for instance, would be requested if the car were performing oddly or erratically.)
21.  Wireless or Remote Recharging should be the goal of all vehicles, especially autonomous ones. Such vehicles should be able to recharge (or refuel) remotely: wirelessly (while still in motion), at charging stations (as some vehicles do as of this writing), or via remotely dispatched flying drones capable of charging several vehicles.
22.  Emergency Navigation: All autonomous vehicles should be constantly aware of and able to find at a moment’s notice all emergency service locations, such as hospitals, fueling (or gas) stations, police stations, and, of course, home.
23.  Status Monitoring is a feature of the B.E.A.C.O.N. (see original document) software running on the smartphone or iPhone. Once removed from its Command Cradle, the handheld device should constantly monitor the status and location of the vehicle, like an intelligent LoJack system, alerting the owner if the car is tampered with, vandalized, moved, or even ticketed and towed. A police or municipal officer ticketing or towing such a vehicle would relay this information to his or her precinct, which, in turn, would report the pertinent details to the nearest Control Center, which would then relay them wirelessly to the owner.
24.  Glass Break Technology should be developed (and deployed in a fashion similar to airbags) in the event that a vehicle is suddenly submerged with passengers trapped inside. The mechanism should be triggered either manually or automatically, if the system senses that water has filled the passenger space. Most importantly, it should be configured and designed in such a way that the shattering glass does not pose an additional threat to passengers in the vehicle.
25.  Fire Control and Suppression: Each vehicle should have its own two-part fire suppression system: one for the passenger compartment (safe to breathe for short periods) and one for the non-passenger areas of the car, such as under the hood, in the trunk, and inside the car’s inner workings.
26.  Authentication Stream and Authentication Fingerprint: A steady flow of authentication data, sent hundreds or even thousands of times per second, will help protect the vehicle from unauthorized remote control, access, seizure, theft, or driving. This stream should use the strongest available encryption (or some new type, yet to be developed) and may incorporate blockchain technology. Any significant breach or break in the flow or rhythm of the stream may indicate that the vehicle’s online security has been compromised. The authentication fingerprint will be different for each vehicle, based on the maker of the car, information about the owner, the current trip, the pre-established rhythm of the data flow (with its own unique pauses and stops), and the home Control Center alpha-number, several dozen digits and letters long. (Each vehicle will have access to all Control Centers, but will call only one ‘home’ at a time. Home Control Centers can be reassigned, but only by the governing authority, or by petition to that authority by the owner.) A simplified sketch of such a keyed heartbeat appears after this list.

27.  Co-Pilot: With the dozens of sophisticated systems onboard, the driver/owner, if he or she so desired, could monitor the steady flow of information via an onscreen output called the system co-pilot, which would intercept and manage this data to make it more accessible to the human occupants of the vehicle. It could be configured to share only the information most vital to the human driver, while withholding the more mundane (or more complex) details unless preconfigured to do otherwise. (Both the decision and the configuration settings are at the discretion of the driver.)
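
The following rough sketches (all in Python) illustrate a few of the mechanisms proposed above. They are not specifications; the names, thresholds, and interfaces in them are assumptions made purely for illustration.

First, the ‘default to halt’ and quorum-voting behavior described in items 2, 8, and 12: the vehicle acts only when at least three independent subsystems agree on a course of action, and otherwise stops as soon as it is safe to do so. The quorum of three comes from item 8; the subsystem names and everything else are invented for the example.

    # Illustrative sketch of quorum-based "default to halt" logic (items 2, 8, 12).
    # Subsystem names, data shapes, and the sample inputs are assumptions
    # for this example only.
    from collections import Counter
    from typing import Dict

    QUORUM = 3  # at least three independent subsystems must agree (item 8)

    def decide_action(proposals: Dict[str, str]) -> str:
        """Return the agreed action, or 'HALT' when no quorum exists.

        `proposals` maps a subsystem name ('lidar', 'radar', 'camera', ...)
        to the action it recommends ('proceed', 'slow', 'halt', ...).
        """
        votes = Counter(proposals.values())
        action, count = votes.most_common(1)[0]
        if count >= QUORUM:
            return action
        return "HALT"   # no quorum: stop as soon as it is safely possible

    # Three of five subsystems agree, so the car proceeds.
    print(decide_action({"lidar": "proceed", "radar": "slow", "camera": "slow",
                         "gps_map": "proceed", "v2v": "proceed"}))   # proceed
    # No action reaches the quorum, so the car halts by default.
    print(decide_action({"lidar": "halt", "radar": "slow", "camera": "proceed",
                         "gps_map": "proceed", "v2v": "slow"}))      # HALT

A halt triggered this way would then be documented by the car and the on-board coach and pushed to the shared database, as item 8 describes.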
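
Next, the startup self-check from item 10: run the diagnostics, retry once on failure, and on a second failure revert to non-autonomous mode, secure the car, and notify the nearest Control Center. The subsystem list and the Vehicle methods are placeholders.

    # Sketch of the startup self-diagnostic flow described in item 10.
    # Subsystem names, retry count, and Vehicle methods are placeholders.
    NAV_SYSTEMS = ["lidar", "radar", "cameras", "gps", "v2v_radio", "black_box"]

    class Vehicle:
        """Minimal stand-in for the car's control interface."""
        def set_mode(self, mode):
            print(f"mode -> {mode}")
        def lock_doors_and_windows(self):
            print("vehicle secured")
        def notify_control_center(self, **status):
            print(f"control center notified: {status}")

    def run_diagnostic(system: str) -> bool:
        """Placeholder check; a real version would poll the hardware."""
        return system != "lidar"            # simulate a failing lidar unit

    def startup_check(vehicle: Vehicle) -> bool:
        """Run the self-test twice; on a second failure, fall back safely."""
        failed = []
        for attempt in (1, 2):
            failed = [s for s in NAV_SYSTEMS if not run_diagnostic(s)]
            if not failed:
                return True                 # all systems pass: autonomy permitted
        # Both attempts failed: revert, secure, and report (item 10).
        vehicle.set_mode("manual_only")
        vehicle.lock_doors_and_windows()
        vehicle.notify_control_center(status="diagnostic_failure", systems=failed)
        return False

    print(startup_check(Vehicle()))         # False in this simulated failure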
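
Item 11’s ‘stop / don’t stop’ course can also be exercised in software long before a physical range exists. The toy harness below runs a planner against labeled scenarios and reports any case where the planner fails to stop when the label says it must; the scenario data and the planner are stand-ins.

    # Toy "stop / don't stop" regression harness (item 11).
    # Scenarios and the planner are invented for illustration.
    SCENARIOS = [
        # (description, sensor reading, must_stop)
        ("pedestrian entering crosswalk",   {"object": "pedestrian", "moving": True},  True),
        ("plastic bag blowing across road", {"object": "debris",     "moving": True},  False),
        ("stopped fire truck in lane",      {"object": "vehicle",    "moving": False}, True),
        ("unknown shape at lane edge",      {"object": "unknown",    "moving": False}, True),
    ]

    def toy_planner(reading) -> bool:
        """Return True to stop.  When in doubt, stop (per item 11)."""
        return reading["object"] in ("pedestrian", "vehicle", "unknown")

    def run_course():
        failures = [name for name, reading, must_stop in SCENARIOS
                    if must_stop and not toy_planner(reading)]
        print(f"{len(SCENARIOS) - len(failures)} of {len(SCENARIOS)} scenarios handled safely")
        return failures

    run_course()    # prints '4 of 4 scenarios handled safely'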
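
The laser-enhanced headlights of item 14 amount to a time-of-flight measurement: half of the pulse’s round-trip travel time, multiplied by the speed of light, gives the distance to whatever reflected it.

    # Time-of-flight range estimate for the laser-enhanced headlights (item 14).
    # distance = (speed of light x round-trip time) / 2
    C = 299_792_458.0       # speed of light in meters per second

    def distance_from_pulse(round_trip_seconds: float) -> float:
        """Distance in meters to whatever reflected the pulse."""
        return C * round_trip_seconds / 2.0

    # A pulse returning after 200 nanoseconds reflected off something ~30 m away.
    print(round(distance_from_pulse(200e-9), 1))    # 30.0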
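
For the ‘area of concern’ in item 15, one simple approach is to grow the radius with the vehicle’s own speed, allowing for reaction time and braking distance. The 1.5-second reaction allowance, the 7 m/s² braking figure, and the 10 m margin below are assumptions for illustration.

    # Sketch of a speed-dependent "area of concern" (AOC) for item 15.
    # Reaction time, deceleration, and margin are illustrative assumptions.
    REACTION_TIME_S = 1.5       # sensing + decision latency allowance
    MAX_DECEL_MS2   = 7.0       # assumed emergency braking capability

    def area_of_concern_m(speed_ms: float, margin_m: float = 10.0) -> float:
        """Radius (meters) within which other vehicles demand attention."""
        reaction_distance = speed_ms * REACTION_TIME_S
        braking_distance  = speed_ms ** 2 / (2 * MAX_DECEL_MS2)
        return reaction_distance + braking_distance + margin_m

    def vehicles_of_concern(own_speed_ms, nearby):
        """Filter (vehicle_id, distance_m) pairs down to those inside the AOC."""
        radius = area_of_concern_m(own_speed_ms)
        return [vid for vid, dist in nearby if dist <= radius]

    # At 30 m/s (about 108 km/h) the AOC works out to roughly 119 m.
    print(round(area_of_concern_m(30.0)))                        # 119
    print(vehicles_of_concern(30.0, [("A", 40), ("B", 150)]))    # ['A']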
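
Item 19’s software ‘black box’ needs to capture not just the chosen action but the options that were considered and rejected, along with the vehicle’s state at that moment. One plausible shape for such a record (the field names and the JSON-lines log format are assumptions):

    # Sketch of the decision "black box" record described in item 19.
    # Field names and the JSON-lines format are assumptions for illustration.
    import json, time
    from dataclasses import dataclass, asdict
    from typing import Dict, List

    @dataclass
    class DecisionRecord:
        timestamp: float
        vehicle_state: Dict[str, float]     # speed, heading, position, ...
        options_considered: List[Dict]      # each option with score and reason
        chosen_option: str
        reason_chosen: str
        subsystem_votes: Dict[str, str]     # raw inputs to the quorum check

    def log_decision(record: DecisionRecord, path: str = "decision_blackbox.jsonl"):
        """Append one decision to the on-board log for later diagnostics."""
        with open(path, "a") as fh:
            fh.write(json.dumps(asdict(record)) + "\n")

    log_decision(DecisionRecord(
        timestamp=time.time(),
        vehicle_state={"speed_ms": 17.9, "heading_deg": 92.0},
        options_considered=[
            {"action": "proceed", "score": 0.41, "reason": "camera reports clear path"},
            {"action": "halt",    "score": 0.87, "reason": "lidar reports object ahead"},
        ],
        chosen_option="halt",
        reason_chosen="highest-scoring option; unresolved object in path",
        subsystem_votes={"lidar": "halt", "radar": "halt", "camera": "proceed"},
    ))

Records like these, parsed in aggregate, are what would let developers see what the A.I. was ‘thinking and doing’ when a flawed decision was made.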
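
Finally, the authentication stream of item 26 behaves like a keyed heartbeat: each message is an HMAC over the vehicle’s fingerprint, a counter, and a timestamp, and any gap, replay, or bad tag is treated as a possible compromise. The sketch uses Python’s standard hmac module; the fingerprint composition, the shared key, and the timing threshold are assumptions.

    # Sketch of the per-vehicle authentication stream from item 26.
    # The fingerprint, shared key, and timing threshold are assumptions.
    import hashlib, hmac, time

    SHARED_KEY  = b"provisioned-by-home-control-center"                  # placeholder
    FINGERPRINT = b"MAKER:EXAMPLECO|OWNER:0042|HOME_CC:ALPHA-7734"       # per-vehicle

    def heartbeat(counter: int, timestamp: float) -> bytes:
        """One message in the stream: an HMAC over fingerprint, counter, time."""
        payload = FINGERPRINT + f"|{counter}|{timestamp:.3f}".encode()
        return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

    def verify(counter: int, timestamp: float, tag: bytes,
               last_counter: int, max_gap_s: float = 0.2) -> bool:
        """Reject out-of-order counters, stale timestamps, or bad tags."""
        if counter != last_counter + 1:
            return False                    # missing or replayed message
        if time.time() - timestamp > max_gap_s:
            return False                    # break in the stream's rhythm
        return hmac.compare_digest(heartbeat(counter, timestamp), tag)

    # The vehicle emits; the home Control Center verifies.
    now = time.time()
    tag = heartbeat(counter=1, timestamp=now)
    print(verify(1, now, tag, last_counter=0))    # True
    print(verify(3, now, tag, last_counter=0))    # False: gap in the sequence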