
Autonomous Vehicles: What Taking a Back Seat Means for Insurance Coverage & Liability

February 16, 2017

According to Transport Canada, 2014 saw 149,900 total injuries resulting from motor vehicle accidents (“MVAs”), including 1,834 fatalities.[1] When exploring the cause of these accidents, human error is typically to blame.[2] In response, each year brings new technological advances to consumer goods, and the auto industry is no exception. Cars are built with ever more driver-assisted features, such as active cruise control, blind spot detection, and automated parking systems that allow a car to self-park. We are now entering a transitional period in which human drivers are supplemented, and perhaps even replaced, by lines of computer code. Autonomous vehicles are rapidly becoming a reality, as evidenced by the mainstream commercial investment into, and adoption of, autonomous systems technologies by technology giant Google Inc. and global vehicle manufacturers such as Volvo, BMW, Tesla, Mercedes-Benz, Honda, and Ford, to name a few.[3]

COMMERCIAL INTEGRATION

Estimated to have the potential to reduce the frequency of MVAs by up to 80% by 2040,[4] this technology is being developed not only for personal vehicles but also for commercial purposes. Uber, one of the world’s most popular commercial transportation businesses, has ventured into the use of autonomous cars. In September 2016, Uber released a fleet of autonomous Ford Fusions onto the roads of Pittsburgh, Pennsylvania as part of a test program that involved picking up passengers in place of traditional Uber drivers.[5] While this test program placed an Uber engineer in the driver’s seat who could take control if necessary, technology and auto experts predict that in the coming years autonomous cars will replace the human-operated vehicles that currently fill the roads.[6] Further, Daimler, one of the world’s largest manufacturers of commercial vehicles, has begun testing semi-autonomous 18-wheel transport trucks on the basis that this new fleet could be not only safer, but also more fuel efficient and predictable.[7]

EMERGING QUESTIONS FOR THE INSURANCE INDUSTRY

The introduction of autonomous systems technology will have a very real effect on the insurance industry, for both insurers and insureds, as well as on insurance litigation. According to the Insurance Institute of Canada’s 2016 report, “Automated Vehicles: Implications for the Insurance Industry in Canada”, the introduction of autonomous technology will shift liability for MVAs from primarily human error to a combination of human error and software error.[8] Should the technology continue to evolve such that fully-autonomous vehicles become the norm, software error would become virtually the sole cause of MVAs. This shift presents a number of challenges for manufacturers, provincial and federal regulators, consumers, and the insurance industry.

These developments raise a number of questions for the insurance industry and manufacturers alike, including: how will manufacturers be able to verify whether or not a vehicle’s autonomous systems technology was engaged at the time of a collision? Under what circumstances will insurance companies be allowed to access this information? How will subrogated claims unfold where automakers are found to be at fault? In the event that human error and technology failure both contribute to a motor vehicle accident, how will damages awards be dealt with?[9] Further questions emerge when considering the commercial introduction of fully-autonomous vehicles, including: to what extent will insurance coverage for the first self-driving vehicles be modelled on the product liability coverage currently in place for other categories of vehicles that already feature substantial use of autonomous technology, such as airplanes, ships, and trains? Will there be a need to redesign the coverage offered?[10] As the Insurance Institute of Canada observes, given the speed at which this technology is advancing, “much preparation needs to be completed in a short period of time”.[11]

LEGAL IMPLICATIONS

When considering how the widespread adoption of autonomous vehicles would change current policies surrounding motor vehicle operation and insurance, one of the first questions that comes to mind is also one of the most important: who (or what) is driving the vehicle? According to Paul Kovacs, the founder and executive director of the Institute for Catastrophic Loss Reduction, “as on-board computers begin to make driving decisions, responsibility for collisions will move beyond human drivers to include automakers, software developers, and maintenance professionals”.[12]

Currently, all motor vehicle liability policies in Ontario are governed by Part VI of the Insurance Act, RSO 1990, c I.8.[13] While the term “driver” is not explicitly defined within this part of the Act, the distinction between a driver and a passenger is an important one, especially within the context of liability. As Canadian lawmakers have yet to formally address some of the common issues arising from the use of autonomous vehicles, a look to the United States could be informative when developing our own domestic policies and legal approaches. Paul A. Hemmersbaugh, Chief Counsel for the United States’ National Highway Traffic Safety Administration, addressed the issue of driver identity in a letter to Google’s self-driving car project, noting that the traditional conceptualization of a ‘driver’ will not apply to these vehicles.[14] Instead, the vehicle’s software would be considered the ‘driver’.[15] Designating software as the ‘driver’ of an autonomous vehicle has very real consequences with respect to how liability flows in the event of an MVA.

Semi-Autonomous Vehicles & Attribution of Liability

If the vehicle in question is completely autonomous, an MVA caused by the vehicle would likely result in a product liability claim rather than a negligence claim against the human driver. By eliminating the human driver, auto manufacturers and software developers will likely assume greater liability for MVAs. The analysis becomes more complicated, however, when the vehicle in question is only semi-autonomous. How will a court apportion liability between a human driver and a car’s autonomous vehicle technology? In MVAs involving these types of vehicles, apportionment of liability will depend on various factors, such as whether the vehicle was functioning in autonomous mode, whether the human driver was using the technology as intended, and whether there were adequate safety warnings about how to use the technology. These factors were certainly among those considered in the case of Joshua Brown, the first reported fatality resulting from the use of an autonomous vehicle.

The Dangers of Improper Operation: Joshua Brown

On May 7, 2016, Brown was driving his Tesla Model S in Williston, Florida, when he enabled the car’s Autopilot mode. Shortly thereafter, he collided with a white tractor-trailer. Designed to assist the driver rather than replace the need for a driver altogether, the Autopilot technology requires the driver to remain alert and to keep his or her hands on the wheel.[16] Brown, however, appeared to believe that the technology enabled him to passively observe, as evidenced by a YouTube video he had posted in which he let go of the car’s wheel, allowing the car to maneuver itself in slow-moving traffic.[17] At the time of the fatal collision, Brown appeared not to be paying attention to his surroundings, and was reportedly watching a movie on a portable DVD player immediately before the collision.

In this case, liability for the collision would likely be shared by Brown and Tesla. Even though Brown was watching a movie while Autopilot was enabled, a clear misuse of the technology, the technology also failed in that it was unable to distinguish the white tractor-trailer from the bright sky.[18] With respect to product liability in Canada, it is the manufacturer’s duty to produce a product which is “reasonably safe”.[19] One of the factors that Canadian courts consider in determining whether a product is reasonably safe is the product’s history of failure.[20] This factor is especially relevant in Brown’s case, as this was not the only reported instance in which this issue with the technology emerged. A member of a Tesla owner message board noted that his Tesla’s Autopilot camera appeared to have a difficult time distinguishing lines during periods of exceptionally bright sunlight in the morning and nearing dusk.[21]

For vehicles like Brown’s Tesla Model S, where the driver is able to override the driver-assisted technology and regain control of the vehicle, the current legal framework surrounding driver negligence could arguably continue to apply. The technology could be viewed as an extension of more basic driver-assisted technologies currently in use. As is currently the case with technology such as parking assistance or cruise control,[22] drivers would likely remain liable for using the technology in any way other than as intended.

Fully Autonomous Vehicles & Product Liability

The legal response for fully-autonomous vehicles is less complicated, albeit more technical. Instead of a review of pages of witness statements and discovery transcripts, the search for human error appears likely to give way to a search for a coding error or a glitch in the operating system.

As previously mentioned, it is commonly accepted that most MVAs are the result of human error. Google has recognized this and, instead of implementing semi-autonomous technology in its vehicles, has designed its vehicles to prevent human drivers from taking control altogether.[23] In doing so, Google has acknowledged that human error is more likely to occur than system malfunction and that human intervention makes the vehicles inherently more dangerous.[24] By removing the human element, Google creates a situation in which liability is limited to Google itself. Given the emphasis placed on product liability in place of driver negligence, auto manufacturers and suppliers will have a greater need for comprehensive product liability insurance.[25] As stated in “Marketplace of change: Automobile insurance in the era of autonomous vehicles”, KPMG anticipates that auto product liability claims will grow from almost nothing today to become a market approaching the size of today’s commercial auto insurance market.[26]

THE CHANGING SCOPE OF INSURANCE NEEDS, COVERAGE & LITIGATION

While this growth in product liability insurance may appear to be a straightforward conclusion, it remains unclear what shape these insurance policies will take.

To shed some light on what these policies may look like, we can look to the insurance policies available for other types of vehicles with driver-assisted technology, such as planes, trains, and ships.[27] These types of vehicles often feature extensive automation, especially commercial aircraft[28] and subways,[29] and have coverage based on product liability rather than driver negligence.[30] For autonomous vehicles, legislative amendments would be required to clarify whether there are circumstances in which the owner of an autonomous vehicle would be liable, in addition to, or in place of, the auto manufacturer or the software developer.

The advent of autonomous vehicles also introduces a relatively new concern: hacking. As evidenced by Fiat Chrysler’s recall of 1.4 million vehicles in July 2015, a hacker could easily disable the windows, unlock the doors, disable the engine, and engage or disable the brakes or accelerator.[31] It is easy to imagine the litigation that could result over whether the vehicle owner’s insurer will respond in such an event, and to what extent the manufacturer is to blame. This issue adds yet another layer to the complexity of auto insurance litigation and is something that insurers, insureds, and litigators must consider.[32]

CONCLUSION

Over the years to come, our roads will be shared by human-driven vehicles, semi-autonomous vehicles, and fully-autonomous vehicles that require no human assistance.[33] The current approach to auto insurance coverage, and the legal framework that deals with the resulting claims, developed from the expectation that drivers make the errors that cause MVAs. As human error becomes less likely to be the cause of MVAs, both lawmakers and the insurance industry must adapt accordingly. While it is unclear what form insurance litigation will take as these vehicles become the norm on Ontario highways, one thing is clear: our legislators will have to rework existing legislation such as the Ontario Insurance Act and the Highway Traffic Act as drivers yield control to smart technology and increasingly become passengers. Typical claims involving questions of driver negligence, impaired driving, and licence restrictions may well give way to a reduced number of claims that instead involve questions of product liability, hacking, and technological failure.

 


[1] Transport Canada, “Canadian Motor Vehicle Traffic Collision Statistics” (online: Her Majesty the Queen in Right of Canada, represented by the Minister of Transport, 2016) <https://www.tc.gc.ca>.

[2] Kate McGillivray, “Human error biggest cause of traffic accidents, collision investigator says”, CBC News (16 November, 2016) online: CBC News <http://www.cbc.ca>.

[3] Paul Kovacs, “Automated Vehicles: Implications for the Insurance Industry in Canada” (online: The Insurance Institute of Canada, 2016) <https://www.insuranceinstitute.ca> at 14.

[4] “Marketplace of change: Automobile insurance in the era of autonomous vehicles” (online: KPMG, 2015) <https://assets.kpmg.com> at 26.

[5] “Uber gives riders a preview of driverless future”, CBC News (14 September, 2016) online: CBC News <http://www.cbc.ca>.

[6] Ibid.

[7] “Marketplace of change: Automobile insurance in the era of autonomous vehicles” (online: KPMG, 2015) <https://assets.kpmg.com> at 24.

[8] Supra note 3 at 2.

[9] Supra note 3 at ii.

[10] Supra note 3 at 10.

[11] Supra note 3 at 10.

[12] Supra note 3 at 2.

[13] Insurance Act, RSO 1990, c I.8.

[14] Letter from Paul A. Hemmersbaugh to Chris Urmson (4 February, 2016) online: National Highway Traffic Safety Administration <https://isearch.nhtsa.gov>.

[15] Brian Fung, “Google’s driverless cars are now legally the same as a human driver” (10 February, 2016) The Washington Post online: The Washington Post <https://www.washingtonpost.com>.

[16] Danny Yadron and Dan Tynan, “Tesla driver dies in first fatal crash while using autopilot mode” (1 July, 2016) The Guardian online: The Guardian <https://www.theguardian.com>.

[17] Ibid.

[18] Ibid.

[19] Mississauga (City) v. Keifer Recaro Seating, Inc., [2001] O.J. No. 1893, 2001 CarswellOnt 1725 at para 4 (ONCA); similar principle cited in Tabrizi v. Whallon Machine Inc., 1996 CanLII 3532 at para 30 (BCSC).

[20] Ibid.

[21] Supra note 19.

[22] Allied Systems Co. v. Sullivan Estate, 2005 CanLII 4576 at para 41 (ONSC).

[23] Brian Fung, “The big question about driverless cars that no one seems able to answer” (17 February, 2016) The Washington Post online: The Washington Post <https://www.washingtonpost.com>.

[24] Ibid.

[25] Supra note 3 at 2.

[26] “Marketplace of change: Automobile insurance in the era of autonomous vehicles” (online: KPMG, 2015) <https://assets.kpmg.com> at 29.

[27] Supra note 3 at 16.

[28] “Aviation Safety” (online: Boeing, 2016) <http://www.boeing.com>.

[29] “Automatic Train Control” (online: Toronto Transit Commission, 2016) <https://www.ttc.ca>.

[30] Supra note 3 at 16.

[31] Brent Snavely, “Fiat Chrysler recalls 1.4 million vehicles to block hacking”, Detroit Free Press (24 July, 2015) online: Detroit Free Press <http://www.freep.com>.

[32] Supra note 3 at 40.

[33] Supra note 3 at 2.
