Autonomous cars raise important ethical questions
Who's liable when an autonomous car crashes? The driver? Google? The programmer?
Liability: Liability is a major ethical issue surrounding autonomous vehicles. Complex systems inherently contain errors and bugs, and Google's self-driving car is not immune to software failure. A central question is how to assign fault when an autonomous vehicle crashes. The only instance of the Google car crashing was attributed to human error: another car hit the Google car. As autonomous vehicles become more prevalent, however, a system of responsibility must be established. If the software misinterprets a worn-down sign, does the blame fall on the department of transportation for poorly maintained signage or on the company that produced the self-driving software? It is unclear where liability will ultimately rest in the realm of self-driving cars, but it is known that the United States is quick to place blame on car manufacturers: in 1992 Ford faced over 1,000 product liability suits in the United States and only one in Europe. The precedent set over the next few years will have a significant impact on how willing car companies are to pursue autonomous vehicle technology.
Will society be better off as a whole with autonomous vehicles?
Society: At an overall level, self-driving cars seem to create an environment where society is better off as a whole. The creators of the Google self-driving car aim to save millions of lives by eliminating automobile-related accidents in the United States and eventually the world. The intent and the final result of fewer automobile-related deaths would be accepted under both a Deontological and a Utilitarian framework, because the intent is to save millions of lives and the end result is the elimination of car accidents. These philosophical frameworks could diverge, however, at a lower level of examination. Consider the difference between a computer-operated car and a human-operated car. If a crash is about to occur, a human will almost always have the virtuous intention of avoiding the crash, even if the crash is not avoided. Utilitarians would still likely favor the autonomous car at this level, because the self-driving car will likely outperform the driver in avoiding the crash altogether. A Deontologist, by contrast, may struggle with the idea of a computer having a "good will" when acting to avoid the crash: when a car must choose between killing a pedestrian or the driver, is the act carried out with good intention, or is it simply a process executed arbitrarily? A Deontologist might still favor autonomous cars, however, because the good intention of choosing a safer self-driving car in the first place could override the decisions later made by the car's technology.
Regardless of which ethical philosophy is used when deciding whether society is better off as a whole, the proliferation of autonomous vehicles will depend on convincing the public that self-driving cars are significantly safer than manually operated ones. People tend to want control over avoiding accidents and bodily harm, and it is unclear how willing drivers will be to give up that control in favor of safety and convenience. Many drivers may accept a slightly increased chance of an accident in exchange for retaining the ability to avoid one themselves.
Other Ethical Issues:
Unemployment- A significant number of people in the United States make a living by driving (cab drivers, truck drivers, delivery workers). When self-driving cars become commonplace, will these people lose their jobs?
Trolley problem- Would the car choose to let four pedestrians die or to sacrifice its own driver?
Current laws vs. future- Do you make autonomous cars conform to current laws and vehicle codes or establish new ones?
Vulnerability- How do you ensure autonomous cars remain safe once the system becomes advanced enough to warrant a car network? What about cybersecurity and the autonomous car?
Expectation that a car is autonomous- Do you still allow manually operated cars when most cars are autonomous? Consider an elevator: people expect that sticking an arm in the door will cause it to reopen. Will similar expectations become the norm for cars?
Private vs. public sector/Monopoly- How much influence should a single company like Google have in lawmaking surrounding autonomous vehicles? Too much influence could lead to government-created and government-enforced monopolistic barriers to entry for other companies. The current policy page shows the Google car's logo directly next to the California state seal.
Differing laws in different countries/states- Should autonomous vehicles be regulated at an international, national, or state level? Regulating self-driving cars at the state level could significantly delay innovation, since manufacturers would have to conform to many different state-level regulations.
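The trolley problem above can be made concrete with a small sketch. This is a hypothetical toy, not any manufacturer's actual logic: a purely Utilitarian controller would simply pick the action with the lowest expected harm, with no notion of intent or "good will". All action names and numbers here are illustrative assumptions.

```python
def utilitarian_choice(outcomes):
    """Pick the action with the lowest expected harm.

    outcomes: dict mapping an action name to expected fatalities.
    """
    return min(outcomes, key=outcomes.get)

# Hypothetical unavoidable-crash dilemma: swerving risks the driver,
# continuing straight risks four pedestrians.
dilemma = {
    "swerve_into_barrier": 1.0,  # expected fatalities: the driver
    "continue_straight": 4.0,    # expected fatalities: four pedestrians
}

print(utilitarian_choice(dilemma))  # prints "swerve_into_barrier"
```

The sketch highlights why a Deontologist might object: the "choice" is just a minimization over numbers, executed with no intention at all, however defensible its outcome.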
As science fiction begins to turn into reality, Isaac Asimov's famous laws of robotics will likely enter the ethical discussion surrounding self-driving cars: