Are We There Yet?

May 15, 2019

Matthew Frazer

The knowledge of good and evil.

The desire for it led to the original rebellion. Now cut off from its source, we have struggled throughout recorded history to attain it by our own means. How much ink has been spilled trying to distinguish the good from the bad? And how much more blood?

Morality seems to be a distinctly human concern. The animal kingdom does not appear to worry about the nature of good and evil in the same way. But we stand at a point in history where humans are on the verge of creating a new type of mind—one made not from cells and organs, but from silicon and algorithms. Present attempts at Artificial Intelligence (AI) are fairly limited: we have created systems that are highly expert in extremely specific domains,[i] and even managed a few systems that have broad but shallow general capabilities.[ii] But then, we are just getting started; about 1800 years passed between the first experiments with steam power and the Flying Scotsman.

AI research is driven by a desire to understand our own intelligence sufficiently to reproduce it. We lack the technology to build an organic brain from scratch, but we can simulate one using a computer. As the available hardware and software grow in power and expressiveness, the results edge closer to something that may pass the venerable Turing Test.[iii] This test skirts the difficult issue of defining what constitutes intelligence by applying the duck test: if it looks like a duck, walks like a duck, and quacks like a duck, it’s probably a duck. Or an intelligence, in this case.

Devotees of computer science and fans of Douglas Hofstadter’s Gödel, Escher, Bach[iv] have been toying with the ideas around AI for decades. But now the rubber has hit the road. Literally. Self-driving cars are no longer reserved for futuristic sci-fi movies. They’re here, right now, on a street near you… well, probably not near you unless you live in certain parts of California. But unless you are already in your twilight years, you will likely see one, or ride in one, in your lifetime. There may be children already born who will never need to learn to drive a car.

The classic trolley problem from philosophy classes is no longer a gruesome hypothetical; it’s a gruesome reality. How should an autonomous car behave when given the choice between two abhorrent outcomes? Should it hit the obstacle that has suddenly appeared and certainly kill the car’s occupant? Or swerve to avoid the obstacle and run down two pedestrians in the process? Should it matter that the pedestrians are elderly? Should it matter that the occupant of the car is pregnant?

Most modern AIs are not programmed with explicit rules for these (or any) situations, but rather are constructed as ‘learning machines’[v] that can extrapolate ‘correct’ behaviour from a set of training data taken as gospel truth. Researchers at MIT have constructed a website called Moral Machine[vi] that asks visitors to make judgements on a number of different scenarios to ascertain what society (or at least the portion that uses the site) thinks is the ‘moral’ response. This information may help develop these technologies or inform legal frameworks to deal with questions of liability.
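To make that idea concrete, here is a minimal sketch, assuming Python with scikit-learn installed, and using a simple decision tree as a stand-in for the neural networks mentioned in the note. The scenario encoding, the labels, and the ‘judgements’ below are invented for illustration; they are not drawn from any real vehicle’s software or from Moral Machine’s data.

# A toy sketch only: the features, labels, and 'judgements' are invented.
from sklearn.tree import DecisionTreeClassifier

# Each hypothetical scenario: [occupants at risk, pedestrians at risk,
# swerving possible (1 = yes, 0 = no)].
scenarios = [
    [1, 0, 1],  # obstacle ahead, no pedestrians, room to swerve
    [1, 2, 1],  # swerving would endanger two pedestrians
    [1, 1, 0],  # no room to swerve at all
    [2, 1, 1],  # two occupants, one pedestrian in the swerve path
]
# Crowd-sourced 'moral' answers (invented): 1 = swerve, 0 = stay course.
judgements = [1, 0, 0, 0]

# The system is never given a rule such as 'protect pedestrians';
# it only fits whatever pattern exists in the labelled examples.
model = DecisionTreeClassifier().fit(scenarios, judgements)

# Faced with a scenario it has never seen, it extrapolates. Whatever
# the training data implied is now, in effect, its 'morality'.
print(model.predict([[1, 3, 1]]))  # e.g. [0]: stay course

Nothing in the sketch says ‘protect pedestrians’; the preference emerges entirely from the labelled examples, which is exactly why the quality and provenance of the training data matter so much.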

The need for conversation around morality and its source has never been greater, especially as we start to hand over moral decisions to our creations. The applications are far broader than personal transport. We may not have a solution to the trolley problem, but if the next generation grows to believe that the source of morality is ‘the machine’, in much the same way that some of our present generation accept as truth whatever tops their Facebook feed, we may find ourselves even further from the garden than before.


Matthew Frazer graduated from UNSW in Engineering and Science. After a time programming machines, he switched to the more challenging task of programming teenagers, and now teaches high school Mathematics and Computing in Tamworth.


[i] ‘I tried to fool Google Duplex, Google’s mind-blowing AI booking assistant’, TechRadar, https://www.techradar.com/au/news/i-tried-to-fool-google-duplex-googles-mind-blowing-ai-booking-assistant

[ii] Google Assistant, Siri, and Alexa are all amazing when they work, and hilariously inept when they don't.

[iii] ‘Turing test’, Wikipedia, https://en.wikipedia.org/wiki/Turing_test

[iv] Douglas R. Hofstadter, Gödel, Escher, Bach (Basic Books, 1979).

[v] ‘Artificial neural network’, Wikipedia, https://en.wikipedia.org/wiki/Artificial_neural_network

[vi] Moral Machine, MIT, http://moralmachine.mit.edu


