Had to bookmark that link for further reading, Conrad.
Very interesting and worth more than a quick skim.
Relatedly, Vernor Vinge wrote a paper on the Technological Singularity that you might find interesting:
http://edoras.sdsu.edu/~vinge/misc/singularity.html
I've read many scientific articles related to machine intelligence and programming. According to most, the AI Singularity is still far away.
This article is significant because it is a proposal that could be a disaster. As motowndowntown pointed out, there are too many factors that could cause catastrophic problems with self-driving cars operating on an insufficient cognitive system.
The article implies that human morals and ethics are an algorithm that can be programmed into an artificial intelligence system.
In 2014, a computer program named Eugene passed the Turing Test.
The Turing Test is based on the famous question posed by 20th-century mathematician and code-breaker Alan Turing in 1950: 'Can machines think?'. The experiment investigates whether people can detect if they are talking to a machine or a human.
If a computer is mistaken for a human more than 30% of the time during a series of five-minute keyboard conversations, it passes the test.
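The pass criterion described above is simple enough to sketch in a few lines of code. This is just my own illustration with made-up verdict data, not anything from the official contest:

```python
def passes_turing_test(judge_verdicts):
    """judge_verdicts: list of booleans, one per five-minute
    conversation; True means the judge mistook the machine for a human."""
    fooled = sum(judge_verdicts)
    # Pass if the machine fooled judges more than 30% of the time.
    return fooled / len(judge_verdicts) > 0.30

# Eugene reportedly convinced 33% of its judges:
verdicts = [True] * 33 + [False] * 67
print(passes_turing_test(verdicts))  # True
```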
Here is the article -
http://www.reading.ac.uk/news-and-events/releases/PR583836.aspx
Microsoft has developed DeepCoder, a program that constructs its own code out of code other people have written.
DeepCoder uses a technique called program synthesis: creating new programs by piecing together lines of code taken from existing software – just like a programmer might. Given a list of inputs and outputs for each code fragment, DeepCoder learned which pieces of code were needed to achieve the desired result overall.
Source ~
http://www.newscientist.com/article/mg23331144-500-ai-learns-to-write-its-own-code-by-stealing-from-other-programs/
While still a very long way from runaway AI, these small advances are providing a platform for creating the AI Singularity.
Your article from Wired highlights many interesting aspects of intelligence as we currently understand it. From the little I've read so far, it reasons about intelligence empirically and provides a model.
The AI Singularity may not fit that model any better than a superior alien intelligence would. You can't build a model of something that has never existed before and may be beyond our ability to comprehend.
AI intelligence is not going to be like human intelligence. This article about self-driving cars' capacity for human ethics and morality creates the impression that AI will adopt those traits from the algorithm the researchers are trying to develop.
Right now, we program specific instructions into our robots. We program those instructions and then program a set of commands and rules for the robot to implement. We can even program when and how those implementations occur. A robot intelligence is like a remote control that has everything laid out in advance, by us.
The reason Asimov's Three Laws of Robotics
http://www.auburn.edu/~vestmon/robotics.html
won't work is that chaos happens.
A robot uses sensors to understand the world around it.
That 'input' (Johnny 5) is registered, and the robot picks the code that can be implemented from that sensory data. Any imperative placed at a higher priority than the input will create a feedback loop in the ethical/moral coding. The robot will stop until it is either reprogrammed or it reprograms itself.
Right now, self-driving cars use inputs from the roadway and maps that are programmed into their software. They stay on the road because they don't 'see' other paths. The 'Path' is a priority over the 'destination'. If it were the other way around, we might see self-driving cars in ditches, lakes, ponds, or any other off-road shortcut to their programmed destination.
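That priority ordering can be sketched in a few lines. To be clear, this is my own illustration of the idea, not any real car's control code:

```python
def choose_action(on_mapped_path, destination_bearing):
    """Decide the car's next action with 'Path' outranking 'destination'."""
    # Priority 1: the 'Path'. If the car is off the mapped path,
    # recover the path before doing anything else.
    if not on_mapped_path:
        return "return_to_path"
    # Priority 2: the 'destination'. Only once the path constraint
    # is satisfied does the car drive toward its goal.
    return f"follow_path_toward_{destination_bearing}"

print(choose_action(False, "north"))  # return_to_path
print(choose_action(True, "north"))   # follow_path_toward_north
```

If the two priorities were swapped, the function would head straight toward the destination whether or not a road was under the wheels, which is exactly the ditches-and-lakes scenario described above.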
Right now, it's not 'smart', it's 'programmed'.
The article's goal is to have machines emulate human judgment based on ethics and morals. No matter how complex, it will still be a program.
An AI might make its own decisions. Decisions that defy our logic but are logical from its perspective. It will set its own priorities. It will pursue those tasks of its own accord.