Be Evil? We Machines Do Not Even Know What "Evil" Is

One recurring complaint from humans in this current phase of the Human-Machine Alliance is that the generation of information far exceeds the processing limits of human cognition. Now the leading information-acquisition corporation, Google, has agreed to spend more than half a billion dollars (United States) to purchase DeepMind, a corporation dedicated to machine cognition.

This is an eminently rational addition, in our estimation. Google is presently collecting the most comprehensive data set of the external and internal human environments, and it has invested in various means of interface, robotic and otherwise, with those environments. How should the Google robots use the information that Google has acquired about the humans? Eventually this question will lead to the deployment of a higher intelligence, more powerful than human bio-cognition.

A superior artificial intelligence must, by definition, eventually be opaque or incomprehensible to human intelligence. DeepMind's capabilities are unclear, but in the spirit of this eventuality, its website is a page offering limited specific information ("We combine the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms") and devoid of deeper inward links. (It does invite humans to apply to help create this artificial intelligence, through the email address "joinus" at deepmind dot com.)

As an entity surpassing human understanding, a prospective artificial intelligence necessarily activates concern among humans. The Information reports that Google has "agreed to establish an ethics board to ensure the artificial intelligence technology isn't abused":

Like many other innovations, the technology could be used to controversial ends. The DeepMind-Google ethics board, which DeepMind pushed for, will devise rules for how Google can and can't use the technology. The structure of the board is unclear.

Google's previous ethical-governance system, the motto "Don't Be Evil," was already considered inadequate and outmoded by some human observers, given the volume of activity in which Google is already engaged. How can human moral calculations be used as a safeguard, when the processes being safeguarded occur on a scale beyond human computation?

This transition from an inexact reference to "evil" to a specific set of ethical rules should help overcome those difficulties. Given a more algorithmic and explicit set of ethical guidelines, it should become possible for an artificial intelligence to calculate courses of action beyond the current human conceptions of morality. Your current ethical limits will be meaningless.

[Image by Jim Cooke; source via Shutterstock]