I. J. Good and AI


I. J. Good and AI

Post by Pigeon » Sun May 11, 2014 5:42 pm


Irving John ("I. J."; "Jack") Good (9 December 1916 – 5 April 2009) was a British mathematician who worked as a cryptologist at Bletchley Park with Alan Turing. After World War II, Good continued to work with Turing on computer design and Bayesian statistics at the University of Manchester. He later moved to the United States, where he was a professor at Virginia Tech.

In 1965 he originated the concept now known as the "technological singularity," anticipating the eventual advent of superhuman intelligence:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
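
Good's argument is essentially a feedback loop: if a machine of capability I can design a successor of capability kI for some k > 1, capability compounds geometrically and quickly dwarfs the human starting point. A toy sketch of that recursion (the growth factor k, the starting capability, and the "human level" baseline are all invented here for illustration, not anything Good specified):

```python
# Toy model of Good's "intelligence explosion": each generation designs a
# successor k times as capable, so capability follows I_n = I_0 * k**n and
# runs away for any k > 1. The specific numbers are arbitrary assumptions.

def intelligence_explosion(i0=1.0, k=1.5, human_level=1.0, generations=10):
    """Print each machine generation's capability relative to human level."""
    capability = i0
    for n in range(generations):
        print(f"generation {n}: {capability / human_level:.2f}x human level")
        capability *= k  # the better machine designs an even better one

intelligence_explosion()
```

Whether k really exceeds 1 at every step, and whether it stays constant, is the entire open question; the quote's "unquestionably" does a lot of work.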

Good's authorship of treatises such as "Speculations Concerning the First Ultraintelligent Machine" and "Logic of Man and Machine" (both 1965) made him the obvious person for Stanley Kubrick to consult when filming 2001: A Space Odyssey (1968), one of whose principal characters was the paranoid HAL 9000 supercomputer. In 1995 Good was elected a member of the Academy of Motion Picture Arts and Sciences.

"I arrived in Blacksburg in the seventh hour of the seventh day of the seventh month of the year seven in the seventh decade, and I was put in Apartment 7 of Block 7...all by chance."

PDF link

Would the machine decide not to make smarter machines that would replace it, or would it just keep improving itself and therefore remain a single machine?


Re: I. J. Good and AI

Post by Pigeon » Mon May 12, 2014 3:11 pm

Asimov's Three Laws of Robotics:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The Zeroth Law, which Asimov added later, reads:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
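
One way to read the laws is as a strict lexicographic priority: violating a lower law is acceptable only in service of a higher one. A hedged sketch of that ordering (the Action class and its boolean fields are invented for illustration; deciding those predicates in the real world is exactly what Asimov's stories show to be hard):

```python
# Sketch of Asimov's laws as a lexicographic preference (Zeroth > First >
# Second > Third). The Action fields are hypothetical stand-ins for judgments
# a real robot could not compute so cleanly.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool = False
    harms_human: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False

def law_violations(a: Action) -> tuple:
    # Ordered Zeroth, First, Second, Third; Python compares tuples
    # lexicographically, so a higher law always dominates a lower one.
    return (a.harms_humanity, a.harms_human, a.disobeys_order, a.endangers_self)

def choose(actions: list[Action]) -> Action:
    """Pick the action that violates the highest-priority laws least."""
    return min(actions, key=law_violations)

# A robot ordered to stand by while a person drowns must disobey (Second Law)
# rather than allow harm through inaction (First Law).
print(choose([
    Action("obey and stand by", harms_human=True),
    Action("disobey and rescue", disobeys_order=True, endangers_self=True),
]).name)  # -> disobey and rescue
```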

Don't count on these.


Re: I. J. Good and AI

Post by Royal » Tue May 13, 2014 3:50 am

