Author Topic: Cambridge University opens "Terminator studies" to study threats posed by robots  (Read 1362 times)


Offline briann

  • Silver Star JTF Member
  • ********
  • Posts: 8038
  • Mmmm HMMMMM
http://www.foxnews.com/tech/2012/11/26/terminator-center-to-open-at-cambridge-university/?intcmp=features

Cambridge University is to open a center for "Terminator studies" where top scientists will study threats posed to humanity by robots.
The Centre for the Study of Existential Risk is being co-launched by the Astronomer Royal, Lord Rees, one of the world’s leading cosmologists. It will probe the “four greatest threats” to the human species, given as: artificial intelligence, climate change, nuclear war and rogue biotechnology.
Arnold Schwarzenegger’s classic "Terminator" films famously showed a world where ultra-intelligent machines fight against humanity in the form of the genocidal Skynet system.
The Cambridge center is intended to bring together academics from various disciplines including philosophy, astronomy, biology, robotics, neuroscience and economics.
Lord Rees, who has warned that humanity could wipe itself out by 2100, is launching the center alongside Cambridge philosophy professor Huw Price, and Skype co-founder Jaan Tallinn.
“We have machines that have trumped human performance in chess, flying, driving, financial trading and face, speech and handwriting recognition," Professor Price said. “The concern is that by creating artificially intelligent machines we risk yielding control over the planet to intelligences that are simply indifferent to us and to things we consider valuable.”
“There’s a mismatch between public perception of very different risks and their actual seriousness," Rees added. “We fret unduly about carcinogens in food, train crashes and low-level radiation.
“But we are in denial about ‘low-probability high-consequence’ events that should concern us more and which, in our ever more interconnected world, could have global consequences.”



Offline Zelhar

  • Honorable Winged Member
  • Gold Star JTF Member
  • *
  • Posts: 10689
This is just another attempt to deny Israel and western armies of one of their most effective weapons against quranimals.

Offline syyuge

  • Silver Star JTF Member
  • ********
  • Posts: 7684
Cambridge University is EuroCommunist.
There are thunders and sparks in the skies, because Faraday invented the electricity.

Offline muman613

  • Platinum JTF Member
  • **********
  • Posts: 29958
  • All souls praise Hashem, Hallelukah!
    • muman613 Torah Wisdom
I do believe that AI will eventually be a real threat to human life on Earth. Because of the lack of morality in humanity, we are developing machines which can perform specific tasks more efficiently than humans. One of my first interests in college was artificial intelligence, and one of my science projects in high school was an actual robot with computer intelligence which could be programmed to build houses (out of 'Lincoln Logs'). I actually met the 'father of AI', Marvin Minsky, in my high-school years, and the field seemed very interesting to me. In the end I did not choose AI as my major and ended up with just the CS and EE degrees. One reason I did not go into AI was exactly this: I believe that machines may one day become malicious in their treatment of humanity.

I would not pooh-pooh the scenario they are looking into. I am not a fan of Cambridge, but I think that people should be thinking about protocols along the lines of Asimov's Three Laws of Robotics (a toy sketch of the idea follows the quote below):


http://en.wikipedia.org/wiki/Three_Laws_of_Robotics

Quote
The Three Laws of Robotics (often shortened to The Three Laws or Three Laws) are a set of rules devised by the science fiction author Isaac Asimov and later added to. The rules were introduced in his 1942 short story "Runaround", although they had been foreshadowed in a few earlier stories. The Three Laws are:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
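
Just to make the 'protocol' idea concrete, here is a minimal toy sketch in Python of how the three laws could be encoded as a prioritized veto over a proposed action. This is entirely my own illustration, not anything from the Cambridge centre or from any real robotics system; the predicate names (harms_human, disobeys_order, endangers_self) are made up, and it ignores the harder cases where a higher law forces a robot to break a lower one.

Code:
# Toy sketch: Asimov's Three Laws as a prioritized veto over candidate actions.
# The laws are checked in order, so Law 1 always dominates Law 2, and Law 2
# dominates Law 3. Hypothetical predicate keys; not a real robotics API.

def permitted(action):
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False                      # First Law violated
    if action.get("disobeys_order"):
        return False                      # Second Law violated
    if action.get("endangers_self"):
        return False                      # Third Law violated
    return True

print(permitted({"disobeys_order": True}))   # False - breaks the Second Law
print(permitted({"endangers_self": True}))   # False - breaks the Third Law
print(permitted({}))                         # True  - no law violated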

You shall make yourself the Festival of Sukkoth for seven days, when you gather in [the produce] from your threshing floor and your vat. And you shall rejoice in your Festival-you, and your son, and your daughter, and your manservant, and your maidservant, and the Levite, and the stranger, and the orphan, and the widow, who are within your cities.
Deut. 16:13-14

Offline briann

  • Silver Star JTF Member
  • ********
  • Posts: 8038
  • Mmmm HMMMMM
I actually agree that there could someday be a threat... If a computer truly becomes self-aware and intelligent, how can we ever ensure that it will behave exactly the way we want it to?

Brian

Offline muman613

  • Platinum JTF Member
  • **********
  • Posts: 29958
  • All souls praise Hashem, Hallelukah!
    • muman613 Torah Wisdom
LKZ,

I think you're a little naive to believe that all technology will be used for the best. Since technology is a tool in the hand of man, there is a very real possibility that someone will enable these devices to act on their own 'will'. I do not believe that machines have 'free will' in the sense that a human has free will; a machine's 'will' does not operate on the same level as human free will, and thus a machine has no morality.

We should not grant machines the ability to do mass damage, because without human intervention it is possible for them to kill many innocent people.

I have witnessed so many advances in technology in my lifetime, and it is mind-boggling how quickly some of the limitations have been removed. In the past we were constrained by speed and memory limitations (the first computers I worked on ran at 1 MHz and had a total of 64 KB of memory)... Today I run computers at over 2.4 GHz with memory in the 4-16 GB range. There is every reason to believe that within the next ten years we will again double or triple these parameters. Multicore processors also allow parallel processing (one of the ideas I explored in my AI experiments), which lets various subsystems cooperate in managing the system.
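
Just to put rough numbers on that growth, here is a quick back-of-the-envelope calculation in Python using only the figures I mentioned above (the era labels in the comments are my own rough assumptions):

Code:
# Rough growth factors from the figures above; approximate, for illustration only.
old_clock_hz = 1e6             # ~1 MHz (the first machines I worked on)
new_clock_hz = 2.4e9           # ~2.4 GHz (a current desktop CPU)
old_mem_bytes = 64 * 1024      # 64 KB
new_mem_bytes = 16 * 1024**3   # 16 GB (upper end of the 4-16 GB range)

print(f"Clock speed: ~{new_clock_hz / old_clock_hz:,.0f}x faster")    # ~2,400x
print(f"Memory:      ~{new_mem_bytes / old_mem_bytes:,.0f}x larger")  # ~262,144x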

If a machine is granted the ability to be autonomous and can comprehend its 'existence', then we may face problems due to machines acting on their own.

This has nothing to do with the 'Hollywood' view of the topic. There is an entire field of study called roboethics which discusses these topics.

Also, I don't quite understand what you are arguing about the G-d-given mind versus the machine mind. Sure, man's mind is superior, but that doesn't mean that a man-made machine won't be able to massacre innocent humans. I have faith that man will survive in the end, but we should ensure that technology doesn't destroy our environment.

http://www.goertzel.org/dynapsyc/2002/AIMorality.htm

http://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence

http://selfawaresystems.com/2009/02/18/talk-on-ai-and-the-future-of-human-morality/
« Last Edit: November 26, 2012, 09:32:53 PM by muman613 »
You shall make yourself the Festival of Sukkoth for seven days, when you gather in [the produce] from your threshing floor and your vat. And you shall rejoice in your Festival-you, and your son, and your daughter, and your manservant, and your maidservant, and the Levite, and the stranger, and the orphan, and the widow, who are within your cities.
Deut. 16:13-14

Offline Dr. Dan

  • Forum Administrator
  • Gold Star JTF Member
  • *
  • Posts: 12593
Humans have free will, not robots. Therefore any danger to humanity comes from ourselves, if we don't keep it real.
If someone says something bad about you, say something nice about them. That way, both of you would be lying.

In your heart you know WE are right and in your guts you know THEY are nuts!

"Science without religion is lame; Religion without science is blind."  - Albert Einstein

Offline muman613

  • Platinum JTF Member
  • **********
  • Posts: 29958
  • All souls praise Hashem, Hallelukah!
    • muman613 Torah Wisdom
LKZ,

Have you ever investigated the technology called 'neural networks'? Through this kind of programming a computer is capable of learning new information and rearranging it to draw new conclusions. As the technology advances, the amount of 'intelligence' a machine is capable of increases. Neural networks and 'fuzzy logic' are basic systems which enable AI to fine-tune decision making. In these systems it is not just pure Boolean logic (as most computer software uses) but a more human-like method of reducing and classifying data with various weights (according to past experience) so that the conclusion fits the perceived situation. The advances in neural networks and fuzzy logic over the last decade permit applications such as robots walking on two legs, visual face recognition, and improved speech recognition. (Small toy sketches of both ideas follow the quotes below.)

See: http://en.wikipedia.org/wiki/Artificial_neural_network


Quote
An Artificial Neural Network, often just called a neural network, is a mathematical model inspired by biological neural networks. A neural network consists of an interconnected group of artificial neurons, and it processes information using a connectionist approach to computation. In most cases a neural network is an adaptive system that changes its structure during a learning phase. Neural networks are used to model complex relationships between inputs and outputs or to find patterns in data.
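
To make the "weights adjusted by experience" idea concrete, here is a minimal sketch in Python of a single artificial neuron learning the logical AND function with the classic perceptron update rule. It is purely my own illustration; real networks use many neurons, hidden layers, and gradient-based training.

Code:
# A single artificial neuron learning AND via the perceptron rule.
# The weights start at zero and are nudged whenever the output is wrong.

def step(x):
    return 1 if x >= 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # inputs -> AND

w = [0.0, 0.0]   # connection weights
b = 0.0          # bias
lr = 0.1         # learning rate

for epoch in range(20):
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out
        w[0] += lr * err * x1   # adjust each weight in proportion
        w[1] += lr * err * x2   # to its input and the error
        b += lr * err

for (x1, x2), target in data:
    print((x1, x2), "->", step(w[0] * x1 + w[1] * x2 + b), "expected", target)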



http://en.wikipedia.org/wiki/Fuzzy_logic
Quote
Fuzzy logic is a form of many-valued logic or probabilistic logic; it deals with reasoning that is approximate rather than fixed and exact. In contrast with traditional logic, where binary sets have two-valued logic (true or false), fuzzy logic variables may have a truth value that ranges in degree between 0 and 1. Fuzzy logic has been extended to handle the concept of partial truth, where the truth value may range between completely true and completely false.[1] Furthermore, when linguistic variables are used, these degrees may be managed by specific functions.
...

Degrees of truth

Fuzzy logic and probabilistic logic are mathematically similar – both have truth values ranging between 0 and 1 – but conceptually distinct, owing to different interpretations—see interpretations of probability theory. Fuzzy logic corresponds to "degrees of truth", while probabilistic logic corresponds to "probability, likelihood"; as these differ, fuzzy logic and probabilistic logic yield different models of the same real-world situations.
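
And here is an equally small illustration of degrees of truth, again just my own toy example, using the standard Zadeh fuzzy operators (AND as min, OR as max, NOT as 1 - x); the 'warm'/'humid' degrees are made-up membership values.

Code:
# Degrees of truth with the standard Zadeh fuzzy operators.
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

warm, humid = 0.7, 0.2   # made-up membership degrees for a room

print(f_and(warm, humid))          # 0.2 -> "warm AND humid" is mostly false
print(f_or(warm, humid))           # 0.7 -> "warm OR humid" is mostly true
print(f_and(warm, f_not(humid)))   # 0.7 -> "warm AND NOT humid" is mostly true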

You shall make yourself the Festival of Sukkoth for seven days, when you gather in [the produce] from your threshing floor and your vat. And you shall rejoice in your Festival-you, and your son, and your daughter, and your manservant, and your maidservant, and the Levite, and the stranger, and the orphan, and the widow, who are within your cities.
Deut. 16:13-14

Offline muman613

  • Platinum JTF Member
  • **********
  • Posts: 29958
  • All souls praise Hashem, Hallelukah!
    • muman613 Torah Wisdom
Quote
Not previously aware of the terms, but it has been described to me by more than one person. This does not indicate intelligence. It can ascertain better ways to carry out its functions, but it will never seek to alter its primary purpose, nor understand why it performs it, because it is designed for one goal. The input is beyond its choosing, and must lead to the output. A robot that will "learn" to avoid holes in terrain while carrying buckets of anything, or to fire weaponry in such a way as to cause greater damage from rebounding, is not in effect exceeding human ingenuity. It certainly has some abilities that humans do not, as with the horse and field mouse, but its "intellectual" growth is constrained by defined parameters, and is simply the equivalent of developing a faster method of doing long division, keeping the function and result the same, unless it is wrong.

As for walking robots like the Honda ASIMO, by far the most renowned, its intelligence is limited to, say, the intelligence that your thighs have. They can make an appropriate muscle formation based on your movement, and contract and relax based on stimuli, which they may even learn to do more efficiently, or less, depending on your actions, but they will not move from where they are, nor alter their function as thighs, nor try to kill you.

You are missing the forest for the trees here. I have explained that technology is moving quickly toward the time when these limitations of memory density and processing speed will be gone. I shared your view at one time (that true intellect is impossible for machines), but having witnessed great leaps in the technology, I no longer believe these limits will be a problem for artificial intelligence as we move forward.

I do believe studying the field of roboethics is important, so that we are able to move toward a time when machines will be more capable of acting with moral purpose.

You shall make yourself the Festival of Sukkoth for seven days, when you gather in [the produce] from your threshing floor and your vat. And you shall rejoice in your Festival-you, and your son, and your daughter, and your manservant, and your maidservant, and the Levite, and the stranger, and the orphan, and the widow, who are within your cities.
Deut. 16:13-14