
Posted on Nov 26, 2012 in Science, Technology

Cambridge will open a center to study the hazards of AI development


Under the name Centre for the Study of Existential Risk, Cambridge University plans to open a center that will study, among other topics, the risks that the evolution of artificial intelligence poses to humanity. It will be a space to consider whether, at some point in the future, technology may have the potential to threaten our very existence.

Something like the plot of “Terminator”, where leading academics will study the potential threat that robots pose to humanity. According to the Daily Mail, four subjects, identified as the greatest threats to the human species, will be studied: artificial intelligence, climate change, nuclear war, and the use of biotechnology.

The idea that machines could one day take over from humanity has appeared in many classics of science fiction; Terminator is probably the most famous film on the subject.

Much earlier, in 1965, John Good had written an article in New Scientist magazine titled Speculations Concerning the First Ultra-Intelligent Machine. Good was a mathematician and cryptographer at Cambridge University and a friend of the computer scientist Alan Turing. He wrote about a near future in which the first ultra-intelligent machine would be built, a machine that would be the last invention humanity would ever need to make, and that it would bring with it an “intelligence explosion”.

According to the scientist, who also advised Kubrick on the film 2001: A Space Odyssey:

Human survival depends on the construction of this machine.

Huw Price, professor of philosophy and one of the founders of the center, says that on reaching that point, AI would enter a dangerous phase that could bring serious consequences. In the professor's words:

We cannot know what kind of guarantees there would be. We must take seriously the possibility that there could be a “Pandora's box” moment with artificial intelligence that, if we miss it, could be disastrous. I mean, we cannot predict anything with certainty, and no one is currently able to, but that's the point. With so much at stake, we must do a better job of understanding the risks of potentially catastrophic technologies.

The basic philosophy is that we must take seriously the fact that we are reaching a point where our technologies have the potential to threaten our very existence in a way that has never before been possible.

The idea is that the center will explore all the possibilities that such a future holds. According to the researchers, there is no better place than the University of Cambridge, one of the world's oldest scientific universities, to weigh these issues. According to Price:

Cambridge has recently celebrated its 800th anniversary … our goal is to reduce the risk that we might not be here to celebrate its millennium.
