Do not pass off Deaf YouVideo content as your own. Give credit where it is due, and base articles on reliable published sources.

Artificial Intelligence May Kill Us All By 2050

VIDEO [CC] - Exclusive: We all may be dead in 30 years. Scientists are beginning to worry about AI and the danger it poses to mankind: US report.


US News - The human race could vanish in the blink of an eye within our lifetimes. Or we could just as plausibly see our species become immortal by the middle of the 21st century. That's the promise, and the threat, posed by the accelerating pace of research around Artificial Intelligence. It may be an either/or proposition.

To activate this feature, press the "CC" button.

Most people have heard about Ray Kurzweil's immensely hopeful view, captured in his books and lectures about the "singularity." In that view, AI progresses in helpful leaps, benefiting humankind at nearly every step of the way.

A good example of the early benefits of AI that nearly everyone uses now is Google Maps. The next stage of AI will be a supercomputer that recreates human intelligence. And the final evolution is artificial super-intelligence (ASI) that learns so quickly that it literally "soars" past ordinary human intelligence and solves every problem confronting mankind.

But there is a dark, threatening side to the AI story, and it is only now being discussed publicly. Physicist Stephen Hawking has said that the development of ASI "could spell the end of the human race."

Microsoft co-founder Bill Gates says he doesn't "understand why some people are not concerned" that an artificial super-intelligence by mid-century might save (or destroy) human civilization.

Billionaire entrepreneur Elon Musk fears that we are "summoning the demon" in our race to create an artificial super-intelligence.

What all of them agree on is that we may very well approach a "tripwire" sometime in the next 30 years, where a powerful supercomputer finally replicates the human brain and mind, and crosses over nearly instantly into super-intelligence.

And then what happens next is anyone's guess.

"While most scientists I've come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that used beneficially, ASI's abilities could be used to bring individual humans, and the species as a whole, to…species immortality," writes Tim Urban, the author of the popular "Wait But Why" blog.

Right now, drones use AI to navigate very complicated landscapes in order to deliver bombs in battlefield conditions. But they're still piloted remotely by human beings.

If a stealth bomber were developed that could fly itself (not unlike Google's self-driving cars, which also use AI) and then make decisions about where to drop bombs in battlefield conditions without human input, it would create a situation where AI, not humans, is in control.

Kurzweil believes we will hit this tripwire by 2045. Most of his scientific colleagues believe it is inevitable that we will hit it at some point in the 21st century. Many of them are fearful of what happens when we cross it.

But why are they all so afraid of ASI? It's a good question - one that hasn't truly been explored all that much beyond a few boardrooms.

Much of what the public knows about the potential risks posed by AI applications comes either from science fiction movies such as "The Terminator" or from fears surrounding autonomous weapons with AI capabilities to target without human control. These are very real fears.


The truth is that AI is poised to do significant, irreparable harm right now, not just at some point in the future through the creation of a non-human super-intelligence, scientists have warned. AI combined with autonomous weapons could launch an era of indiscriminate killing the likes of which civilization has never seen before.

So far, there have been two revolutions in warfare. With each, humankind made a quantum leap in its ability to kill exponentially more people on the battlefield from a greater distance. We are on the cusp of a third revolution, engineered by AI. This one, though, may erase its inventors.

For centuries, if you wanted to kill someone, you had to do it at close range. Gunpowder gave us the ability to fire projectiles at enemies from a distance, and changed the concept of war for good. Soldiers could kill their enemy without seeing the result at close range.

Nuclear weapons created the second revolution in warfare. While few nuclear weapons have been put to use, their invention taught us that we could create very large weapons, launch them from an even greater distance, and kill many people on the battlefield all at once. War hasn't been the same since then.


But it is the third revolution in warfare - autonomous weapons that can largely think for themselves and target enemies on the battlefield without human intervention - that we should all be worried about. Once such weapons are created, there may be no turning back.

"Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group," Musk, Hawking and others wrote in an open letter in July 2015. "Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control."

World leaders, to date, have ignored the scientists on the threats AI poses to our very existence, much as they've ignored, or moved slowly on, other existential threats like climate change and nuclear proliferation.

But AI is different. Once a super-intelligent, big-data-crunching AI machine learns how to think and learn for itself, it may decide that carbon life forms are the obvious target in any threat scenario. At that point, it won't care what world leaders think.

So whether it happens now or later this century, it's time we took AI seriously (or at least understood Isaac Asimov's first law of robotics). Our lives most likely depend on it. If we are not careful, artificial intelligence could kill us all within 30 years.

Source: Copyright 2015 U.S. News & World Report

About This Site

Deaf YouVideo is a public website, free for everyone. A public website is a site you can use to have a presence on the internet; it is a public-facing site that attracts customers and partners and usually increases traffic. Feel free to explore the online community - Deaf, Hearing Impairment, Hearing Loss, Sign Language, News, Events, Societies, Resources, Links, Videos, Vloggers and much more. Be sure to bookmark this website.

Submitted content: for any concerns about material posted on this site, see YouPrivacy.


Videos and Channels Powered By YouTube

RSS Feed Content

Deaf YouVideo content is provided via YouTube, Blogger, Google FeedBurner, and RSS feeds, which give websites large and small a way to distribute their content well beyond visitors using browsers. The feed icon permits subscription to regular updates, delivered automatically via a web portal, news reader, vlog, or blog. Submitted content that is disabled by request will be immediately removed from Deaf YouVideo. If content appears as "error, blank, or feed not supported," click Home or refresh your browser.

Powered by FeedBurner

Copyright © 2017 Deaf YouVideo All Rights Reserved.
Deaf YouVideo. Powered by Blogger.