Thursday, December 31, 2015

MIT Technology Review's 2015 Annual Summary On Developments In Robots And AI

Will Knight, MIT Technology Review: What Robots and AI Learned in 2015

It was the year that self-driving cars became a commercial reality, robots gained all sorts of new abilities, and some people worried about the existential threat posed by super-intelligent future AI.

The robots didn’t really take over in 2015, but at times it felt as if that might be where we’re headed.

There were signs that machines will soon take over manual work that currently requires human skill. Early in the year details emerged of a contest organized by Amazon to help robots do more work inside its vast product fulfillment centers.

The Amazon Picking Challenge, as the event was called, was held at a prominent robotics conference later in the year. Teams competed for a $25,000 prize by designing a robot to identify and grasp items from one of Amazon’s storage shelves as quickly as possible (the winner picked and packed 10 items in 20 minutes). This might seem a trivial task for human workers, but figuring out how to grasp different objects arranged haphazardly on shelves in a real warehouse is still a formidable challenge for robot-kind.

Read more ....

CSN Editor: A concise summary of the year.

Wednesday, December 30, 2015

Will The Development Of Artificial Intelligence Doom Mankind?

Raffi Khatchadourian, New Yorker: The Doomsday Invention

Will artificial intelligence bring us utopia or destruction?

I. OMENS

Last year, a curious nonfiction book became a Times best-seller: a dense meditation on artificial intelligence by the philosopher Nick Bostrom, who holds an appointment at Oxford. Titled “Superintelligence: Paths, Dangers, Strategies,” it argues that true artificial intelligence, if it is realized, might pose a danger that exceeds every previous threat from technology—even nuclear weapons—and that if its development is not managed carefully humanity risks engineering its own extinction. Central to this concern is the prospect of an “intelligence explosion,” a speculative event in which an A.I. gains the ability to improve itself, and in short order exceeds the intellectual potential of the human brain by many orders of magnitude.

Such a system would effectively be a new kind of life, and Bostrom’s fears, in their simplest form, are evolutionary: that humanity will unexpectedly become outmatched by a smarter competitor. He sometimes notes, as a point of comparison, the trajectories of people and gorillas: both primates, but with one species dominating the planet and the other at the edge of annihilation. “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” he concludes. “We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”

Read more ....

Update: Some scientists fear superintelligent machines could pose a threat to humanity (Washington Post)

ZC Editor: A thought-provoking article on the implications of developing AI platforms. I still believe that mankind .... because of its survival instincts .... will never put itself in a position where it may be destroyed. Still .... one has to wonder, and it is on this issue that this New Yorker post is a must-read.

Tuesday, December 29, 2015

Will Future Wars Be Waged Using Quantum Computers?

Zoe Hawkins, National Interest: How to Fight a War With a Quantum Computer

The Australian Government recently announced plans to invest $26 million in the development of quantum computing technology as part of the National Innovation and Science Agenda (NISA). Prime Minister Turnbull has argued that NISA is part of a new “ideas boom” designed to “create a modern, dynamic and 21st century economy for Australia.” It emphasizes quantum computing as an important area for government investment based on its ability to produce “jobs and economic growth.” And while this industry could certainly be “worth billions,” it offers much more than financial prosperity: quantum technologies could play a significant role in our future defense and security.

Quantum technology harnesses the obscure properties of subatomic matter to achieve computing processes unobtainable with classical computers. Today’s computers run on binary digits, or bits, which exist as either 1s or 0s. In contrast, quantum bits, or qubits, exploit the bizarre principle of superposition that enables them to occupy all possible states (both 1 and 0) at the same time. This allows quantum computers to undertake multiple calculations in parallel, unlocking unprecedented processing power that could “solve problems that would take conventional computers centuries.”
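The superposition idea described above can be illustrated in a few lines of NumPy. This is only a toy sketch of the textbook math, not a real quantum computer: a qubit's state is a pair of amplitudes, and applying a Hadamard gate to the |0⟩ state yields an equal superposition whose measurement probabilities split 50/50.

```python
import numpy as np

# A classical bit is either 0 or 1; a qubit's state is a vector of two
# amplitudes whose squared magnitudes give the measurement probabilities.
zero = np.array([1.0, 0.0])  # the |0> basis state

# The Hadamard gate maps a basis state to an equal superposition.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

superposed = H @ zero  # amplitudes (1/sqrt(2), 1/sqrt(2))

# Probability of measuring 0 or 1 is the squared magnitude of each amplitude.
probs = np.abs(superposed) ** 2
print(probs)  # [0.5 0.5]
```

Simulating n qubits this way requires tracking 2^n amplitudes, which is exactly why classical machines cannot keep up with large quantum systems.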

ZC Editor: This BBC report explains why the Pentagon and the NSA want quantum computers .... Why Google and the Pentagon want 'quantum computers' (BBC).

Monday, December 28, 2015

These Entrepreneurs Want To Share Their Future AI Discoveries With The World

Photo: Elon Musk (Nathaniel Wood for Wired)

Wired: Elon Musk’s Billion-Dollar AI Plan Is About Far More Than Saving the World

Elon Musk and Sam Altman worry that artificial intelligence will take over the world. So the two entrepreneurs are creating a billion-dollar not-for-profit company that will maximize the power of AI—and then share it with anyone who wants it.

At least, this is the message that Musk, the founder of electric car company Tesla Motors, and Altman, the president of startup incubator Y Combinator, delivered in announcing their new endeavor, an unprecedented outfit called OpenAI. In an interview with Steven Levy of Backchannel, timed to the company’s launch, Altman said they expect this decades-long project to surpass human intelligence. But they believe that any risks will be mitigated because the technology will be “usable by everyone instead of usable by, say, just Google.”

Read more ....

ZC Editor: We are still a long way from any meaningful discoveries.

Sunday, December 27, 2015

Teaching Computers To Learn Like Humans

ABC News Australia: Scientists teach computers how to learn like humans

For artificial intelligence and smart machines to really take off, computers are going to have to think more like people, according to experts in the field.

Now, US scientists have created a computer model, or algorithm, that captures the unique human ability to grasp new concepts in a study that involved learning unfamiliar handwritten alphabet characters.

The algorithm enabled computers to recognise and draw simple symbols that were indistinguishable from those created by humans.

The study, reported in the journal Science, is a "significant advance" in the field of artificial intelligence, the scientists said.
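The setup the excerpt describes, learning a new concept from a single example, can be sketched with a toy baseline. This is emphatically not the Bayesian program-learning method from the Science paper; it is just a minimal nearest-neighbor classifier, with made-up 3x3 "characters", that illustrates what one-example learning asks of an algorithm.

```python
import numpy as np

def one_shot_classify(query, exemplars):
    """Label a query image by its nearest exemplar in pixel space.

    exemplars: dict mapping a class label to a single example array,
    mimicking the one-example-per-new-character setting.
    """
    distances = {label: np.linalg.norm(query - example)
                 for label, example in exemplars.items()}
    return min(distances, key=distances.get)

# Tiny invented 3x3 "characters": a vertical bar and a horizontal bar.
bar_v = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=float)
bar_h = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], dtype=float)
exemplars = {"vertical": bar_v, "horizontal": bar_h}

# A noisy vertical bar should still match the vertical exemplar.
noisy = bar_v + 0.1
print(one_shot_classify(noisy, exemplars))  # vertical
```

Raw pixel distance falls apart on real handwriting, which is why the study's richer, generative model of how characters are drawn was considered such an advance.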

Read more ....

ZC Editor: If these developments continue, I guess it is only a matter of time before the student (AI machines) surpasses the teacher (us).