Saturday, January 2, 2016

A Robot That Looks And Acts Human

Prof Nadia Thalmann (left) posing beside Nadine, a life-like social robot capable of autonomously expressing emotions and gestures. Credit: Image courtesy of Nanyang Technological University

Say hello to Nadine, a "receptionist" at Nanyang Technological University (NTU Singapore). She is friendly, and will greet you back. Next time you meet her, she will remember your name and your previous conversation with her.

She looks almost like a human being, with soft skin and flowing brunette hair. She smiles when greeting you, looks you in the eye when talking, and can even shake hands with you. And yet she is a humanoid robot.

Unlike conventional robots, Nadine has her own personality, mood and emotions. She can be happy or sad, depending on the conversation. She also has a good memory: she can recognise the people she has met and remember what each person said before.

Read more ....

Editor: This is the first generation .... one can only imagine what "she" will look like 10 years from now.

Friday, January 1, 2016

U.S. Marine Corps Cancels Robotic Dog/Mule Project

Washington Times: Marines shelve Google’s $42M robot ‘AlphaDogs’ because they are too noisy

Marines will not be using robotic mules called “AlphaDogs” any time soon because the machines were deemed too noisy to be useful in a combat situation.

Boston Dynamics, the robotics company owned by Google’s parent Alphabet, has been working with DARPA, the Pentagon’s research division, to build the robots designed to carry supplies for troops.

The AlphaDog robots, also called the Legged Squad Support System (LS3), were designed to carry up to 400 pounds and provide much-needed relief for soldiers, Military.com reported.

Read more ....

ZC Editor: I concur .... it was too noisy.

More News On The U.S. Marine Corps Cancelling The Robotic Dog/Mule Project

Marine Corps Shelves Futuristic Robo-Mule Due to Noise Concerns -- Military.com
U.S. Marine Corps rules out robotic dog, mule -- UPI
Robot Mule Put Out to Pasture by Marine Corps -- NBC
Noisy Robotic Mule Sidelined by Marines -- VOA
Hee-Haw! DARPA’s noisy robotic ‘pack mule’ mothballed over fears it could spill the beans -- RT
The Marines Are Shelving Their Robot Dogs -- Inverse
The US Marines won’t be using Google’s robot dogs because they fail a pretty basic test -- Quartz
US military says robotic pack mules are too noisy to use -- The Verge
US Marine Corps doesn’t want Google’s ‘Alpha Dog’ LS3 robots because they’re too loud -- Digital Trends
AlphaDog Robot Sidelined by the Marines Because It’s Way Too Noisy -- Epoch Times
Marines’ LS3 robotic mule is too loud for real-world combat -- Ars Technica
U.S. Military Puts Down Its Robot Mule for Being Too Loud -- Popular Mechanics

Thursday, December 31, 2015

MIT Technology Review's 2015 Annual Summary On Developments In Robots And AI

Will Knight, MIT Technology Review: What Robots and AI Learned in 2015

It was the year that self-driving cars became a commercial reality; robots gained all sorts of new abilities; and some people worried about the existential threat posed by super-intelligent future AI.

The robots didn’t really take over in 2015, but at times it felt as if that might be where we’re headed.

There were signs that machines will soon take over manual work that currently requires human skill. Early in the year details emerged of a contest organized by Amazon to help robots do more work inside its vast product fulfillment centers.

The Amazon Picking Challenge, as the event was called, was held at a prominent robotics conference later in the year. Teams competed for a $25,000 prize by designing a robot to identify and grasp items from one of Amazon’s storage shelves as quickly as possible (the winner picked and packed 10 items in 20 minutes). This might seem a trivial task for human workers, but figuring out how to grasp different objects arranged haphazardly on shelves in a real warehouse is still a formidable challenge for robot-kind.

Read more ....

CSN Editor: A concise summary of the year's developments.

Wednesday, December 30, 2015

Will The Development Of Artificial Intelligence Doom Mankind?

Raffi Khatchadourian, New Yorker: The Doomsday Invention

Will artificial intelligence bring us utopia or destruction?

I. OMENS

Last year, a curious nonfiction book became a Times best-seller: a dense meditation on artificial intelligence by the philosopher Nick Bostrom, who holds an appointment at Oxford. Titled “Superintelligence: Paths, Dangers, Strategies,” it argues that true artificial intelligence, if it is realized, might pose a danger that exceeds every previous threat from technology—even nuclear weapons—and that if its development is not managed carefully humanity risks engineering its own extinction. Central to this concern is the prospect of an “intelligence explosion,” a speculative event in which an A.I. gains the ability to improve itself, and in short order exceeds the intellectual potential of the human brain by many orders of magnitude.

Such a system would effectively be a new kind of life, and Bostrom’s fears, in their simplest form, are evolutionary: that humanity will unexpectedly become outmatched by a smarter competitor. He sometimes notes, as a point of comparison, the trajectories of people and gorillas: both primates, but with one species dominating the planet and the other at the edge of annihilation. “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” he concludes. “We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”

Read more ....

Update: Some scientists fear superintelligent machines could pose a threat to humanity (Washington Post)

ZC Editor: A thought-provoking article on the implications of developing AI platforms. I still believe that mankind .... because of its survival instincts .... will never put itself in a position where it may be destroyed. Still .... one has to wonder, and it is on this issue that this New Yorker piece is a must-read.

Tuesday, December 29, 2015

Will Future Wars Be Waged Using Quantum Computers?

Zoe Hawkins, National Interest: How to Fight a War With a Quantum Computer

The Australian Government recently announced plans to invest $26 million in the development of quantum computing technology as part of the National Innovation and Science Agenda (NISA). Prime Minister Turnbull has argued that NISA is part of a new “ideas boom” designed to “create a modern, dynamic and 21st century economy for Australia.” It emphasizes quantum computing as an important area for government investment based on its ability to produce “jobs and economic growth.” And while this industry could certainly be “worth billions,” it offers much more than financial prosperity: quantum technologies could play a significant role in our future defense and security.

Quantum technology harnesses the obscure properties of subatomic matter to achieve computing processes unobtainable with classical computers. Today’s computers run on binary digits, or bits, which exist as either 1s or 0s. In contrast, quantum bits, or qubits, exploit the bizarre principle of superposition that enables them to occupy all possible states (both 1 and 0) at the same time. This allows quantum computers to undertake multiple calculations in parallel, unlocking unprecedented processing power that could “solve problems that would take conventional computers centuries.”
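The superposition idea described above can be illustrated with ordinary complex arithmetic. The following toy single-qubit model (our own sketch, not from the article) shows how applying a Hadamard gate to a qubit in the |0> state yields an equal superposition, so a measurement reads 0 or 1 with equal probability:

```python
import math

# A single-qubit state is a pair of complex amplitudes (alpha, beta)
# for the basis states |0> and |1>; |alpha|^2 + |beta|^2 must equal 1.
ket0 = (1 + 0j, 0 + 0j)  # the qubit starts definitely in state |0>

def hadamard(state):
    """Apply the Hadamard gate, which sends |0> to the equal
    superposition (|0> + |1>) / sqrt(2)."""
    alpha, beta = state
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

superposed = hadamard(ket0)

# Born rule: the probability of each measurement outcome is the
# squared magnitude of the corresponding amplitude.
probs = [abs(a) ** 2 for a in superposed]
print(probs)  # approximately [0.5, 0.5] -- equal odds of 0 or 1
```

A real quantum computer entangles many such qubits, so an n-qubit register holds amplitudes for all 2^n basis states at once; this two-amplitude toy only shows the single-qubit case.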

ZC Editor: This BBC report explains why the Pentagon and the NSA want quantum computers .... Why Google and the Pentagon want 'quantum computers' (BBC).

Monday, December 28, 2015

These Entrepreneurs Want To Share Their Future AI Discoveries With The World

Elon Musk. Credit: Nathaniel Wood for Wired

Wired: Elon Musk’s Billion-Dollar AI Plan Is About Far More Than Saving the World

Elon Musk and Sam Altman worry that artificial intelligence will take over the world. So, the two entrepreneurs are creating a billion-dollar not-for-profit company that will maximize the power of AI—and then share it with anyone who wants it.

At least, this is the message that Musk, the founder of electric car company Tesla Motors, and Altman, the president of startup incubator Y Combinator, delivered in announcing their new endeavor, an unprecedented outfit called OpenAI. In an interview with Steven Levy of Backchannel, timed to the company’s launch, Altman said they expect this decades-long project to surpass human intelligence. But they believe that any risks will be mitigated because the technology will be “usable by everyone instead of usable by, say, just Google.”

Read more ....

ZC Editor: We are still a long way from any meaningful discoveries.

Sunday, December 27, 2015

Teaching Computers To Learn Like Humans

ABC News Australia: Scientists teach computers how to learn like humans

For artificial intelligence and smart machines to really take off, computers are going to have to think more like people, according to experts in the field.

Now, US scientists have created a computer model, or algorithm, that captures the unique human ability to grasp new concepts in a study that involved learning unfamiliar handwritten alphabet characters.

The algorithm enabled computers to recognise and draw simple symbols that were indistinguishable from those created by humans.

The study, reported in the journal Science, is a "significant advance" in the field of artificial intelligence, the scientists said.

Read more ....

ZC Editor: If these developments continue, I guess it is only a matter of time before the student (AI machines) surpasses the teacher (that is, us).