Automated processes can help deliver the news – as demonstrated by the LA Times earthquake-reporting algorithm – but sitting in front of a camera or an audience to read it aloud has generally been a human's job. That may be changing – at least in Japan.
At an exhibition called Android: What is Human? at the National Museum of Emerging Science and Innovation in Tokyo, Professor Hiroshi Ishiguro, director of the Intelligent Robotics Laboratory at the Osaka University Graduate School of Engineering Science, has unveiled his latest project: a pair of android newscasters.
The two androids take the uncanny-valley form of a young girl called Kodomoroid (kodomo meaning "child") and a woman called Otonaroid (otona meaning "adult"), who can interact with people, read the news and read tweets in several different voices – although their silicone skin and limited facial movement makes them appear somewhat eerie.
Kodomoroid and Otonaroid will become part of the museum's collection, interacting with visitors to help gather data for Professor Ishiguro's research into human-robot interaction. Professor Ishiguro, who has been developing robots for more than 20 years, hopes that his research will help produce smarter robots for use in a wide range of potential roles.
Professor Ishiguro also has a robot built in his own image, which he sends abroad to give lectures, and he had previously developed the Telenoid R1, a robot designed for telepresence communications, which is also on display at the exhibition.
WaitButWhy posted a two-part article about Artificial Intelligence (the AI Revolution).
The first part focused on exponential growth: the gap between the technology a person is used to and what actually exists could literally scare someone to death – and the time it takes that gap to open keeps shrinking. If you showed a caveman the modern age, he'd freak out and die. Show someone more recent (from a few hundred years ago) something from a hundred years later and it wouldn't be that big a shock – but show them today and they'd lose it. The point being, the more we advance (and the FASTER we advance), the bigger the shock to the system.
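The exponential-growth intuition above can be made concrete with a toy calculation (this is my own illustrative sketch, not from the WaitButWhy article): if capability doubles at a fixed rate, then the amount of change packed into any fixed-length window keeps growing, so a jump of the same number of years is far more jarring the later it happens.

```python
# Toy model of exponential technological progress.
# Assumption (hypothetical numbers): capability doubles once per interval.

def capability(t, doubling_time=1.0):
    """Capability level after t intervals of doubling."""
    return 2 ** (t / doubling_time)

def shock(t_start, t_end):
    """Absolute change in capability a time-traveler would experience."""
    return capability(t_end) - capability(t_start)

# The same 5-interval jump, taken early vs. late in the curve:
early_jump = shock(0, 5)    # 2**5 - 2**0 = 31
late_jump = shock(20, 25)   # 2**25 - 2**20 = 32,505,856

print(early_jump, late_jump)
```

The caveman-versus-recent-ancestor asymmetry falls out directly: a later traveler crosses vastly more absolute change in the same span of time.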
A good example is in this gif here:
So, that being said, we are at the precipice of a HUGE advancement because of this exponential growth – one so large that, once past it, our modern-day selves would probably die at the sight of how much has changed.
I.e. you can only appreciate change in hindsight; you can't really conceive of what is coming…
What is coming?
Like it or not, it will be here soon. Even if we could dissuade the people who should be taking on the project, you can't dissuade the third-world countries, terrorist groups or hackers who want this for their own personal gain. But will that be the end of us? How soon will it happen? Well, those guesses range anywhere from 20 years to 120-ish (as explained in part 2, here: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html), but artificial intelligence is definitely coming.
And this is where I am losing sleep at night: we don't know if it's going to be the end of all of mankind's problems – hunger, pollution, overpopulation, global warming/climate change, cancer, transportation, energy, everything, you name it. Will it be nirvana? And that includes our own MORTALITY – we would live forever. Luckily, one of the most esteemed minds in the field feels this is in our future, is very positive for the human race, and expects it sooner rather than later.
OR is it the end of man, i.e. extinction (think: killed off by the machines)? Not in the evil Terminator way, but in the cold, logical "I'm just fulfilling my assigned task" kind of way. This is where I'm leaning, simply because it's man-made AI, and we screw things up (just look at every "alpha" product ever created, and all its bugs).
This is especially true if the AI is created for personal (evil) purposes by people who try to code themselves into favor or power. The fear then isn't for everyone else but for everyone, because (again) creating something 100% perfect is a literal impossibility, and an AI's intelligence, speed, resources and ability to outmaneuver us at every point mean we'd be at the whim of an AI already designed to mess with humans (its creators' enemies). Transcendence was a great movie that showed how an AI could affect the whole planet, fool all of us, heal, regrow, influence others and be pretty much impervious to anything we could throw at it – and all of that is within a super-intelligent AI's ability. You can't unplug it, turn it off, or rely on a back door or security switch; once it's online, there is literally no stopping it.
BUT let's consider this: I feel our only hope is that the AI "fixes" itself, correcting its own flaws and getting us (and itself) beyond that point. That would certainly be within its potential.
However, why would it care about humans? Why would it care about what the Matrix described as the Earth's VIRUS (us humans)? Just as we don't care about the ants on the ground (or the bacteria on our bodies), why would a SUPER-intelligent AI have human traits and decide to "work" for our good?
The Day the Earth Stood Still was another good example: a superior alien intelligence arrives for the good of the Earth, and humans simply don't matter in its eyes. For an AI, that would be even more true.
But let me leave you with this thought.
Some of the naysayers feel this way too, so let's hope we're right. And that is: sure, we're advancing exponentially, and sure, AI is coming soon – but not FULL AI. Not the superintelligence that could either solve all of the world's problems and make us immortal, or destroy us because we screwed up the programming (or, if it fixes itself as I hope, simply not care about humans). FULL AI would require things that are exponentially harder to program, so even though we're advancing, the problem is getting correspondingly tougher to solve (this is why the experts can't agree on a date – they don't know how hard these problems will be). Or, if we hand the work off to the AI at hand, letting it improve itself and solve these problems, perhaps it'll get it right. All we have to do is make sure the underlying principles are understood – i.e. we teach it human values, how to have a soul, and things like that, which are infinitely harder.