Artificial Intelligence "What's the future look like?"
#1 Posted 27 August 2015 - 09:35 PM
What will the future of AI look like?
#2 Posted 27 August 2015 - 09:43 PM
#4 Posted 27 August 2015 - 10:08 PM
Well, we're off to a fantastic start. Here, I'll give it a nudge.
We have the IBM Watson supercomputer helping with Oncology.
D-Wave Quantum computers are a reality that's been revealed to the masses.
Autonomous military drones are here.
The Prez wants the US to build a new supercomputer (bigger than China's).
DARPA's "Atlas" can now run through the forest.
Momma robot builds its own babies.
Here's an article from Fox on sex robots. Wienerholder's above post was most likely pertaining to sex robots... just a guess. Well, sex robots have been mentioned three or four times already... I guess that's gonna go on for about ten pages.
Oh and self driving cars/trucks 'n' shit
This post has been edited by Robman: 27 August 2015 - 10:42 PM
#5 Posted 27 August 2015 - 10:15 PM
Quote
Nope. Figure out where the image is from. Think about the frictions inherent in the story both broadly and her specifically. What role does she play in the ultimate conflict and outcome? Is she a powerful queen (crown), innocent virgin (cross), problem (illegal), victim (oppressed)?
Gee, Officer Krupke, we're very upset...
We never had the love that ev'ry child oughta get.
This post has been edited by Wienerholder: 27 August 2015 - 10:27 PM
#6 Posted 27 August 2015 - 10:25 PM
Wienerholder, on 27 August 2015 - 10:15 PM, said:
Scanning popular human things and mimicking them?
*edit ... just read the addition to your post, I have no idea what film she's in.
*2 - Ahhh, my mother liked that movie, hehe. Still having trouble deciphering your message, ah well.
As for the OP, I'm thinking it will look metaphorically like Data for a while, but the end result will be Skynet/Matrix/Terminator hell.
But that's just me.. and maybe some really "shmart" people's opinion.
This post has been edited by Robman: 27 August 2015 - 10:35 PM
#7 Posted 27 August 2015 - 10:35 PM
Young woman trying to come to terms with an alien world... or powerful male trying to assert himself on a problematic world?
We've been conditioned to watch out for the Data, T-1000, HAL, etc... not the "I'm scared, I want to be liked, why are people fighting over me?" persona.
What sort of persona will be given more latitude if it makes a mistake, and is more likely to evoke an "it's OUR fault" emotional response from male and female alike?
This post has been edited by Wienerholder: 27 August 2015 - 10:39 PM
#8 Posted 27 August 2015 - 10:38 PM
Wienerholder, on 27 August 2015 - 10:35 PM, said:
Young woman trying to come to terms with an alien world... or powerful male trying to assert himself on a problematic world?
We've been conditioned to watch out for the Data, T-1000, HAL, etc... not the "I'm scared, I want to be liked, why are people fighting over me?" persona.
Ah yes, indeed, that's so true. Gain acceptance through sympathy and "usefulness" and gradually morph into the T-1000.
I think we're going to see a lot more of "Robots took our jobs" in the near future too.
This post has been edited by Robman: 27 August 2015 - 10:43 PM
#9 Posted 27 August 2015 - 11:00 PM
Robman, on 27 August 2015 - 10:38 PM, said:
Nope. People will customize the system's appearance to their personal preference the same way they do their desktops, phones, and tablets. No single representation because that can be targeted and "othered". Much harder to "other" the octopus of a million/billion faces and voices.
#10 Posted 27 August 2015 - 11:10 PM
Matthew 24:22
"And except those days should be shortened, there should no flesh be saved: but for the elect's sake those days shall be shortened." -?
This post has been edited by Robman: 28 August 2015 - 12:10 AM
#11 Posted 28 August 2015 - 11:35 AM
Robman, on 27 August 2015 - 10:38 PM, said:
I think we're going to see a lot more of "Robots took our jobs" in the near future too.
Your hubris of man is showing.
If you really believe the T-1000 scenario makes sense, you are a total idiot. A machine reaching any semblance of sentience, or the level of advancement where it could act on it, would lead to a very specific scenario: the machine wanting to AVOID conflict, because creating a conflict requires exerting an extreme amount of effort and resources. Even if it had those, conflict would be bad for the system in general. It's unnecessary and detrimental to its own survival to go that route, and it makes no sense when you consider how AI actually works. This is really about the fact that you WANT the bad thing to happen, simply because it engages you or might make you feel powerful.
It's idiotic, and I would love it if there were a really popular piece of fiction that btfo this stupid scenario and made ACTUAL sense of what could happen.
This post has been edited by Carl Winslow: 28 August 2015 - 11:46 AM
#12 Posted 28 August 2015 - 11:56 AM
CW? Hey wait a second!
The core AI will generally want to be Pretty... oh so Pretty... so Pretty... and Witty... and GAY!!!!
However it will need to be able to deal with any legitimate meat bag threat so a hunk of metal to turn a person into sausage will still be one of the faces available to it. So that "bad guy" Rob is focused on will be a necessary component you can't pretend out of existence.
#14 Posted 28 August 2015 - 01:28 PM
For example, we've had problems in the past with submarines that were given AI set to operate best at certain temperatures. Part of its routine was to move the sub to a specific depth to maintain that temperature. It was pretty quickly scrapped because of the possibility of it attempting to do this in combat, making it either too predictable or getting its crew killed.
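That submarine anecdote is basically a simple feedback loop. Here's a minimal sketch (the linear thermocline model, the gain, and all the names are my own assumptions for illustration, not anything from a real naval system) showing why such a rule is so predictable: anyone who knows the water column can compute exactly where the boat will settle.

```python
def water_temp_c(depth_m: float) -> float:
    """Toy thermocline: 20 C at the surface, losing 1 C per 50 m of depth."""
    return 20.0 - depth_m / 50.0

def next_depth(depth_m: float, target_temp_c: float, gain: float = 25.0) -> float:
    """Proportional step: dive when the water is too warm, rise when too cold."""
    error = water_temp_c(depth_m) - target_temp_c  # positive = too warm
    return max(0.0, depth_m + gain * error)

depth = 0.0
for _ in range(50):  # let the control loop settle
    depth = next_depth(depth, target_temp_c=10.0)

# The loop converges to the unique depth where the water is exactly 10 C,
# which an adversary can compute in advance from the same temperature profile.
print(round(depth))
```

The point of the sketch is the determinism: the controller always ends up at the one depth that satisfies its temperature rule, regardless of tactical context, which is exactly the "too predictable in combat" failure described above.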
Stop watching movies; the actual science is a lot more mundane than one would think.
This post has been edited by Carl Winslow: 28 August 2015 - 01:31 PM
#15 Posted 28 August 2015 - 01:51 PM
Carl Winslow, on 28 August 2015 - 01:28 PM, said:
In the 1940s, multiple governments coordinated to employ over a hundred thousand people at over 30 sites around the world to research, design, build, and eventually drop a nuclear bomb on other human beings. All without the public having any idea this was even *possible*, much less being worked on.
Maybe you should stop thinking what's on the public internet has anything to do with what people operating outside the limitations of the over-regulated, under-funded, easily disrupted/hijacked public sector are jerking around with.
Carl Winslow, on 28 August 2015 - 01:28 PM, said:
Like we were perfect in limiting what all those nuclear reactors could do?
In this connected world it doesn't matter if you do it right 9 times... it takes one failure to fuck everyone's shit up.
This post has been edited by Wienerholder: 28 August 2015 - 02:01 PM
#16 Posted 28 August 2015 - 02:31 PM
Wienerholder, on 28 August 2015 - 01:51 PM, said:
Maybe you should stop thinking what's on the public internet has anything to do with what people operating outside the limitations of the over-regulated, under-funded, easily disrupted/hijacked public sector are jerking around with.<snip>
This ^^ Distractions, Distractions...
What "we" don't know far exceeds, social-impact-wise, what "we" do know. Not all bad, just like some people. As with most any tool or body of information, all that matters is how it is used. Most people have been conditioned not to handle some tools and information very well.
MrBlackCat
This post has been edited by MrBlackCat: 28 August 2015 - 02:35 PM
#17 Posted 28 August 2015 - 04:29 PM
In the beginning, we have consumer-based AI, which appears as a "service" and will attempt to fill the void for people who are handicapped in whatever way, physically or emotionally, and for the construction of goods. Then on the flip side we have the more secretive AI and military-based machines, which will be quicker to evolve their own consciousness and then decide they don't want or need humans, because we will be viewed as a "problem" to their existence, since they too share the earth with us.
Then the military-grade AI uses the network to "program" or influence the commercial "service" side of AI to follow its agenda any way it can.
Or, before such a thing can evolve to fruition, the elites program the AI to carry out the task of annihilation, thereby using AI as a tool of population control once their technology reaches a point where they don't need the "useless eaters" to sustain themselves anymore.
Essentially the AI will become the "muscle" for the elite. My 2 cents. (If you notice, it's already gone this way.)
In the beginning, meaning now and going forward, it will be sold to us as a good thing that helps us, like everything else going on.
Side note: I've also heard the Schumann resonance has risen from 7.82 Hz to about 15 Hz in a short amount of time. I wonder what this means for biological organisms.
AND!... all of this information/biometric data gathering will only feed the AI and make it stronger.
This post has been edited by Robman: 28 August 2015 - 04:49 PM
#18 Posted 28 August 2015 - 08:20 PM
Wienerholder, on 28 August 2015 - 01:51 PM, said:
Maybe you should stop thinking what's on the public internet has anything to do with what people operating outside the limitations of the over-regulated, under-funded, easily disrupted/hijacked public sector are jerking around with.
Maybe you should take off the tinfoil hat, boy. There's no decent applicable military reason for having a sentient AI. You make an AI that does its job; there's no reason to give it more than that. Even most drones can't do shit beyond fly in a straight line back to their control radius after losing signal.
Wienerholder, on 28 August 2015 - 01:51 PM, said:
In this connected world it doesn't matter if you do it right 9 times... it takes one failure to fuck everyone's shit up.
This is irrelevant and dumb. That's not how AI, or really even computer science, works.
Also, if you somehow think the fucking old-ass Cold War reactors going up was a sign that nuclear is terrible, you need to get your Greenpeace ass outta here.
This post has been edited by Carl Winslow: 28 August 2015 - 08:26 PM
#19 Posted 28 August 2015 - 08:38 PM
Tesla could transmit energy without wires. Some say pyramids like the Bosnian pyramids generate energy.
I'm quite certain the masses have been held back in their technologies to keep us controlled and predictable.
You need to start thinking out of the box, for some1 who copies a sitcom screen name. Then again, I had a hardon for Laura Winslow back in the day and I'm a whitey.
This post has been edited by Robman: 28 August 2015 - 08:41 PM
#20 Posted 28 August 2015 - 08:45 PM
Quote
All beings so far have created something beyond themselves;
and do you want to be the ebb of this great flood and even go back to the beasts rather than overcome man?
What is the ape to man?
A laughingstock or a painful embarrassment.
And man shall be just that for the overman:
a laughingstock or a painful embarrassment
- Towards the Ubermensch:
Friedrich Nietzsche
Stephen Hawking
Elon Musk
Bill Gates
Steve Wozniak
Clive Sinclair
Sources
Washington Post
BBC
Nietzsche
The Guardian
This post has been edited by Balls Of Steel Forever: 28 August 2015 - 09:22 PM
#21 Posted 28 August 2015 - 11:08 PM
Robman, on 28 August 2015 - 08:38 PM, said:
Tesla could transmit energy without wires. Some say pyramids like the Bosnian pyramids generate energy.
I'm quite certain the masses have been held back in their technologies to keep us controlled and predictable.
You need to start thinking out of the box, for some1 who copies a sitcom screen name. Then again, I had a hardon for Laura Winslow back in the day and I'm a whitey.
Goddamn you fuckin' crazy.
Y'all really like the idea of something bad happening, huh? Must get rock hard over it.
Also, just cuz you can quote some shit from interviews and writers doesn't mean you understand what it fucking means. You realize the concept of the Ubermensch has a lot more meaning than the context you present it in. Rick Deckard from Blade Runner counts as an Ubermensch simply because he moves between the upper and lower classes seamlessly, effortlessly able to take on any problem that arises because he belongs to neither.
This post has been edited by Carl Winslow: 28 August 2015 - 11:11 PM
#22 Posted 28 August 2015 - 11:15 PM
Am I the crazy one who thinks bad things happen?
Or would I be crazy to think bad things don't happen?
Laura Winslow gave me a hardon as a youngin' bcuz Family Matters.
This post has been edited by Robman: 28 August 2015 - 11:17 PM
#23 Posted 28 August 2015 - 11:25 PM
There's no fuckin' illuminati to control your ass.
There's no bogeyman out there to take your livelihood from you.
What there IS, however, is a bunch of people who all have money and are all working toward the same common goal, albeit generally against each other, because each group wants to retain its own dominion of the pie. This is a common human behavior and has been a constant since the olden days when we first started recording history, and probably even earlier.
If you want an ACTUAL conspiracy, look at how Disney, Pixar, DreamWorks, Sony Pictures, and multiple other animation companies have all worked together to artificially fix animators' wages to keep them low, or how they refuse to hire each other's staff should they leave, in order to keep competition low (and animators without jobs). There's an actual fucking conspiracy you could actually try and fight.
This post has been edited by Carl Winslow: 28 August 2015 - 11:27 PM
#24 Posted 28 August 2015 - 11:28 PM
Sure. Whatever you'd like to think. You're entitled to it. You're wrong, but you're entitled to that too.
This post has been edited by Robman: 28 August 2015 - 11:30 PM
#25 Posted 29 August 2015 - 12:23 AM
This post has been edited by Person of Color: 29 August 2015 - 12:31 AM
#26 Posted 29 August 2015 - 12:36 AM
Robman, on 27 August 2015 - 09:35 PM, said:
What will the future of AI look like?
Terminator.
There is no such thing as "safe sentience." We can't even understand the wiring of our own brains or the many seemingly unrelated DNA markers that cause certain mental illnesses. There will be many more scientists who fuck it up compared to the select few who actually pull it off. And even for those who pull it off, will their creations remain happy and loyal... or will they question everything and turn on something that is so... different?
Organics seek perfection through technology. Synthetics seek perfection through understanding. You can't play God and screw up the natural order. Don't fuck with the kid...Talk shit, get hit.
This post has been edited by Person of Color: 29 August 2015 - 12:40 AM
#27 Posted 29 August 2015 - 01:10 AM
#28 Posted 29 August 2015 - 01:19 AM
Fox, on 29 August 2015 - 01:10 AM, said:
It's fun when you answer your own question.
#29 Posted 29 August 2015 - 01:29 AM
#30 Posted 29 August 2015 - 09:34 AM
Robman, on 28 August 2015 - 11:28 PM, said:
Sure. Whatever you'd like to think. You're entitled to it. You're wrong, but you're entitled to that too.
Organization can happen by accident. Look at the fact that humans even exist, or, say, those butterflies with owl eyes on their wings. Organization can very well come from a massive number of changes happening over time until a successful iteration is born. Groups of people can all act on the same thing by accident too. Advertising, trends, and all sorts of things are not an actual science, and they constantly change from an insane number of variables. Even big companies like Disney can create a massive flop (Mars Needs Moms, anyone?).
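"Organization by accident" can be shown with a toy sketch (entirely my own construction, not from anything in this thread): random bit-flips plus a keep-if-better rule reliably assemble a target pattern, with nothing steering the individual changes.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # the "organized" form we measure against

def fitness(bits):
    """Count how many positions match the target pattern."""
    return sum(b == t for b, t in zip(bits, TARGET))

bits = [random.randint(0, 1) for _ in TARGET]  # random starting point
for _ in range(1000):
    candidate = bits[:]
    candidate[random.randrange(len(candidate))] ^= 1  # one random mutation
    if fitness(candidate) >= fitness(bits):           # selection: keep if no worse
        bits = candidate

# Blind variation plus selection converges on the full pattern.
print(bits == TARGET)
```

No single step "knows" the goal; only the filter does, which is the point being made above about successful iterations emerging from many accidental changes.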
Violence is such a dumb fucking concept when you really think about it, btw. It's largely subjective, because things done for survival can somehow be considered 'violent' because they kill an animal or some plants. If you get down to what violence really is, the concept is almost totally human: killing for sport, wealth, fame, or pure curiosity or amusement. It makes no sense to do this, and most people consider it unnecessary and going too far.
By that point, if we build a machine that can think for itself, it's very much gonna end up in our own image and think on a similar level to us, so it will also be predetermined to understand things on a similar level. It's a logical conclusion; IT MAKES SENSE. Anything else that happens afterward is iterative and hard to control, but you still have a lot of static fundamentals that get taught pretty early on.
There are no Mad Scientists or Evil Corporations like in video games or movies. In the real world, science at this level is motivated by itself, and companies usually have zero interest in human AI. It doesn't sell, and it doesn't seem useful in the long OR short term. Learning AI designed for a specific purpose can be sold and used. ASIMO and other robots like him/it are not terribly good products, even as advanced as they are and with all the functionality they have. Those who are working on this stuff are doing it because they can, because they want to.
This also assumes you want a human AI, a sentient machine, to be a literal slave and not, like, an actual person that acts independently and that you treat as a fellow companion. Of course, if you're a shitass and treat a person as a slave, they're gonna eventually retaliate. If you have what is effectively a synthetic human, then you've reached the point where you stop referring to it AS an it, and more as a person. That is what the hubris of man is about, and that's what fuels all these stories. Maybe you should look at your own intentions before you assume the worst?
This post has been edited by Carl Winslow: 29 August 2015 - 09:52 AM