Artificial Intelligence "What's the future look like?"
#31 Posted 29 August 2015 - 11:40 AM
I have written learning and "AI" programs since the late '80s... for fun, sport, hobby, whatever. So from that perspective, let me say this...
Dangerous? Not likely more dangerous than anything else humans make, as far as I am concerned. We "control" things through regulation... Why fool with AI when scripting can provide any "need" we would want from "robot servants," for example? Has anyone here ever actually written AI? It is still limited by whatever stack of algorithms we provide. Has anyone ever tried to write a self-modifying algorithm? It is still dependent on limits... most of mine just "gray goo" the database, but as I said, I am/was just at the hobby level.
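For anyone who hasn't tried it, here is a minimal sketch (my own hypothetical toy, not any real project) of what a "self-modifying" program amounts to: it rewrites its own rule list as it runs, but every rewrite is drawn from a fixed set of operations we provided up front, so it never escapes those limits.

```python
import random

# Fixed set of operations the program was given -- its hard limit.
OPS = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "dbl": lambda x: x * 2,
}

def run(steps, seed=0):
    rng = random.Random(seed)
    rules = ["inc"]          # the program's current "code"
    value = 1
    for _ in range(steps):
        for name in rules:   # execute the current rule list
            value = OPS[name](value)
        # "self-modification": mutate the rule list itself...
        if rng.random() < 0.5:
            rules.append(rng.choice(list(OPS)))
        elif len(rules) > 1:
            rules.pop(rng.randrange(len(rules)))
    return value, rules

value, rules = run(10)
# ...but no matter how it rewrites itself, every rule is still from OPS
assert set(rules) <= set(OPS)
```

However long you let it mutate, the behavior space is bounded by the operation stack you handed it, which is the point being made above.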
One of the pitfalls I see as a bigger threat than some AI run amok is some entity using public perception to pretend a problem exists, so that this "AI problem" is basically scripted and controllable by those in political power.
Random Example Scenario: " OH Nooo's... The evil AI is shutting down all the power grids, but we are sending TRON in to fix it... hopefully some time in the next couple of years, he can get the evil AI stopped. Until then, we all need to work seven days a week and taxes will have to double to fund TRON. Blah Blah Blah... " Something like that IS possible with the rising levels of ignorance and greed in the world today.
In other words, I see bad people as a much bigger threat than any AI we will be developing in any of our lifetimes.
MrBlackCat
This post has been edited by MrBlackCat: 29 August 2015 - 01:49 PM
#32 Posted 29 August 2015 - 12:34 PM
Robman, on 27 August 2015 - 09:35 PM, said:
The correct answer:
#33 Posted 29 August 2015 - 02:25 PM
(The article that follows is almost entirely non-fictitious)
Then read this:
About the Russian Dead Hand Nuclear System
Specifically this quote:
(these are questions that are based on the prior articles)
This post has been edited by Balls Of Steel Forever: 29 August 2015 - 05:23 PM
#35 Posted 29 August 2015 - 05:23 PM
Fox, I already covered the Sex Robot thing in post #4 .. even referenced you. Hell, I even brought up the same skank bot.
BlackCat, I already also brought up the malicious intent of "humans" .. covered that base also. For someone so intent on poo-pooing the original string of posts, you're basically regurgitating what's already been said, but with a lack of foresight.
How about we all keep in mind we already have "Atlas," which is basically a primitive T-1000.
This post has been edited by Robman: 29 August 2015 - 05:27 PM
#36 Posted 29 August 2015 - 05:36 PM
This post has been edited by Carl Winslow: 29 August 2015 - 05:36 PM
#37 Posted 29 August 2015 - 05:42 PM
Some would say those who wrote such fiction were given insight so as to plant ideas in the minds of the public and make technology that exists in the black or "secret" realms more palatable to the masses.
Expect terms used in fiction to actually be used in reality when those same things come to fruition.
You've got blinders on, son. Take 'em off and look around.
Oh and please .. "scientists" are nothing but tools to be swayed to fit any agenda on the money order. Compartmentalized worker bees.
They burn their brain cells out on focused projects, void of the big picture. Pretty obvious. They focus on what they can do, not wondering if they shouldn't because of any number of repercussions.
This post has been edited by Robman: 29 August 2015 - 06:26 PM
#38 Posted 29 August 2015 - 06:20 PM
Carl Winslow, on 29 August 2015 - 05:36 PM, said:
0 of the 5 people I cited were writers.
The people I cited were programmers (Bill Gates, Clive Sinclair, Steve Wozniak), inventors (Elon Musk), and scientists (Stephen Hawking).
One of them, (Elon Musk) has a manufacturing line that runs off of pre-programmed automation.
Elon Musk Robots
The only writer I cited was Harlan Ellison, because his story provides a scenario in which the Dead Hand case could've gone wrong.
And even Russia doesn't trust primitive automation
(the same automation that we use to make most of our manufactured goods)
with nuclear weapons.
(3 required inputs, one possible output)
Why do you think that is?
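Reduced to logic, "3 required inputs, one possible output" is just an AND gate. A sketch (the input names are my guesses from public accounts of the Perimeter system; the real criteria are not public):

```python
def dead_hand(authorized: bool, comms_lost: bool, detonation_detected: bool) -> bool:
    # Hypothetical gate: the single possible output can fire only
    # when ALL three required inputs are present at once.
    return authorized and comms_lost and detonation_detected

# Any one missing input keeps the output off:
assert dead_hand(True, True, True) is True
assert dead_hand(True, True, False) is False
assert dead_hand(False, True, True) is False
```

The point being that even this, the most conservative automation imaginable, was only trusted as a last resort.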
P.S. The Bible is real, I have one in my house.
This post has been edited by Balls Of Steel Forever: 29 August 2015 - 06:25 PM
#39 Posted 29 August 2015 - 06:45 PM
Let's say we create some kind of AI system and then "connect" it in some way to a physical "body"... if it is modeled after human thought, yes, it will rise up and destroy... uh, something, everything? Maybe... simply because we are SO greatly influenced by our upbringing. Gandhi vs. a radical Muslim, for instance... not that humans are blank slates, but how we are "raised" has great influence on what we choose, even to the point of our own destruction.
So if we raise it right, it might not destroy us... but what if radical Muslims create an AI in their image, as described above, vs. another group?
Remember the Three Laws? What if those algorithms were read-only filters? What could go wrong? Hehehe... but it did, in the movie. In the end, the systems weren't actually bad, but got abused by a bad remote system.
Another thing... what if, we create an AI and it kills us all? So what. The universe will be just fine without us... hell a super-virus could take us out also you know.
Reference this also, if you aren't familiar... "failure to thrive." This is one area/trap where programmers have to shape AI. It must have a reason, and that reason is scripted, basically. Just a kind of "order" to "do." I worked with a guy who does chat-bots (he stayed in, I went another direction). I have gotten to type with his work some, and they are more interesting than many people I've typed with. But they are mostly database scripts that use Google. Each one learns from every person it types with.
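A hypothetical sketch of that kind of bot, reduced to its core: a response database plus a canned fallback, learning one reply per exchange. (The real bots I typed with also queried Google, which I leave out here.)

```python
class ChatBot:
    """Toy database-script chat-bot: look up a reply, learn as it goes."""

    def __init__(self):
        self.db = {}            # learned prompt -> reply pairs
        self.last_prompt = None

    def reply(self, prompt: str) -> str:
        prompt = prompt.strip().lower()
        answer = self.db.get(prompt, "Tell me more.")
        # "Learning": remember what the user said right after our
        # last prompt, and serve it back next time that prompt recurs.
        if self.last_prompt is not None and self.last_prompt not in self.db:
            self.db[self.last_prompt] = prompt
        self.last_prompt = prompt
        return answer

bot = ChatBot()
bot.reply("hello")       # no entry yet -> "Tell me more."
bot.reply("hi there")    # bot learns: "hello" -> "hi there"
print(bot.reply("hello"))  # now answers from its database
```

It feels conversational in practice, but as said above, it is still a database script with a scripted reason to "do."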
Anyway... I believe a critical part of growing and learning is to not believe you have everything figured out.
And again, I don't consider human extinction to be a big deal really. I am comfortable with being really really small in the scope of existence. If my life only matters to myself, I am ok with that.
MrBlackCat
This post has been edited by MrBlackCat: 29 August 2015 - 06:49 PM
#40 Posted 29 August 2015 - 07:23 PM
MrBlackCat, on 29 August 2015 - 06:45 PM, said:
That's a mighty low bar and I'm not ok with that.
This post has been edited by Robman: 29 August 2015 - 07:23 PM
#41 Posted 29 August 2015 - 08:26 PM
Robman, on 29 August 2015 - 07:23 PM, said:
It is only ok for you not to be ok with "my bar position" because this is amateur speculation. But notice that an AI influenced by my image wouldn't necessarily worry about survival to the point of trying to control others based on probability. "Am I being attacked? Yes/No" ~ react. This doesn't preclude fiercely defending myself in the event of an attack on my well-being, however. I am thinking of a broader scope than that.
But what of an AI influenced in your image? I see it as dangerous, destructive and short-lived. "I might be attacked." ~ react.
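The contrast between the two reaction rules can be sketched as two toy policies (the names and the threshold are mine, purely illustrative): one reacts only to an actual attack, the other reacts to a probability estimate, so false alarms trigger it too.

```python
def reactive_agent(attacked: bool) -> str:
    # "Am I being attacked? Yes/No" ~ react only to an actual attack.
    return "defend" if attacked else "carry on"

def preemptive_agent(threat_estimate: float, threshold: float = 0.3) -> str:
    # "I might be attacked." ~ react to a probability, so a high
    # estimate with no real attack behind it still triggers action.
    return "strike first" if threat_estimate > threshold else "carry on"

print(reactive_agent(False))      # carry on
print(preemptive_agent(0.5))      # strike first, even if nothing happened
```

The second policy's failure mode is exactly the paranoia pattern described below: it acts on redirects rather than events.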
Emotions are fun life-experience enhancers that I speculate are left-overs from a time when we needed "enhancers" for our (almost non-existent) instincts of all types. Anger is a defense, in my opinion. The point is that SO many people react to emotions that are mostly redirects from matters of the mind not dealt with, and those reactions can really get us into trouble, like patterns of paranoia.
I realize this is too complex an issue for a forum thread, as there is SO much education that needs to take place before this can be discussed.
MrBlackCat
This post has been edited by MrBlackCat: 29 August 2015 - 08:27 PM
#42 Posted 29 August 2015 - 09:37 PM
What if it's more like: X amount of resources to be shared amongst the eventual AI "species" and humans?
What if they decide they don't want to share, and eventually find it logical to do away with those they have to share with?
Or scenario B: humans are deemed responsible for "killing" the planet and come to be considered inefficient and a detriment.
We already hear of "too much human population"... I think machines, programmed or free-thinking, will play a role.
This post has been edited by Robman: 29 August 2015 - 10:29 PM
#43 Posted 29 August 2015 - 10:00 PM
#44 Posted 30 August 2015 - 11:59 AM
Robman, on 29 August 2015 - 09:37 PM, said:
I could type a page of alternatives in minutes which you might never consider. For example...
Time matters to humans because of our limited life span. Most likely, the only time that might matter to machines is planetary life-span, which is tied to the local star.
Human efficiency is almost completely measured in terms of time; a machine would not likely consider time.
Robman said:
1. Thinking machines most likely won't need to share because things like planetary conditions won't matter to them for the most part. They will adapt to whatever conditions are available fluidly and instantly... we would be like ants to them in a few decades.
2. If they were designed in our image, and acknowledge such, they might even help us out in some ways. Just like we build bird houses, maintain habitats for some animals etc.
In a short story I wrote years ago, based on my disagreement with the logic of the first Terminator movie, the robots we created built a set of reactors into the earth that would provide electrical energy forever, then corrected our orbital alignment and stabilized our star on their way out. They left the planet to explore "forever," as they would never age. By my own logic, the largest single control factor on earth for humans is power generation. Nuclear fusion is very, very dangerous to the control of the world's power... our economy would be in for serious changes with the relatively safe, inexpensive, sustainable power it could provide. It has already been done... they are stalling and being stalled. Another subject, though. <ends ramble here>
We also "hear" about things like "running out of fresh water." I would laugh, but this is dangerously insane. We can desalt the oceans and deliver water anywhere in the world, easily. Water in the store now costs a third of what oil costs... they can almost desalt it for that. All military ships desalt water for their crews, en masse. Until the price of water is twice what oil costs, we don't have any problem at all, just a slight inconvenience and another "cost."
In closing, we can't imagine what "they" will do... I still stick by the idea that the concept will be used to manipulate the populations, just like overpopulation and "running out of fresh water" will.
Much to say...
MrBlackCat
This post has been edited by MrBlackCat: 30 August 2015 - 12:02 PM
#45 Posted 05 September 2015 - 04:12 AM
All this talk of computers taking over is garbage. We will hand the keys to our civilisation over to them for money and convenience and we will pat ourselves on the back as we do it. It's not if, but when.
Also, there won't be any problems with resources. Any intelligent machine will realise that there are plenty of resources in the Solar System to keep us going for millions of years. Money will be diverted from wars, cosmetics and pressing One Direction CDs to developing space mining, and problem(s) solved.
#46 Posted 05 September 2015 - 07:21 AM
This post has been edited by Mark.: 05 September 2015 - 07:23 AM