Duke4.net Forums: Artificial Intelligence

Artificial Intelligence  "What's the future look like?"

User is offline   MrBlackCat 

#31

Would someone change this thread title to "Mostly Very Poor PERCEPTIONS of Artificial Intelligence."... because that is what this is.
I have written learning and "AI" programs since the late 80's... for fun, sport, hobby whatever. So from that perspective, let me say this...

Dangerous? Not likely any more dangerous than anything else humans make, as far as I am concerned. We "control" things through regulation... Why fool with AI when scripting can provide us with any "need" we would want from "robot servants", for example? Has anyone here ever actually written AI? It is still limited by whatever stack of algorithms we provide. Has anyone ever tried to write a self-modifying algorithm? It is still dependent on limits... most of mine just "gray goo" the database, but as I said, I am/was just at the hobby level.
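That "gray goo" failure mode can be sketched in a few lines (a hypothetical toy, not any of MrBlackCat's actual programs): the rules rewrite each other, but every rewrite still comes from the fixed operators the programmer supplied, so the system never escapes its algorithm stack.

```python
import random

def run(steps, seed=0):
    """Toy self-modifying 'learner': each step, one rule overwrites another."""
    rng = random.Random(seed)
    rules = {i: i for i in range(8)}        # the "database" of rules
    for _ in range(steps):
        src = rng.choice(list(rules))
        dst = rng.choice(list(rules))
        rules[dst] = rules[src]             # self-modification: copy one rule over another
    return rules

# Left to copy rules over each other unchecked, the database tends to
# collapse toward a handful of surviving values -- the "gray goo" effect.
```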

One of the pitfalls I see as a bigger threat than some amok AI is some entity using public perception to pretend a problem exists, so that this "AI problem" is basically scripted and controllable by those in political power.

Random Example Scenario: "OH Nooo's... The evil AI is shutting down all the power grids, but we are sending TRON in to fix it... hopefully some time in the next couple of years he can get the evil AI stopped. Until then, we all need to work seven days a week and taxes will have to double to fund TRON. Blah blah blah..." Something like that IS possible with the rising levels of ignorance and greed in the world today.

In other words, I see bad people as a much bigger threat than any AI we will be developing in any of our lifetimes.

MrBlackCat

This post has been edited by MrBlackCat: 29 August 2015 - 01:49 PM

1

User is offline   Fox 

  • Fraka kaka kaka kaka-kow!

#32

View PostRobman, on 27 August 2015 - 09:35 PM, said:

What will the Future of Ai look like:
Posted Image

The correct answer:

Posted Image
3

User is offline   Balls of Steel Forever 

  • Balls of Steel Forever

#33

Entirely fictitious, but related to the questions afterwards:
Spoiler

(This article afterwards is almost entirely non-fictitious)
Then read this:
About the Russian Dead Hand Nuclear System
Specifically this quote:

Quote

The Soviets did look briefly at totally computer driven automatic system and they decided against it opting for a human firewall in a deep, safe bunker.


(these are questions that are based on the prior articles)
Spoiler


This post has been edited by Balls Of Steel Forever: 29 August 2015 - 05:23 PM

0

#34

Stop acting like sci-fi is proof, you dingledork.
0

User is offline   Robman 

  • Asswhipe [sic]

#35

Carl is clearly smarter than Musk, Gates, Hawking, Jobs, Roddenberry, Spielberg etc..

Fox, I already covered the Sex Robot thing in post #4 .. even referenced you. Hell, I even brought up the same skank bot.

BlackCat, I also already brought up the malicious intent of "humans"... covered that base too. For someone so intent on poo-pooing the original string of posts, you're basically regurgitating what's already been said, but with a lack of foresight.

How about we all keep in mind that we already have "Atlas", which is basically a primitive T-1000.

This post has been edited by Robman: 29 August 2015 - 05:27 PM

1

#36

Your sense of foresight is citing interviews (from people who are mostly not programmers, scientists, or anything beyond writers) from before proper AI was even feasible. A bunch of bullshit fiction, much like the Bible and the Illuminati (which 'smart people' like to claim are real).

This post has been edited by Carl Winslow: 29 August 2015 - 05:36 PM

0

User is offline   Robman 

  • Asswhipe [sic]

#37

Not sure if you've noticed, but reality strives to mimic fiction and make it reality.

Some would say those who wrote such fiction were given insight so as to plant ideas in the minds of the public and make the technology that exists in the black or "secret" realms more palatable to the masses.

Expect terms used in fiction to actually be used in reality when those same things come to fruition.

You've got blinders on son. Take em off and look around.

Oh and please... "scientists" are nothing but tools to be swayed to fit any agenda on the money order. Compartmentalized worker bees.
They burn their brain cells out on focused projects, devoid of the big picture. Pretty obvious. They focus on what they can do, not wondering whether they shouldn't, because of any number of repercussions.

This post has been edited by Robman: 29 August 2015 - 06:26 PM

0

User is offline   Balls of Steel Forever 

  • Balls of Steel Forever

#38

View PostCarl Winslow, on 29 August 2015 - 05:36 PM, said:

Your sense of foresight is citing interviews (from people who are mostly not programmers, scientists, or anything beyond writers) from before proper AI was even feasible. A bunch of bullshit fiction, much like the Bible and the Illuminati (which 'smart people' like to claim are real).

None of the five people I cited were writers.
The people I cited were programmers (Bill Gates, Clive Sinclair, Steve Wozniak), inventors (Elon Musk), and scientists (Stephen Hawking).

One of them, (Elon Musk) has a manufacturing line that runs off of pre-programmed automation.
Elon Musk Robots

The only writer I cited was Harlan Ellison, because his story provides a scenario in which the Dead Hand case could've gone wrong.

And Russia doesn't trust even primitive automation,
(the same automation that we use to make most of our manufactured goods)
with nuclear weapons.
(three required inputs, one possible output)
Why do you think that is?

P.S. The Bible is real, I have one in my house.

This post has been edited by Balls Of Steel Forever: 29 August 2015 - 06:25 PM

0

User is offline   MrBlackCat 

#39

Ok... let me go full tin hat then... I will turn off my software-based exposure/experiments with AI.


Let's say we create some kind of AI system and then "connect" it in some way to a physical "body"... if it is modeled after human thought, yes, it will rise up and destroy... uh, something, everything? Maybe... simply because we are SO greatly influenced by our upbringing. Gandhi vs. a radical Muslim, for instance... not that humans are blank slates, but how we are "raised" has great influence on what we choose, even to the point of our own destruction.

So if we raise it right, it might not destroy us... so what if radical Muslims create an AI in their image, as described above, vs. another group?
Remember the three laws? What if those algorithms were read-only filters? What could go wrong? Hehehe... but it did, in the movie. In the end, the systems weren't actually bad, but got abused by a bad remote system.

Another thing... what if we create an AI and it kills us all? So what? The universe will be just fine without us... hell, a super-virus could take us out too, you know.

Reference this also, if you aren't familiar... "failure to thrive". This is one area/trap where programmers have to shape AI. It must have a reason, and that reason is scripted, basically. Just a kind of "order" to "do". I worked with a guy who does chat-bots (he stayed in, I went another direction). I have gotten to type with his work some, and they are more interesting than many people I've typed with. But they are mostly database scripts that use Google. It learns from each person it types with.
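The kind of database-backed chat-bot described above can be sketched minimally (a hypothetical toy design, not the friend's actual program):

```python
class ChatBot:
    """Minimal sketch of a database-driven chat-bot: a lookup table of
    learned prompt/reply pairs with a stock fallback."""

    def __init__(self):
        self.db = {}  # learned prompt -> reply pairs, the "database script"

    def respond(self, prompt):
        # Answer from the database if this prompt has been seen before,
        # otherwise fall back to a stock reply.
        return self.db.get(prompt.lower().strip(), "Tell me more.")

    def learn(self, prompt, reply):
        # "It learns from each person it types with": store the pairing.
        self.db[prompt.lower().strip()] = reply
```

A real bot would add fuzzy matching and web search on top, but the core loop is just this: look up, fall back, learn.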

Anyway... I believe a critical part of growing and learning is to not believe you have everything figured out.
And again, I don't consider human extinction to be a big deal really. I am comfortable with being really really small in the scope of existence. If my life only matters to myself, I am ok with that.

MrBlackCat

This post has been edited by MrBlackCat: 29 August 2015 - 06:49 PM

1

User is offline   Robman 

  • Asswhipe [sic]

#40

View PostMrBlackCat, on 29 August 2015 - 06:45 PM, said:

If my life only matters to myself, I am ok with that.

MrBlackCat

That's a mighty low bar and I'm not ok with that.

This post has been edited by Robman: 29 August 2015 - 07:23 PM

0

User is offline   MrBlackCat 

#41

View PostRobman, on 29 August 2015 - 07:23 PM, said:

That's a mighty low bar and I'm not ok with that.
Fell right into that example trap... Point: there is no universal bar, and its position is a choice you make for you. By suggesting it is universal, you are basically shoving your belief onto me. Unwise. It doesn't matter if you are or are not "ok with that"; it is how you choose to react to it that matters. Until you react, I most likely won't.

It is only ok for you not to be ok with "my bar position" because this is amateur speculation. But notice an AI influenced by my image wouldn't necessarily worry about survival to the point of trying to control others based on probability. "Am I being attacked? Yes/No" ~ React. This doesn't preclude fiercely defending myself in the event of an attack on my well-being, however. I am thinking of a broader scope than that.
But what of an AI influenced in your image? I see it as dangerous, destructive and short-lived. "I might be attacked." ~ React
Emotions are fun life-experience enhancers that I speculate are left over from a time when we needed "enhancers" for our (almost non-existent) instincts of all types. Anger is a defense, in my opinion. The point being that SO many people react to emotions that are mostly redirects from matters of the mind not dealt with; they can really get us into trouble, like patterns of paranoia.
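The two dispositions contrasted above can be put in code (a hypothetical illustration, nobody's actual logic set): the reactive agent acts only on an actual attack, while the preemptive agent acts on the mere probability of one.

```python
def reactive_agent(attacked: bool) -> str:
    # "Am I being attacked? Yes/No" ~ react only when the answer is yes
    return "defend" if attacked else "ignore"

def preemptive_agent(attack_probability: float, threshold: float = 0.2) -> str:
    # "I might be attacked." ~ react on probability alone,
    # so it can strike first with nobody actually attacking
    return "strike first" if attack_probability > threshold else "ignore"
```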


I realize this is too complex an issue for a forum thread, as there is SO much education that needs to take place before this can be discussed.

MrBlackCat

This post has been edited by MrBlackCat: 29 August 2015 - 08:27 PM

0

User is offline   Robman 

  • Asswhipe [sic]

#42

What if it's not as simple as "I might be attacked" - react.

What if it's more like: X amount of resources to be shared amongst the eventual AI "species" and humans.

What if they decide they don't want to share and eventually find it logical to do away with those they have to share with?

Or scenario b: humans are deemed responsible for "killing" the planet and come to be considered inefficient and a detriment.

We already hear of "too much human population"... I think machines, programmed or free-thinking, will play a role.

This post has been edited by Robman: 29 August 2015 - 10:29 PM

1

#43

View PostBalls Of Steel Forever, on 29 August 2015 - 06:20 PM, said:

words


Hi, I wasn't replying to you.
0

User is offline   MrBlackCat 

#44

View PostRobman, on 29 August 2015 - 09:37 PM, said:

What if it's not as simple as "I might be attacked" - react.
Of COURSE it isn't that simple. One logic set I used had 3,500 lines of code, for example. I couldn't even post something like that here in the "one-liner" and "too long; didn't read" world of forums. Which is why I will point out again that a subject like this can't be discussed with any seriousness on a forum. To me, forums are a place where you can state your views, maybe, but if the reasons for those views can be stated in the space supplied, they aren't likely very well thought out. They are more likely "momentary reactions to others' views". Not really thought out.

Robman said:

What if it's more like: X amount of resources to be shared amongst the eventual AI "species" and humans.
Pointless speculation... what about...
I could type a page of alternatives in minutes which you might never consider. For example...
Time matters to humans because of limited life span. Most likely, the only time that might matter to machines is planetary life-span, which is most likely tied to the local star.
Human efficiency is almost completely based on measures of time; a machine would not likely consider time.


Robman said:

What if they decide they don't want to share and eventually find it logical to do away with those they have to share with?
Most likely, machines would evolve exponentially, and humans would probably be ignored, for a few reasons. (In my opinion, experience, and pointless speculation...)
1. Thinking machines most likely won't need to share, because things like planetary conditions won't matter to them for the most part. They will adapt to whatever conditions are available, fluidly and instantly... we would be like ants to them within a few decades.
2. If they were designed in our image, and acknowledge such, they might even help us out in some ways, just like we build bird houses, maintain habitats for some animals, etc.

In a short story I wrote years ago, based on my disagreement with the logic of the first Terminator movie, the robots we created built a set of reactors into the earth that would provide electrical energy forever, then corrected our orbital alignment and stabilized our star on their way out. They left the planet to explore "forever", as they would never age. By my own logic, the largest single control factor on earth for humans is power generation. Nuclear fusion is very, very dangerous to the control of the world's power... our economy would be in for serious changes with the relatively safe, inexpensive, sustainable power it could provide. It has already been done... they are stalling and being stalled. Another subject, though. <ends ramble here>

Robman said:

Or scenario b: humans are deemed responsible for "killing" the planet and come to be considered inefficient and a detriment.
We aren't hurting the planet for anyone but ourselves and the temporary life forms here. Nature will do fine after we are long gone. Things just change.

Robman said:

We already hear of "too much human population"... I think machines, programmed or free-thinking, will play a role.
Exactly... "we hear" because we are being beat down. Basically everyone in the world could fit in Texas were it fitted with some multi-story apartments. Things would be different, but we are not nearly over-populated. The way we live might need to change from what it is now, but we will adapt. The dangerous people are the ones who don't want to compromise their ways and who live solely on the labor of others. They run the world, and can't handle the power. (Not suggesting I could either, of course, but I would like to believe I could.)
We also "hear" about things like "running out of fresh water". I would laugh, but this is dangerously insane. We can de-salt the oceans and deliver water anywhere in the world, easily. Water in the store now costs 1/3 of what oil costs... they can almost de-salt it for that. All military ships de-salt water for their crews, en masse. Until the price of water is twice what oil costs, we don't have any problem at all, just a slight inconvenience and another "cost".

In closing, we can't imagine what "they" will do... I still stick by the idea that the concept will be used to manipulate the populations, just like overpopulation and "running out of fresh water" will.

Much to say...

MrBlackCat

This post has been edited by MrBlackCat: 30 August 2015 - 12:02 PM

1

User is offline   Tea Monster 

  • Polymancer

#45

I remember when I was in high school. There was a movie called 'Future Shock' which basically said "Don't fret about the pace of technology; we can always just switch the machines off". Of course, that was complete bull. If we switched off all the computers today, we would have no air travel, no space travel. Industries depend on computers to make decisions that keep them competitive and in business.

All this talk of computers taking over is garbage. We will hand the keys to our civilisation over to them for money and convenience, and we will pat ourselves on the back as we do it. It's not if, but when.

Also, there won't be any problems with resources. Any intelligent machine will realise that there are plenty of resources in the Solar System to keep us going for millions of years. Money will be diverted from wars, cosmetics and pressing One Direction CDs to developing space mining, and problem(s) solved.
0

User is offline   Mark 

#46

We will always have a way to disable a rogue AI: either through backdoor hacking, a few well-placed EMPs (large or portable), or flipping off other power switches. Backup power can only last so long. That is, of course, if the touchy-feelies don't give the bots human status and protections.

This post has been edited by Mark.: 05 September 2015 - 07:23 AM

0

User is offline   Daedolon 

  • Ancient Blood God

#47

Write an AI so good the developer falls in love with it and inhibits all ways of disabling the AI.
0

All copyrights and trademarks not owned by Voidpoint, LLC are the sole property of their respective owners. Play Ion Fury! ;) © Voidpoint, LLC
