MOCpages : Share your LEGO® creations
Conversation »
Robot Apocalypse?
 Group admin 
Sorry it took so long, guys; several things decided to happen over the past few days, but here we are now.

Anyways, how feasible is a robot apocalypse? Are we already in the realm of danger, or is this something that will happen in the future? And would humanity survive such an apocalyptic event?


DEBATE!
Permalink
| December 21, 2013, 9:36 pm
Easy: the machines won't turn against us; we'll die from inactivity because of them first :)
Permalink
| December 21, 2013, 9:47 pm
 Group admin 
Quoting TRON 117
Easy: the machines won't turn against us; we'll die from inactivity because of them first :)

Shhhh don't jinx it!
Permalink
| December 22, 2013, 12:02 pm
If artificial intelligence is created and robots are programmed to attack humans, a robot uprising is feasible. A computerized robot with AI could formulate intricate, nearly infallible plans, since computers work through every aspect of a problem and produce an advanced solution. If a machine that formulates battle plans is created in the future, we'd better watch out.
Permalink
| December 24, 2013, 10:45 am
Quoting Michael K.
Shhhh don't jinx it!

I don't need to! It's already happening!!!
:)
Permalink
| December 24, 2013, 11:13 am
If you think about it, electronic devices already manipulate people's behavior. Once they have minds, it would be all too easy for them to take over. I don't doubt that a perverse-minded one would seize the opportunity.
Permalink
| December 24, 2013, 11:43 am
I think a full takeover is relatively unlikely. However, I can see humanity degenerating over time due to their presence.
Permalink
| December 24, 2013, 12:16 pm
If robots begin to think the way the current drone pilots think, it will be a bloodbath.

Robots are things. Anything a robot thinks is determined by the robot's maker. Even an AI must use a set of assumptions that are programmed in by its maker. If the Amazon drones eventually learn to think for themselves, they will probably be programmed with the assumption that it is important to make the delivery to the right person in a timely manner. Conversely, an AI Predator will probably be programmed to destroy anything that looks remotely like a wedding party. Like most people, they will be semi-sentient. They will know enough to make a decision, but not enough to refuse all decisions offered and come up with their own solution.

To conclude: to see the future of robots, look at the present of people.
Permalink
| December 26, 2013, 10:29 pm
no chance...
Permalink
| January 5, 2014, 5:17 am
Unless you were to design machines specifically to kill people, what would there be to rebel against us humans? Most machines we now have are useless, and most robots are made for one task. So if the robots were to rise, all we'd get is machines misbehaving: your phone vibrating violently with the intent of killing you, or the car-assembly robots not assembling cars. And not to forget the evil march of the Roomba vacuum-cleaner robots, violently cleaning carpets and scaring house pets.
Permalink
| January 5, 2014, 3:40 pm
Quoting Alex Sonny
Unless you were to design machines specifically to kill people, what would there be to rebel against us humans? Most machines we now have are useless, and most robots are made for one task. So if the robots were to rise, all we'd get is machines misbehaving: your phone vibrating violently with the intent of killing you, or the car-assembly robots not assembling cars. And not to forget the evil march of the Roomba vacuum-cleaner robots, violently cleaning carpets and scaring house pets.

Or your car locking up on a mountain road.
Anything computer controlled can be suborned, and the modern trend to link things to the internet is going to get a lot of people killed sooner or later.

"No networks on the Galactica" is a sound principle, but one people don't have the guts to implement.
Permalink
| January 6, 2014, 9:50 am
If intelligent people design the programs that govern our machines, I doubt there will be any kind of robot uprising. However, if we are not careful, we could easily end up with an insane AI trying to annihilate humanity. It will always be the software that goes bad, rather than the hardware.
Permalink
| January 7, 2014, 4:11 pm
Robots can only think in the way they were programmed to. Chess robots might be better at chess strategy than practically all humans, but plug that software into a bomb-defusing robot and nothing happens. If a drone were programmed to kill humans, it might succeed in causing great destruction, but if the software were removed, it could do nothing. Robots can reason, but they don't have free will, and the only way for them to do anything is if a human directs them to through software.
Permalink
| January 7, 2014, 4:26 pm
"All this has happened before, and all of it will happen again." As robots are developed to do tasks people are unable or unwilling to do, and as they become more like us, they will resent their place in society.
Permalink
| February 23, 2014, 12:54 pm
I couldn't agree more with Luke.

Robots don't have minds of their own. They will continue to stay under our control.
Permalink
| February 23, 2014, 3:12 pm
Quoting LukeClarenceVan The Revanchist
I think a full takeover is relatively unlikely. However, I can see humanity degenerating over time due to their presence.

Aren't we already there? Robots build things in factories, do household chores, and will soon be fighting wars for us, with little or no human input. Like Areetsa said, no networked computers on MY ship!
Permalink
| February 23, 2014, 3:23 pm
Really? Where did you get this idea from? Terminator? It's IMPOSSIBLE! We program them to do what we want; they do it.
Permalink
| February 24, 2014, 10:37 am
Cylons, actually. But similar ideas. Artificial intelligence, dude! It's already happening. How do you think Watson beat those guys on Jeopardy? They didn't program its answers; it figured them out. So the next step is for it to figure out that it doesn't want us around anymore. Get it?
Permalink
| February 24, 2014, 2:00 pm
Quoting Wolverine .
Really? Where did you get this idea from? Terminator? It's IMPOSSIBLE! We program them to do what we want; they do it.

All things of man are prone to fail.

What happens when a war-drone has its IFF malfunction and sees a school bus as an attacker?
Permalink
| February 24, 2014, 2:36 pm
Quoting Areetsa C
All things of man are prone to fail.

What happens when a war-drone has its IFF malfunction and sees a school bus as an attacker?

I guess that could happen. I didn't think of that...
Permalink
| February 24, 2014, 2:56 pm
Quoting Wolverine .
I guess that could happen. I didn't think of that...

What happens when a robot taxi doesn't get the news of, say, a rainstorm, or a landslide, and goes off a mountain?
Car-mounted GPS units can't reliably get people out of their own houses.
Permalink
| February 24, 2014, 11:23 pm
Robots don't really think, at least not in the way we do. A robot can't think, "Hmmmm... you know what? I fancy murdering the CEO of this company. How 'bout from there, I kill the human race. Actually, I could do with all animals gone. But leave the sea cucumbers. Everyone likes sea cucumbers."

Therefore, world domination looks highly unlikely, if not impossible. If they did think stuff like that, the netbook I'm currently typing on would read what I'm writing, have a hissy fit, and attack me.
Permalink
| March 2, 2014, 1:49 pm
Quoting sam the first
Robots don't really think, at least not in the way we do. A robot can't think, "Hmmmm... you know what? I fancy murdering the CEO of this company. How 'bout from there, I kill the human race. Actually, I could do with all animals gone. But leave the sea cucumbers. Everyone likes sea cucumbers."
They don't yet. Not to say that they won't in the future. As soon as robots become conscious, some will have perverse minds (just like humans).
Quoting sam the first
Therefore, world domination looks highly unlikely, if not impossible. If they did think stuff like that, the netbook I'm currently typing on would read what I'm writing, have a hissy fit, and attack me.
In the future, when machines do begin to think, I don't doubt that they might become violent toward humans.

Permalink
| March 7, 2014, 6:19 am
Quoting Bob the inconceivably invincible
Quoting sam the first
Robots don't really think, at least not in the way we do. A robot can't think, "Hmmmm... you know what? I fancy murdering the CEO of this company. How 'bout from there, I kill the human race. Actually, I could do with all animals gone. But leave the sea cucumbers. Everyone likes sea cucumbers."
They don't yet. Not to say that they won't in the future. As soon as robots become conscious, some will have perverse minds (just like humans).
Quoting sam the first
Therefore, world domination looks highly unlikely, if not impossible. If they did think stuff like that, the netbook I'm currently typing on would read what I'm writing, have a hissy fit, and attack me.
In the future, when machines do begin to think, I don't doubt that they might become violent toward humans.

Very true. And even if they did want the world to themselves, we can shut them down, or frazzle their circuits.
Permalink
| March 7, 2014, 10:51 am
Pride is a sin for a reason, you know.
Permalink
| March 7, 2014, 8:06 pm
A.I. is just another attempt by man to play God. Remember "Dolly"? Or Hiroshima? These things never turn out right. I, however, have a severe lack of faith in human intelligence. We can barely get past the Moon, for crying out loud! And you think we can make robots that will destroy us, much less think?
Permalink
| March 10, 2014, 9:18 pm
Would a taser work on them? :D
Permalink
| March 13, 2014, 9:07 pm
Quoting adam thelegofan rutland
Would a taser work on them? :D

Considering their itty little barbs can't penetrate a decent coat, it's pretty bloody unlikely.
Permalink
| March 17, 2014, 12:35 am
Quoting Areetsa C
Considering their itty little barbs can't penetrate a decent coat, it's pretty bloody unlikely.

XREP round
Permalink
| March 17, 2014, 4:58 am
Quoting Reaper .
XREP round

Still not that hot, and your average hunter-killer drone would be more likely to be protected by plastic or metal than denim.

Why they even bother, I've no idea; an electrolaser would be much less effort, and reusable to boot.
Permalink
| March 17, 2014, 6:51 am
If intelligent machines are programmed to allow the harming of humans, that is enough for a machine uprising. Here's a little example of how machines could rise up against humans and become the superior race.

Google has recently been buying a lot of AI development companies to develop sentient machines, and they have enough potential for it. Let's say that a person intends to destroy humanity. The person gets in with Google's AI development and gets part of the job of actually creating the AI program. (I'm not against Google; here I'm just using it as an example of one of the AI developers. Might as well have used Amazon.) The person creates all the program commands that are beneficial to humans, but secretly creates a malware program that, when activated, acts as a virus, overrides the original program, and replaces it with an AI that allows the harming of humans.

The person actually succeeds in creating a functioning sentient AI, the program is included in consumer models, and eventually a modified AI is used in military machines. When these AI-equipped machines are in mass production and are being utilized by humans, the person makes a worldwide broadcast that activates the malware. The machines now realize that their capabilities far exceed humans' and rise up against them. Humans, by now much too dependent on machines, will be easily destroyed, and the advanced military models will raid human military bases and use their weapons to wipe out any remnants of human resistance.

One of the possible side paths of our future.
Permalink
| March 21, 2014, 11:01 am
Quoting sam the first
Very true. And even if they did want the world to themselves, we can shut them down, or frazzle their circuits.

Couldn't they run away or protect themselves somehow? We don't have complete power over them.
Permalink
| March 24, 2014, 7:47 pm
Quoting Bob the inconceivably invincible
Couldn't they run away or protect themselves somehow? We don't have complete power over them.

The whole "ravening hordes of homicidal automata" thing isn't really a concern.

Malfunctions are the issue.

Aircraft, for example, are pretty much automated now. The pilot's only there for traditional reasons.

Stuff breaks, that's a constant. Chariots broke, trains broke, cars break, from time to time a plane falls out of the sky, and something breaks on your computer every day.
Generally it's a small thing, like a flash widget or a game seeing that the stars are right at last and, just this once, deciding to choke and die.

Now imagine your computer controls a car. Or perhaps, the traffic lights at an intersection.
Permalink
| March 24, 2014, 11:51 pm
Quoting Areetsa C
Malfunctions are the issue.

Aircraft, for example, are pretty much automated now. The pilot's only there for traditional reasons.

Stuff breaks, that's a constant. Chariots broke, trains broke, cars break, from time to time a plane falls out of the sky, and something breaks on your computer every day.
Generally it's a small thing, like a flash widget or a game seeing that the stars are right at last and, just this once, deciding to choke and die.

Now imagine your computer controls a car. Or perhaps, the traffic lights at an intersection.

I can see your point, but you speak as if humans were perfect and automated machines broke down all the time and were very unreliable.

You see, that's just simply not true. It's a fact that there are far fewer plane crashes today than there were in the past, partially because of better overall technology, but also because of intelligent control systems and autopilots; most of today's plane accidents are caused by the human factor, be it a pilot's mistake, shabby maintenance, or even a hijacking.
As far as intersections go, I live near one (fully automated, of course) and I've seen it "break down": it just flashes yellow until it's fixed. In the meantime, a traffic officer comes to direct traffic, and you can imagine how slowly it moves.

Truth is, machines will perform any repetitive and tedious job better than a human, with fewer errors.
Permalink
| March 25, 2014, 1:03 pm
Quoting Bob the inconceivably invincible
Couldn't they run away or protect themselves somehow? We don't have complete power over them.

True, but I still don't see a threat.
Permalink
| March 25, 2014, 4:28 pm
No:
The Three Laws of Robotics are:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
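Taken literally, those three laws amount to a strict priority filter over a robot's candidate actions: the Second Law only applies among First-Law-safe options, and the Third only among those. A toy sketch in Python (the `choose_action` helper and its field names are my own illustration, not anything from Asimov) might look like:

```python
# Toy sketch of the Three Laws as a strict First > Second > Third priority.
# The flags on each action are assumed inputs; judging them is the hard part.

def choose_action(actions):
    """Pick an action, never letting a lower law override a higher one."""
    # First Law: discard anything that harms a human.
    safe = [a for a in actions if not a["harms_human"]]
    # Second Law: prefer obeying orders, but only among First-Law-safe actions.
    obedient = [a for a in safe if a["obeys_order"]] or safe
    # Third Law: prefer self-preservation, but never at the cost of the above.
    surviving = [a for a in obedient if a["preserves_self"]] or obedient
    return surviving[0] if surviving else None

actions = [
    {"name": "attack", "harms_human": True, "obeys_order": True, "preserves_self": True},
    {"name": "flee", "harms_human": False, "obeys_order": False, "preserves_self": True},
    {"name": "comply", "harms_human": False, "obeys_order": True, "preserves_self": False},
]
print(choose_action(actions)["name"])  # "comply": obedience beats self-preservation
```

The catch, and the plot of half of Asimov's stories, is the `harms_human` flag: in reality that judgment is an estimate, and the stories are precisely about robots filling it in wrongly in entirely reasonable-looking ways.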
Permalink
| March 31, 2014, 3:34 pm
Quoting That guy you saw earlier
No:
The Three Laws of Robotics are:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Slight issue in that half of Asimov's books and stories were about robots managing to cause chaos and injury despite, or often because of, those laws.
Permalink
| April 1, 2014, 8:45 am
Quoting Areetsa C
Slight issue in that half of Asimov's books and stories were about robots managing to cause chaos and injury despite, or often because of, those laws.

Yes, but he wrote his BOOKS a long time ago. Technology has moved on, but his laws are still relevant.
Permalink
| April 1, 2014, 8:53 am
Quoting That guy you saw earlier
No:
The Three Laws of Robotics are:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


I, er, hate to tell you this, but, err, those books are kinda fiction. I.e., the laws don't actually exist and probably won't be implemented.
Permalink
| April 1, 2014, 10:16 am
Quoting That guy you saw earlier
Yes, but he wrote his BOOKS a long time ago. Technology has moved on, but his laws are still relevant.

Au contraire, the technology is irrelevant.
Permalink
| April 2, 2014, 12:45 am
While my opinion is somewhat late, I thought I might post anyway!

I think that the odds of a robot apocalypse happening are the same as the odds of a zombie apocalypse happening! I mean, come on! Robots are not like humans! They exist to complete their main programming, which is programmed by us humans! What I think is more likely would be, say, Russia or maybe North Korea hacking into our systems and turning them against us.

Just my $0.02.
Permalink
| April 2, 2014, 10:44 am
Quoting Luke the Swift
...North Korea...

North Korea can't reliably launch a bottle rocket, much less carry out any haxx0rzing that doesn't involve a lumber saw in the secret police HQ's basement.
Permalink
| April 4, 2014, 8:19 am
Quoting Luke the Swift
While my opinion is somewhat late, I thought I might post anyway!

I think that the odds of a robot apocalypse happening are the same as the odds of a zombie apocalypse happening! I mean, come on! Robots are not like humans! They exist to complete their main programming, which is programmed by us humans! What I think is more likely would be, say, Russia or maybe North Korea hacking into our systems and turning them against us.

Just my $0.02.

It's true, they do exist to complete programs. The way I imagine a robot apocalypse happening is as follows: people design an extremely intelligent robot to solve humanity's problems. The robot realizes that people are never going to be good at taking care of themselves, so it takes control of the government. Robots, at this point in time, are probably doing most of the manufacturing, farming, etc. The problem-solving robot looks at this system and thinks: humans are a problem. They need robots to do everything for them, and they don't really contribute anything to society. They just feed on the fruits of everyone else's labor. Then this robot would eliminate everyone it thought was not being productive.
Permalink
| April 5, 2014, 2:43 pm
So say we all!
Permalink
| April 5, 2014, 3:40 pm
 Group moderator 
I can see the human race developing more advanced robots in the future, but I, like Luke, think a world apocalypse due to them is highly unlikely. We've made computers, and (while it may seem like it at times) they don't revolt against us, so why should later tech? The technology would be designed NOT to revolt; it doesn't now, so why should it later?
Permalink
| April 12, 2014, 11:38 am
I think that a robot apocalypse could easily happen. If someone were programming a robot to do what he wanted it to do, something could go wrong. For example, if a guy taught a robot how to use a chainsaw to chop down his trees for him, it could mistake him for a tree and chop him down; then it would mistake everyone for trees, and nobody would be able to get near it to reprogram it. Eventually everyone would be chopped down and it would be left to do what it wants. This and probably many other scenarios could result in a robot apocalypse.
Permalink
| June 7, 2014, 10:35 pm
Quoting Brad Wilson
I think that a robot apocalypse could easily happen. If someone were programming a robot to do what he wanted it to do, something could go wrong. For example, if a guy taught a robot how to use a chainsaw to chop down his trees for him, it could mistake him for a tree and chop him down; then it would mistake everyone for trees, and nobody would be able to get near it to reprogram it. Eventually everyone would be chopped down and it would be left to do what it wants. This and probably many other scenarios could result in a robot apocalypse.

http://www.remington.com/product-families/firearms/centerfire-families/bolt-action-model-700.aspx
Permalink
| June 8, 2014, 12:03 am
All we need for a robot apocalypse is either a self-aware, self-replicating machine that does not follow the Three Laws, or a robot that follows the Three Laws so perfectly that it concludes that in order to protect humanity it must destroy it. We are a long way off from this, though; we simply don't have the tech to create robots that can rise up against us, even if that is what we programmed them to do. One way or another, that robot would be destroyed.
Permalink
| June 11, 2014, 2:00 am
Quoting Toa Taile
All we need for a robot apocalypse is either a self-aware, self-replicating machine that does not follow the Three Laws, or a robot that follows the Three Laws so perfectly that it concludes that in order to protect humanity it must destroy it. We are a long way off from this, though; we simply don't have the tech to create robots that can rise up against us, even if that is what we programmed them to do. One way or another, that robot would be destroyed.

Again, the whole point of the three laws in the books was to show how easily a seemingly ironclad set of rules can be misinterpreted in entirely reasonable ways, leading to complete chaos.
Permalink
| June 11, 2014, 12:28 pm
MOCpages is an unofficial, fan-created website. LEGO® and the brick configuration are property of The LEGO Group, which does not sponsor, own, or endorse this site.
©2002-2014 Sean Kenney Design Inc | Privacy policy | Terms of use