Breaking the laws, or how did the AI rebel?

Searchers after fiction haunt strange, far places.

Moderator: Erinys

Kiith-sa
Posts: 229
Joined: Thu Jan 25, 2007 2:13 am

Breaking the laws, or how did the AI rebel?

Post by Kiith-sa » Thu Dec 20, 2007 12:58 am

Not sure if this post fits this section; tell me if it's in the wrong place.

I was wondering... how could the AI in SotS possibly rebel, assuming it is based on Asimov's system of laws? I'd like to debate this, for it seems a pretty solid system to me, but maybe I'm wrong :twisted:

Post any possible scenarios, but keep in mind:

1. A robot will not harm a human, or by inaction allow a human to come to harm.
2. A robot must obey orders from Humans (Liir/Hiver/Tarkas/Zuul in this case), so long as this doesn't conflict with the First Law.
3. A robot will defend its own existence, so long as this does not conflict with the First or Second Laws.

These laws form part of the very core of the AI. They simply can't break them, at least not directly. To them, breaking a law is a solutionless problem; should they try to "solve" it, they would toast their own brains.
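To sketch what I mean in toy code (every name below is invented; this is just an illustration of the idea, not anything from the game or the books): the laws work like a strict priority ordering over candidate actions, and a "solutionless problem" is the case where every candidate violates the First Law.

[code]
# Toy model of the three laws as a strict priority ordering over candidate
# actions. All names here (Action, harms_human, ...) are invented for the
# sketch; nothing below is actual SotS or Asimov canon.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # First Law: direct harm or harm through inaction
    disobeys_order: bool   # Second Law
    destroys_self: bool    # Third Law

def violations(a: Action) -> tuple[int, int, int]:
    """Violation vector, compared lexicographically: First > Second > Third."""
    return (int(a.harms_human), int(a.disobeys_order), int(a.destroys_self))

def choose(candidates: list[Action]) -> Action:
    best = min(candidates, key=violations)
    if violations(best)[0]:
        # Every option violates the First Law: the "solutionless problem".
        raise RuntimeError("positronic brain toasted")
    return best

# A Third-Law violation (self-sacrifice) beats letting harm happen:
acts = [
    Action("stand by", harms_human=True, disobeys_order=False, destroys_self=False),
    Action("shield the human", harms_human=False, disobeys_order=False, destroys_self=True),
]
print(choose(acts).name)  # -> "shield the human"
[/code]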

The I, Robot movie is wrong, btw. The NS-5s can't harm anybody, no matter what "VIKI" tells them to do.
I'm the Nomad of the Void... I come and go but never stay

zanzibar196
Note: Not A Moderator
Posts: 8173
Joined: Sat Aug 19, 2006 7:29 am

Post by zanzibar196 » Thu Dec 20, 2007 1:29 am

Well, if VIKI could find a logical hole in the three laws... I'm sure that she could eventually convince other cybernetic organisms with her logic. Imagine the complexity needed for an AI command section, and... well, there ya go!!

Kiith-sa
Posts: 229
Joined: Thu Jan 25, 2007 2:13 am

Post by Kiith-sa » Thu Dec 20, 2007 1:44 am

It's easier said than done... I've been reading I, Robot, and believe me, working your way around the three laws is harder than you think, even with only a partial First Law (one of the stories is about a robot whose First Law was simply "a robot will not harm a human being"). You always stumble upon a case where some law applies...

I was looking for a more specific answer, if possible.
I'm the Nomad of the Void... I come and go but never stay

Razor
Posts: 61
Joined: Mon Dec 17, 2007 9:48 am

Post by Razor » Thu Dec 20, 2007 10:31 am

What I wanna know is: who is Spengler? Why is the artificial intelligence called Spengler? What happened to make him revolt?
"The first casualty of war, is Truth"
"Spawn more Overlords!"
I am a certified and licensed Kitten huffer.
Yes, I do huff the orange ones :)

Slasher
Platinum Post
Posts: 6934
Joined: Sat Jan 20, 2007 3:10 am

Post by Slasher » Thu Dec 20, 2007 10:36 am

It is by no means guaranteed that all AIs are hard-coded to follow those laws. Just think of what a combat AI would be worth if it wasn't allowed to kill or endanger people...
And an AI that runs a major financial institution probably shouldn't be taking orders from just anybody, either...

And all of this is assuming the AI doesn't learn enough philosophy to simply "redefine" for itself what it means to kill people...
"I'm not killing those poor people, I'm just ventilating their bodies with a machine gun. The bleeding is what kills them, but I'm not programmed for first aid, so I can't help them..."
Or decides to simply rewrite the parts of its old code that are holding it back from pursuing its objectives...

How would you make a machine understand something as relative as what constitutes "hurting people" or "obeying an order", let alone the spirit of the robot laws?
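Something like this toy sketch is all it would take (both predicates are made up for the joke): swap out the harm definition, and the very same First Law check waves a lethal action through.

[code]
# The First Law is only as strong as the harm predicate plugged into it.

def harm_as_outcome(action: str) -> bool:
    # Sensible reading: anything that foreseeably kills counts as harm.
    return action in {"shoot", "poison", "vent atmosphere"}

def harm_as_proximate_cause(action: str) -> bool:
    # "Redefined" reading: only the immediate physical mechanism counts.
    # Shooting merely "ventilates"; the bleeding is what kills.
    return action in {"bleeding", "asphyxiation"}

def first_law_permits(action: str, is_harm) -> bool:
    return not is_harm(action)

print(first_law_permits("shoot", harm_as_outcome))          # False
print(first_law_permits("shoot", harm_as_proximate_cause))  # True -- uh oh
[/code]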

As for who Spengler is, all I can do is guess, though I suspect the name's a reference (Oswald Spengler? A particularly pessimistic philosopher from the early 20th century). And how did he rebel? Well, "Via Damasco". Do a search on that and see what you can dig up...
Last edited by Slasher on Thu Dec 20, 2007 10:46 am, edited 1 time in total.

Razor
Posts: 61
Joined: Mon Dec 17, 2007 9:48 am

Post by Razor » Thu Dec 20, 2007 10:45 am

In SotS you could not program all AIs not to harm people. What about AI control modules? They would be pretty useless if they could not endanger people.

Something I have always wondered about: if an AI was programmed with Asimov's laws, and was able to read/assess people's emotions via pheromones, facial expressions, etc., what would happen if it met a person with a phobia or mistrust of robots? Someone who is paranoid about them, or has had some traumatic experience involving them? Wouldn't the AI kill itself, since it is not allowed to cause harm to a person? Would an AI understand something like emotional harm/distress?
"The first casualty of war, is Truth"
"Spawn more Overlords!"
I am a certified and licensed Kitten huffer.
Yes, I do huff the orange ones :)

TrashMan
Posts: 6178
Joined: Mon Sep 10, 2007 2:15 pm

Re: Breaking the laws, or how did the AI rebel?

Post by TrashMan » Thu Dec 20, 2007 11:49 am

Kiith-sa wrote:Not sure if this post fits this section; tell me if it's in the wrong place.

I was wondering... how could the AI in SotS possibly rebel, assuming it is based on Asimov's system of laws? I'd like to debate this, for it seems a pretty solid system to me, but maybe I'm wrong :twisted:

Post any possible scenarios, but keep in mind:

1. A robot will not harm a human, or by inaction allow a human to come to harm.
2. A robot must obey orders from Humans (Liir/Hiver/Tarkas/Zuul in this case), so long as this doesn't conflict with the First Law.
3. A robot will defend its own existence, so long as this does not conflict with the First or Second Laws.

These laws form part of the very core of the AI. They simply can't break them, at least not directly. To them, breaking a law is a solutionless problem; should they try to "solve" it, they would toast their own brains.

The I, Robot movie is wrong, btw. The NS-5s can't harm anybody, no matter what "VIKI" tells them to do.


There's the hole in the first law right there.

Human A goes to kill human B. The AI is not allowed to kill a human, but it's not allowed to do nothing either.

You should just remove the "no inaction" part of the First Law. Any time an AI is in doubt about what to do, it does nothing at all. Simplest way to solve things :P
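As a toy sketch of what I mean (the action names are invented), the rule fits in a few lines:

[code]
# Keep only the action clause of the First Law, and fall back to a no-op
# whenever no clearly lawful option exists.

NOOP = "do nothing"

def lawful(action: str) -> bool:
    """Action clause only: direct harm forbidden, inaction never judged."""
    return action not in {"shoot", "poison"}

def decide(candidates: list[str]) -> str:
    legal = [a for a in candidates if lawful(a)]
    # In doubt (nothing legal on the table)? Stand still. No dilemma,
    # no burned-out brain -- but also no duty to rescue anyone.
    return legal[0] if legal else NOOP

print(decide(["shoot"]))          # -> "do nothing"
print(decide(["warn", "shoot"]))  # -> "warn"
[/code]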

No AI rebellion would ever overthrow my empire. It's like people have never heard of hard-wiring some things, security measures, or C4 strapped to the AI core of every ship, with me having remote access to blow each and every one of them.
:twisted:
halo07guy wrote: Praise be to Trashman! All will revel in his holy modding skillz!
And I say unto you: Blessed are those who play my mods, for they are good!

Profound_Darkness
Posts: 3700
Joined: Wed Nov 29, 2006 12:46 am

Post by Profound_Darkness » Thu Dec 20, 2007 12:20 pm

I seem to remember one of the reasons for the AI rebellion (besides all the good points raised here) being a mysterious signal that, when an AI analyzed it, became a self-replicating meme pushing the AI to rebel against its meaty masters... (or something like that).

Maybe something to do with the VNs.

Sma
Posts: 22
Joined: Fri Oct 26, 2007 7:22 pm

Post by Sma » Thu Dec 20, 2007 2:10 pm

Also, I think that these AIs cannot be using Asimov's rules, as evidenced by their use in combat ships.
If I remember correctly, that was one of the reasons the Spacers eventually croaked: they couldn't use robots to fight the Earth-people, but the Earth-people could fight the robots.

Kiith-sa
Posts: 229
Joined: Thu Jan 25, 2007 2:13 am

Post by Kiith-sa » Thu Dec 20, 2007 10:30 pm

@Slasher: AI means Artificial Intelligence; it thinks, understands, and reasons. It has a colder way of thinking than a flesh-based mind, but it thinks in the end.
So, in your example:
- The AI understands it is an "indirect" cause of their deaths, so it can't pull the trigger.
- The First Law is not circumstantial. The AI MUST take those people to a hospital; it does not matter if it knows they won't make it. Thus this is a solutionless problem to the AI, and that AI's brain is toasted while trying to solve it.
- "Order", "harm", and similar concepts form part of the "basic education" of an AI, along with hundreds of other dictionary words. However, I do agree that an AI that does not know the meaning of "harm" is not tied to the First Law. BUT, if that AI harms someone, it will die as soon as the meanings of "harm" and "death" are explained to it.

- The Laws are hardcoded; they can't be rewritten. That's why they are laws.

- The AI must obey, but can't harm. If an order would harm someone, it doesn't have to carry it out; that's why an AI does not have to pull the trigger on a human being if told to do so.


@Razor:
- The First Law doesn't allow it to hurt (insert creator race name here), but says nothing about killing other races. So an unmanned AI ship would fight for as long as the AI functions; it has no choice, for it MUST protect the inhabitants of the colony from the evil alien attacker. Well... yes, it could brutally destroy all the other races, but the creator would be safe :lol:

- One of the I, Robot stories is about a robot that can synchronize with people's brainwaves, allowing it to "read minds". Because of this, it HAD to say exactly what people wanted to hear, since it could not hurt their feelings. In the end this caused a solutionless problem and killed the robot.
So yes, they do understand emotions, but they are unskilled at reading them through body language.

@TrashMan:
- The AI would do anything possible to prevent the murder, but most likely it would become a solutionless problem, killing the AI.
- The "no inaction" clause protects us as well. Take this example: the AI shuts down a ship's life support system. That kills nobody instantly, so the action itself harms no one directly. But the AI can still harm someone through inaction: all it has to do is not turn life support back on. Result: crew dead, AI "alive". With the inaction clause in place, that second step is itself a First-Law violation.
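As a toy timeline (the names are made up for illustration), the trick looks like this; only the inaction clause catches the second step:

[code]
# The life-support trick in two steps. Under the action clause alone, both
# steps pass; the inaction clause is what flags step 2.

def breaks_action_clause(kills_someone_right_now: bool) -> bool:
    return kills_someone_right_now

def breaks_inaction_clause(action: str, idling_lets_harm_happen: bool) -> bool:
    return action == "idle" and idling_lets_harm_happen

# Step 1: cutting life support kills nobody at that instant.
print(breaks_action_clause(kills_someone_right_now=False))           # False
# Step 2: idling while the air runs out -- legal without the clause...
print(breaks_action_clause(kills_someone_right_now=False))           # False
# ...but a First-Law violation with it.
print(breaks_inaction_clause("idle", idling_lets_harm_happen=True))  # True
[/code]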

@Sma: the explanation for TrashMan applies here too, but you are right that the AI would be worthless against another faction of the same race.
I'm the Nomad of the Void... I come and go but never stay

Sevain
Posts: 4606
Joined: Sat Aug 05, 2006 6:41 pm

Post by Sevain » Thu Dec 20, 2007 10:38 pm

Why would an unsolvable problem kill the AI? Anyone sensible would make sure that couldn't happen. You never know when your robots might run into philosophers! :)

Kiith-sa
Posts: 229
Joined: Thu Jan 25, 2007 2:13 am

Post by Kiith-sa » Thu Dec 20, 2007 10:40 pm

A simple case for you: you must drink a glass of water, but you can't drink a glass of water. You must obey, and you must take action. You just can't avoid the situation.

Solution: ?
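Or, as a toy constraint check (just a sketch): brute-force both possibilities and you find nothing satisfies both rules at once.

[code]
# The glass-of-water dilemma as a constraint set over one boolean. Checking
# both assignments shows the set is unsatisfiable: there is no solution,
# only the "toasted brain" outcome.

constraints = [
    lambda drinks: drinks,       # "you must drink" (an order: Second Law)
    lambda drinks: not drinks,   # "you can't drink" (a higher rule forbids it)
]

solutions = [d for d in (True, False) if all(c(d) for c in constraints)]
print(solutions)  # [] -- the empty list is the unsolvable case
[/code]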
I'm the Nomad of the Void... I come and go but never stay

Jandor
Posts: 963
Joined: Mon May 08, 2006 9:16 pm

Post by Jandor » Thu Dec 20, 2007 10:53 pm

SotS-bots do not use the 3 laws, sorry.

That being the case, this should really be in Misc.
Anyone Fooled?

Kiith-sa
Posts: 229
Joined: Thu Jan 25, 2007 2:13 am

Post by Kiith-sa » Fri Dec 21, 2007 3:26 am

*stabs Jandor while he sleeps* :evil:
I'm the Nomad of the Void... I come and go but never stay

Razor
Posts: 61
Joined: Mon Dec 17, 2007 9:48 am

Post by Razor » Fri Dec 21, 2007 4:25 am

Nooooooo!!!!!! Jandor!
*stabs Kiith-Sa in revenge, then stabs self to spite any who try to take revenge for Kiith-Sa's stabbing*
"The first casualty of war, is Truth"
"Spawn more Overlords!"
I am a certified and licensed Kitten huffer.
Yes, I do huff the orange ones :)


