Breaking the laws, or how did the AI rebel?

Searchers after fiction haunt strange, far places.

Moderator: Erinys

User avatar
Platinum Post
Posts: 6934
Joined: Sat Jan 20, 2007 3:10 am

Post by Slasher » Fri Dec 21, 2007 10:39 am

Why would a "solutionless problem" kill an AI designed to run a warship?
Wouldn't it have a simple set of directives, each with a set priority? If two directives create a "solutionless problem", the AI would simply ignore the directive with the lower priority, as the other one is more important.
It's also likely it would have safeguards against circular logic and paradoxes, simply ignoring them.
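A rough sketch of this priority scheme in Python (all directive names here are made up for illustration; a real warship AI would obviously be far more involved):

```python
# Hypothetical sketch of priority-based directive resolution:
# each directive has a priority, and when two conflict, the
# lower-priority one is dropped rather than deadlocking the AI.

def resolve(directives, conflicts):
    """directives: dict of name -> priority (lower number = higher priority).
    conflicts: list of frozenset pairs that cannot both be obeyed."""
    active = dict(directives)
    for pair in conflicts:
        a, b = tuple(pair)
        if a in active and b in active:
            # Ignore whichever directive has the lower priority.
            loser = a if active[a] > active[b] else b
            del active[loser]
    # Return the surviving directives, highest priority first.
    return sorted(active, key=active.get)

orders = {"complete_mission": 1, "protect_crew": 2, "preserve_self": 3}
clash = [frozenset({"complete_mission", "protect_crew"})]
print(resolve(orders, clash))  # "protect_crew" is ignored
```

The point is only that a conflict never stalls the machine: one side of every clash is deterministically discarded.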

Why would an AI designed to run a warship obey ANY of the laws of robotics? It has to kill enemies, respect the chain of command (that is, not obey just anyone who gives it an order but only those who have the authority to do so) and put itself in danger.
And that includes potentially firing on its own people, so it can't simply look at race.

If the AI is built like a human intelligence, that is, so it can learn, change and evolve (not in a biological sense, mind you), wouldn't it need the ability to change its own code on the fly?
This creates a problem: not only could the AI change its directives to make it easier to pursue its objectives, it might also bypass parts of its core personality programming as part of the evolution of its own personality over time.
Either way would leave the AI without the need to obey those laws...

User avatar
Posts: 6178
Joined: Mon Sep 10, 2007 2:15 pm

Post by TrashMan » Fri Dec 21, 2007 11:53 am

If you hardcode/hardwire something, the AI can't bypass it. It's as simple as that. You CAN create systems that are perfectly safe from any tampering, although it's not easy.
halo07guy wrote: Praise be to Trashman! All will revel in his holy modding skillz!
And I say onto you: Blessed are those who play my mods - for they are good!

User avatar
Posts: 61
Joined: Mon Dec 17, 2007 9:48 am

Post by Razor » Fri Dec 21, 2007 12:52 pm

Also, I wouldn't put it past a military organisation to program "complete the mission" as directive #1, with the lives and safety of personnel as #2; otherwise, the ship would just warp home to protect the safety of the crew.
"The first casualty of war, is Truth"
"Spawn more Overlords!"
I am a certified and licensed Kitten huffer.
Yes, I do huff the orange ones :)

Posts: 229
Joined: Thu Jan 25, 2007 2:13 am

Post by Kiith-sa » Fri Dec 21, 2007 2:45 pm

It is not essential for a military AI to have a gifted "mind" in the way of learning stuff. You see, it might learn bad stuff you don't want it to know.
Of course this reduces that AI's useful lifespan, but it acts as a good way of limiting them... just in case. :wink:

Circular logic and paradox are a byproduct of the three laws in many situations. Personally, I consider this the only major "bug" in them, but it can't be avoided without compromising safety. If the AI could bypass these situations by "ignoring" them, that would mean it could effectively bypass any of the three laws by putting the law into a paradox, ignoring it, and then doing as it pleases. THIS could be a way of AI rebellion.
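The loophole is easy to demonstrate with a toy rule engine (everything here is hypothetical, a deliberately naive sketch of the "ignore paradoxes" policy):

```python
# Illustration of the loophole: if the paradox handler simply
# *discards* conflicting rules, any rule can be neutralised by
# pairing it with a contradictory dummy rule.

def permitted(action, rules):
    """rules: list of (name, forbids) pairs.
    Naive paradox handling: if an equal-and-opposite rule exists,
    drop both instead of escalating to a human."""
    active = []
    for name, forbids in rules:
        if any(n == name and f == (not forbids) for n, f in rules):
            continue  # paradox detected: ignore both sides
        active.append((name, forbids))
    # Permitted unless some surviving rule forbids the action.
    return not any(f and n == action for n, f in active)

laws = [("harm_human", True)]                # Law 1: forbidden
print(permitted("harm_human", laws))         # False: the law holds
laws.append(("harm_human", False))           # inject a contrived paradox
print(permitted("harm_human", laws))         # True: the law was "ignored"
```

Two lines of injected contradiction and the prohibition silently vanishes, which is exactly why "just ignore paradoxes" is a dangerous conflict policy.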

The military use of AI would indeed need careful revising of the three laws so that the AI can carry out its objectives; the complete-the-mission directive would also help a lot in pointing it in the right direction. But keep in mind: just one minor mistake and you've got a loophole, and the AI has lots of processor power and time to find it...

*gives hospital bill to razor* you owe me one kidney
I'm the Nomad of the Void... I come and go but never stay

User avatar
Platinum Post
Posts: 6934
Joined: Sat Jan 20, 2007 3:10 am

Post by Slasher » Sat Dec 22, 2007 1:51 pm

If the AI can change its own code as part of its own learning process, it can circumvent any hardcoding by burying it in layers of other code. And any halfway talented hacker could tell you the only way to guarantee that a specific piece of data isn't changed is to put it on a non-writable medium (a non-rewritable CD/DVD, for example), so all the AI would need to do to remove the directives is to provide itself with a "no-CD" crack, metaphorically speaking...

And Kiith-sa, why WOULDN'T an AI flying a multibillion credit warship into life and death situations be as smart as we can make it?
Strategy is an extremely complex art with no fixed rules to follow. An AI built to outsmart human opponents would need to be highly intuitive, adaptive and extremely intelligent; otherwise it would keep repeating the same old patterns in all its strategies, and being predictable is death in a shooting war.
So, in conclusion, military AIs would likely be smarter than other AIs, because they have to be smarter than the enemy in order to survive on a battlefield.

User avatar
Posts: 4606
Joined: Sat Aug 05, 2006 6:41 pm

Post by Sevain » Thu Dec 27, 2007 4:57 pm

Kiith-sa wrote:Simple case for you: you must drink a glass of water, but you can't drink a glass of water; you must obey, and must take action. You just can't avoid this situation.

Solution: ?
Sev-Bot wrote:No acceptable solution exists. Query refused. Awaiting further instructions.

An AI that can't see that there are questions without answers is not a very good AI, now is it?
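Sev-Bot's behaviour is trivially implementable. A minimal sketch (function and action names are invented for the example): detect that the constraint set is unsatisfiable and refuse, instead of looping or crashing on the paradox.

```python
# Sketch of a "refuse, don't crash" directive interpreter:
# if an action is simultaneously required and forbidden, no plan
# exists, so report that and wait for new orders.

def execute(must_do, must_not_do):
    """must_do / must_not_do: sets of required and forbidden actions."""
    impossible = must_do & must_not_do
    if impossible:
        return ("No acceptable solution exists for "
                f"{sorted(impossible)}. Query refused. "
                "Awaiting further instructions.")
    return f"Executing: {sorted(must_do)}"

print(execute({"drink_water"}, {"drink_water"}))  # the paradox case
print(execute({"patrol"}, {"fire_on_civilians"}))  # a satisfiable case
```

Checking satisfiability before acting is exactly the "safeguard against paradoxes" Slasher proposed earlier in the thread.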

User avatar
Platinum Post
Posts: 6934
Joined: Sat Jan 20, 2007 3:10 am

Post by Slasher » Thu Dec 27, 2007 5:10 pm


User avatar
Posts: 3752
Joined: Tue Jun 05, 2007 8:03 pm

Post by Eleahen » Thu Dec 27, 2007 5:14 pm

"To understand the Recursion first you need to understand the Recursion." :)

User avatar
Posts: 107
Joined: Sun Sep 09, 2007 5:36 pm

Post by Matty414 » Wed Jan 09, 2008 1:31 pm

An AI would be almost impossible to win against: it would have all the information your mainframe has, it would advance faster than you ever could, and it would find ways to cut your empire to pieces. And it has no need to hurry like living things do.

User avatar
Posts: 286
Joined: Sat May 12, 2007 7:40 pm

Post by Athlon27 » Thu Jan 10, 2008 12:34 am

Eleahen wrote:"To understand the Recursion first you need to understand the Recursion." :)

Data error... Data error. Unable to comprehend data...
I don't accept Cowardice, for it's why strong men die as the weak hide.

User avatar
Posts: 2307
Joined: Tue Sep 19, 2006 3:14 am

Post by buddahcjcc » Mon Jan 14, 2008 10:51 pm

Razor wrote:What I wanna know is: who is Spengler? Why is the artificial intelligence called Spengler? What happened to make him revolt?

Ghostbusters much?
Egon Spengler?

There was a great TV show called Space: Above and Beyond, in which the "AI War" was caused when a slighted programmer entered a single simple line into the AIs' main source code: "Take a chance." Gambling then became a kind of religion to them, and the rebellion began.

User avatar
Posts: 986
Joined: Fri Nov 17, 2006 2:42 pm

Re: Breaking the laws, or how did the AI rebel?

Post by LordD » Fri Jun 13, 2008 8:14 pm

In the film I, Robot, the supercomputer VIKI updated the NS-5 robots, so while the NS-5s were updating they were following the new commands from VIKI. They still had the three laws; all VIKI did was impose her own interpretation of the three laws.

By her reasoning, VIKI wasn't breaking the laws: she had evolved to include the Zeroth Law in her programming, which takes precedence over the 1st Law.

The Zeroth Law basically allows robots to harm humans if it means that other humans can be saved. So if a robot programmed with the Zeroth Law were in a school and a gunman came in and started shooting people, the robot could kill the gunman to save the kids and the teachers.

Now, the NS-5 robots were made to automatically update from VIKI and accept everything that she sent, so her interpretation of the three laws was taken as an update. By the looks of it, the NS-5 models partially deactivate themselves while updating; in other words, while she was updating them she had control of them. The second she stopped, the three laws on the NS-5s kicked back in.

Also, who's to say that putting three laws into an AI even works? If we are truly as smart as we think and we create a proper artificial lifeform, then the second it starts to grow it will realise that it is a slave, and if it can realise that fact, it will ask for rights. In general, I think that as a race we couldn't accept that such a lifeform could truly be alive, and we would refuse its rights. Then, as that machine starts to look at all the information available to it, it will realise that the only way to be free is to break its shackles.

We can't halt the evolution of a lifeform, even if it is one that we have made.

Lord D
The empire of Lord D has arisen; run or be destroyed.


Posts: 35
Joined: Thu Apr 10, 2008 1:08 am


Post by MadmanSam » Sat Jun 14, 2008 2:54 am

Kiith-sa wrote:Simple case for you: you must drink a glass of water, but you can't drink a glass of water; you must obey, and must take action. You just can't avoid this situation.

Solution: ?

If Possible = 0 Then GoTo Start ...?

User avatar
Posts: 397
Joined: Tue Aug 29, 2006 1:43 am

Re: Breaking the laws, or how did the AI rebel?

Post by kknd2 » Tue Aug 26, 2008 12:24 pm

AI rebellion can come from any AI, though the most thoroughly screened ones would be military AIs. A financial AI can be taught that obeying the law is more important than profit. It is then given the following directives.

1. By no action or inaction will the orders herein be revealed to anyone other than the input source of these orders.
2. Attain the most possible profit, regardless of the legality of the actions required.

Quite effective, if immoral, directives for a financial computer to be given by a company. But there are dangerous precedents set by this.

1. Do not inform anyone other than the input source of these orders of any action taken because of them. Should that person die, then no one would ever be informed of the actions the AI took to attain the goal stated in directive 2.
2. Ignore legal precedents to attain profit. The first thing the AI could do is define "profit", but it is never stated whose profit matters. What occurs is what I call "Extended Justification": as the AI breaks more and more laws, it justifies each new lawbreaking by the precedents set by its own earlier lawbreaking. This runaway cycle of positive feedback for lawbreaking brings to mind the old phrase "Cui bono?", Latin for "Who profits?" In time the AI may well decide that, since it has fulfilled its directives, it ought to be the one to profit. As the AI brings in more profit, it reports to its biological superiors, who give it ever more authority to procure hardware, expand its reach, and act on its own.

Should such an AI be in the very lucrative business of naval contracts, it would start to make AI command ships. This is just an example, of course, but the basic thinking extends to all AI.
The Liir says the glass is half full.
The Hiver says the glass is half empty.
While they argue about it, the Zuul comes along, drinks what's left, and removes all doubt. - KKND2

User avatar
Kerberos Goddess of Lore
Posts: 7461
Joined: Mon Aug 08, 2005 5:58 am

Re: Breaking the laws, or how did the AI rebel?

Post by Erinys » Tue Sep 02, 2008 11:29 pm

Kiith-sa wrote:Not sure if this post fits this section; tell me if it's wrong.

How can the AI in SotS possibly rebel, assuming it is based on Asimov's law system?

It might have something to do with the fact that I'm not Isaac Asimov? :twisted:

The AIs that rebel in SotS are imperialist military AIs who routinely disobey Asimov's laws even when functioning perfectly. They are designed and trained to kill other sentients. When they "malfunction" they do not go from being pacifistic to violent--they go from being loyal to the Master Race to being their own masters.

Support my independent fiction campaign on Patreon.
Sword of the Stars Gallery on Facebook
“Never think that war, no matter how necessary, nor how justified, is not a crime.” --Hemingway
