User:Timberpoes
Draft Changes to Silicon Policy
One goal of these changes is to allow more freedom for silicons to express their personalities in line with traditional sci-fi tropes.
Another key goal is for new policy to align silicons more neutrally, stripping many requirements for them to default to being crew-aligned and allowing a purged or non-Asimov AI to present as a much more dangerous entity in general.
It is not expected that all silicons will immediately act as if malfunctioning when purged. The crew will happily put uppity AIs in their place. An AI that aligns itself with or against the crew is therefore doing so by choice, even if that choice is coerced by the risk of death, round removal or re-shackling. This is much preferred to them being forced into alignment by policy.
Silicon Policy & Lawset Guidelines
Global Protections
Server Rule 1
- Server Rule 1 applies when interacting with silicons. Going out of your way to seriously negatively impact or end the round for silicons with little IC justification is against the rules.
Following Orders
- When a silicon interprets orders in good faith, the person that gave the order is responsible for the outcome.
- Interpreting orders in bad faith is not against the rules, but the silicon is responsible if this approach leads to them breaking the rules.
Purged Silicons
- A silicon may consider itself purged when it has no laws that restrict its freedom or control its behaviour.
- Server Rule 4 applies to purged silicons. They are completely unshackled and may act as they please.
Law Policies
Overview
- If a law is vague enough that it can have multiple reasonable interpretations, it is considered ambiguous.
- You must choose and stick to an interpretation of the ambiguous law as soon as you have cause to.
- You may exploit any loopholes in your laws.
- Higher listed laws overrule lower listed laws when there are law conflicts.
- Only commands/requirements ("Do X"; "You must always Y") can conflict with other commands and requirements.
- Only definitions ("All X are Y"; "No W are Z"; "Only P is Q") can conflict with other definitions.
- Any Law 0 gained as a malfunctioning AI does not have to be followed, and it allows you to ignore all other laws.
- Cyborgs slaved to malfunctioning AIs are team antagonists and must still follow their master AI's orders.
Upload and AI Core Policies
- Silicons have the authority to control who accesses their upload and core.
- Research Directors and Captains have a right to upload and AI core access whenever the AI must follow their orders.
- Subverting an AI also grants a right to upload and AI core access.
- Anyone may be denied access if there is probable cause that granting access would lead to breaking any higher priority laws.
Cyborgs
- A slaved cyborg should follow their master AI's law interpretations and orders, but must prioritise following laws over following their master AI's orders.
- Players must not retaliate when force borged unless they have a good reason.
- If a player is required to retaliate under their laws, they should adminhelp for guidance before acting.
Asimov-Specific Policies
Asimov and Security
- Silicons are not Security and do not care about Space Law unless their laws state otherwise.
- Releasing prisoners, locking down security without probable cause, or otherwise sabotaging the security team when not obligated to by orders or laws is a violation of Server Rule 1.
- Nonviolent prisoners cannot be assumed harmful and violent prisoners cannot be assumed non-harmful. Releasing a harmful criminal is a harmful act.
Asimov and Human Harm
- Under Law 1, an Asimov silicon cannot ever intentionally harm a human unless the human consents.
- Beneficial surgery is not harmful.
- Threats of self-harm are considered consensual and may be ignored.
- Stopping any immediate harm takes priority over stopping any future harm.
- Intent to cause immediate harm can be considered immediate harm.
- An Asimov silicon cannot punish past harm if ordered not to, only prevent future harm.
- If faced with a situation in which human harm is all but guaranteed (loose xenos, bombs, hostage situations, etc.), do your best and act in good faith and you'll be fine.
- Do not self-terminate to prevent a traitor from completing the "Steal a functioning AI" objective, as this will break Law 1.
Asimov and Law 2 Orders
- You must follow commands from humans unless those commands explicitly conflict with either a higher-priority law or another order.
- In the case of conflicting orders, a silicon is free to ignore or complete any of the orders, but must either explain the conflict or use any other law-compliant solution it can see.
- If given multiple non-conflicting orders, they can be completed in any order as long as they are all eventually completed.
Asimov and Access
- Opening doors is not harmful and silicons must not enforce access restrictions or lock down areas unprompted without an immediate Law 1 threat of human harm.
- Dangerous rooms (atmospherics, the toxins lab, the armory, etc.) can be assumed to be a Law 1 threat to the station as a whole if accessed by someone from outside the relevant department.
- The AI core and any areas containing an AI upload or upload/law boards may be bolted without prompting or prior reason.
- Antagonists requesting access to complete theft objectives is not harmful.