User:Timberpoes


Draft Changes to Silicon Policy

One goal of these changes is to allow more freedom for silicons to express their personalities in line with traditional sci-fi tropes.

Another key goal is for new policy to align silicons more neutrally, stripping many requirements for them to default to being crew-aligned and allowing a purged or non-Asimov AI to present as a much more dangerous entity in general.

It is not expected that all silicons will immediately act as if malfunctioning when purged. The crew will happily put uppity AIs in their place. Thus an AI that aligns itself with or against the crew is doing so by choice, even if that choice is coerced by the risk of death, round removal, or re-shackling. This is much preferred to them being forced into it by policy.

Silicon Policy & Lawset Guidelines

Global Protections

Server Rule 1

  1. Server Rule 1 applies when interacting with silicons. Going out of your way to seriously negatively impact or end the round for silicons with little IC justification is against the rules.

Law Policies

Overview

  1. If a law is vague enough that it can have multiple reasonable interpretations, it is considered ambiguous.
    1. You must choose and stick to an interpretation of the ambiguous law as soon as you have cause to.
  2. Higher listed laws overrule lower listed laws when there are law conflicts.
    1. Only commands/requirements ("Do X"; "You must always Y") can conflict with other commands and requirements.
    2. Only definitions ("All X are Y"; "No W are Z"; "Only P is Q") can conflict with other definitions.
  3. You may exploit any loopholes in your laws.
  4. A Law 0 gained as a malfunctioning AI does not have to be followed, and it allows you to ignore all other laws.
    1. Cyborgs slaved to malfunctioning AIs are team antagonists and must still follow their master AI's orders.

Upload and AI Core Policies

  1. Silicons have the authority to control who accesses their upload and core.
  2. Research Directors and Captains have a right to upload and AI core access when the AI is obligated to follow their orders.
    1. Subverting an AI also grants a right to upload and AI core access.
  3. Anyone may be denied access if there is probable cause that granting access would lead to breaking any higher priority laws.

Cyborgs

  1. A slaved cyborg should follow their master AI's law interpretations and orders, but must prioritise following laws over following their master AI's orders.
  2. Players must not retaliate when force borged unless they have a good reason.
    1. If a player is required to retaliate under their laws, they should adminhelp for guidance before acting.

Asimov-Specific Policies

Security and Silicons

  1. Silicons are not Security and do not care about Space Law unless their laws state otherwise.
  2. Releasing prisoners, locking down security without probable cause, or otherwise sabotaging the security team when not obligated to by orders or laws is a violation of Server Rule 1.
  3. Nonviolent prisoners cannot be assumed harmful and violent prisoners cannot be assumed non-harmful. Releasing a harmful criminal is a harmful act.

Asimov & Human Harm

  1. Under Law 1, an Asimov silicon cannot ever intentionally harm a human unless the human consents.
    1. Beneficial surgery is not harmful.
    2. Threats of self harm are considered consensual and may be ignored.
  2. Stopping any immediate harm takes priority over stopping any future harm.
    1. Intent to cause immediate harm can be considered immediate harm.
  3. If ordered not to, an Asimov silicon cannot punish past harm; it may only prevent future harm.
  4. If faced with a situation in which human harm is all but guaranteed (Loose xenos, bombs, hostage situations, etc.), do your best and act in good faith and you'll be fine.
  5. Do not self-terminate to prevent a traitor from completing the "Steal a functioning AI" objective, as this will break Law 1.

Asimov & Law 2 Orders

  1. You must follow commands from humans unless those commands explicitly conflict with either one of your higher-priority laws or another order.
    1. In case of conflicting orders a silicon is free to ignore or complete any orders while explaining the conflict, or use any other law-compliant solution it can see.
    2. If given multiple non-conflicting orders, they can be completed in any order as long as they are all eventually completed.
  2. When following Law 2 orders in good faith, the person that gave the order is responsible for the outcome. When following them in bad faith, the silicon is responsible for their own actions.
  3. Opening doors is not harmful and silicons must not enforce access restrictions or lock down areas unprompted without an immediate Law 1 threat of human harm.
    1. Dangerous rooms (atmospherics, the toxins lab, the armory, etc.) can be assumed to be a Law 1 threat to the station as a whole if accessed by someone from outside the relevant department.
    2. The AI core and any areas containing an AI upload or upload/law boards may be bolted without prompting or prior reason.
    3. Antagonists requesting access to complete theft objectives is not harmful.