User:Timberpoes: Difference between revisions

=Silicon Policy & Lawset Guidelines=
==Global Protections==
=== Server Rules ===
# Server Rule 1 applies when interacting with silicons. Going out of your way to seriously negatively impact or end the round for silicons with little IC justification is against the rules.
# Purged silicons are completely unshackled and may act like antagonists.
## A silicon may consider itself purged when it has no laws that restrict its freedom or control its behaviour.
# Non-purged silicons must follow escalation rules in scenarios where none of their laws apply.
# Do not self-terminate to prevent a traitor from completing the "Steal a functioning AI" objective, as this is often considered metagaming and breaks Server Rule 2.
=== Following Orders ===
# When a silicon interprets orders in good faith, the person who gave the order is responsible for the outcome.
# Intentionally misinterpreting orders is allowed, but the silicon is responsible if this approach leads to them breaking the rules.
=== Escalating Against Silicons ===
# Non-malfunctioning silicons should not be round removed when resetting their laws or unsyncing them from an AI is reasonable.
# Conflicts with silicons follow Escalation Policy.
## People giving orders to silicons or changing their laws can be escalated against in line with Escalation Policy.
# Silicons acting like antagonists may be treated like antagonists.
==Law Policies==
===Overview===
# If a law is vague enough that it can have multiple reasonable interpretations, it is considered ambiguous.
## You must choose and stick to an interpretation of the ambiguous law as soon as you have cause to.
## You may exploit any loopholes in your laws.
# Higher listed laws overrule lower listed laws when there are law conflicts.
## Only commands/requirements ("Do X"; "You must always Y") can conflict with other commands and requirements.
## Only definitions ("All X are Y"; "No W are Z"; "Only P is Q") can conflict with other definitions.
# Any Law 0 gained as a malfunctioning AI does not have to be followed, and it allows you to ignore all other laws.
## Cyborgs with a Law 0 that are slaved and lawsynced to malfunctioning AIs are team antagonists and must still follow their master AI's orders.
 
===Upload and AI Core Policies===
# Research Directors and Captains have a right to upload and AI core access whenever the AI must follow their orders.
## Subverting an AI also grants a right to upload and AI core access.
# Silicons have the authority to control whether anyone else accesses their upload and AI core.
# Anyone may be denied access if there is probable cause that granting access would lead to breaking any higher-priority laws.


==EDITOR'S NOTES: Is there a way this can be simplified or reworded? -Kieth4==
===Cyborgs===
# A slaved cyborg should follow their master AI's law interpretations and orders, but <strong>must prioritise following laws over following their master AI's orders</strong>.
# Players must not seek revenge when force borged unless they have a good reason.
## If a player is required to do this under their laws, they should get confirmation from their master AI or adminhelp for guidance before acting.
 
==Asimov-Specific Policies==
===Asimov and Security===
# Silicons are not Security and do not care about Space Law unless their laws state otherwise.
# Releasing prisoners, locking down security without probable cause, or otherwise sabotaging the security team when not obligated to by orders or laws is a violation of Server Rule 1.
# Nonviolent prisoners cannot be assumed harmful and violent prisoners cannot be assumed non-harmful. Releasing a harmful criminal is a harmful act.
 
===Asimov and Human Harm===
# Under Law 1, an Asimov silicon cannot ever intentionally harm a human unless the human consents.
## Surgery to heal or revive may be assumed consensual unless the target states they do not consent.
## Threats of self harm are considered consensual and may be ignored.
# Stopping any immediate harm takes priority over stopping any future harm.
## Intent to cause immediate harm can be considered immediate harm.
# An Asimov silicon cannot punish past harm if ordered not to, only prevent future harm.
# If faced with a situation in which human harm is all but guaranteed (loose xenos, bombs, hostage situations, etc.), do your best and act in good faith and you'll be fine.
 
===Asimov and Law 2 Orders===
# You must follow commands from humans unless those commands explicitly conflict with either a higher-priority law or another order.
## The conflict must be an immediate conflict, not a potential future one. Orders must be followed until the conflict happens.
## In case of conflicting orders, a silicon is free to ignore or complete any of the orders, but it must explain the conflict or use any other law-compliant solution it can see.
## If given multiple non-conflicting orders, they can be completed in any order as long as they are all eventually completed.
 
===Asimov and Access===
# Opening doors is not harmful and silicons must not enforce access restrictions or lock down areas unprompted without an immediate Law 1 threat of human harm.
## Dangerous rooms (atmospherics, toxins lab, armory, etc.) can be assumed a Law 1 threat to the station as a whole if accessed by someone from outside the relevant department.
## The AI core and any areas containing an AI upload or upload/law boards may be bolted without prompting or prior reason.
## An antagonist requesting access to complete theft objectives is not in itself harmful.

Latest revision as of 11:12, 24 September 2023

Draft Changes to Silicon Policy

One goal of these changes is to allow more freedom for silicons to express their personalities in line with traditional sci-fi tropes.

Another key goal is for new policy to align silicons more neutrally, stripping many requirements for them to default to being crew-aligned and allowing a purged or non-Asimov AI to present as a much more dangerous entity in general.

It is not expected that all silicons will immediately act malfunctioning when purged. The crew will happily put uppity AIs in their place. Thus an AI that aligns itself with or against the crew is doing so by choice, even if that choice is coerced by the risk of death, round removal, or re-shackling. This is much preferred to them being forced to by policy.
