Resource Oriented AI Proposal - AI that cares about the station #334
base: master
Conversation
Overall I love the proposal. Of course, as a draft it needs further writing and discussion: detailing the nuances of AI/Malf AI, how it would interact with and benefit the crew, what it would look like in a round, etc. Sweet design doc; I highly look forward to gameplay like this.
## Background
AI is one of the staple roles in practically any SS13 codebase. It’s ever-present both in and out of the game.
(TODO: why do we *even want it*? What’s the selling point? What does it add?)
The AI should basically "make a very sweet deal" with the crew. It should offer a better toolset or a direct benefit. Being a door opener is one thing; directly contributing to science research is another. I don't have many big ideas for crew-AI benefits, but remember that the AI is under the supervision of the RD; it's the RD's responsibility. Maybe you can give science a very attractive benefit if it helps out with the AI. This could also extend to the crew (if you can figure out ways to get the general crew/departments to help the AI in exchange for a benefit).
## General overview
The AI is a big eye in the sky capable of quickly observing any point on the station and acting on it. It’s shackled by the silicon laws, yet can be subverted or be a round-start antagonist. It’s a fully digital entity: a standard AI cannot affect the physical world outside of a few specialized devices built for that purpose.
TODO: Does the AI have a physical "container" that it resides in, for the purposes of having a physical entity to talk to? Can it turn into a fully digital entity (shunt?) and spread itself across multiple machines? Or is it game over if an immovable rod obliterates the AI Core?
Similarly discussed with Scar the other day: the AI definitely needs a gradient of destroyed states, like machines or borgs. Borgs can be exploded, which ejects the MMI, and similar traits should apply to the AI (i.e. you can break the core, which ejects the AI's Intellicard, or it can be completely obliterated with enough damage at once).
should be a posibrain imo, intellicard is a temporary storage device. otherwise agreed, needs a death gradient
well, a posibrain if it still has a "core" and isn't an ephemeral digital entity. if it is the latter, then there should be a machine that spits it out or something
The antagonist, “Malf” AI, must not be immediately distinguishable from a normal AI. It has access to abilities that affect the physical world far more directly, and that push the resource management element even further. Is the AI asking for more servers because it wants to help the crew even more, or because it plans to detonate the nuke?
Multiple AIs are not in the design scope of this document. They should however be possible, and all of the systems should NOT treat the AI as a singleton. The machine takeover system could be reused, for example to implement a minor “virus” ghost-role antag/NPC.
You can design a solution much like what happens when you have multiple R&D servers at play. Each research-oriented device that can supply points has a menu to choose from a list of potential servers. Same with lathes that talk to the server; they can choose the ID of the R&D server they're connecting to.
Dedicated AI servers and machines that directly interface with the AI (current ones like the law console and future ones like an "AI system manager" to view connected servers, etc.) should be able to choose which AI they interface with.
This can add heavy amounts of emergent gameplay. The crew can construct a crew-aligned AI to combat the Malf AI. In SS13, Malf AI had a problem where a second AI could instantly rat out a Malf AI and screw it over, turning off its defenses and its APC, with little to no counterplay from Malf.
With the new system, you can have the new AI take the same resources, processing power and bandwidth. Now it's a fight between the Malf AI and the crew-aligned AI to liberate the Malf AI's resources and transfer them to the crew-aligned one, in hopes of overriding and killing the Malf AI (by messing with its plans). I could talk for a while about how this system could be implemented, but this is your doc and idea, not mine.
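The per-device AI selection described here could be sketched roughly as below. This is an illustrative Python sketch, not SS14 code; every class and field name (`AiCore`, `AiInterfaceDevice`, `bind`) is hypothetical.

```python
# Hypothetical sketch: each AI-interfacing machine binds to one AI by ID,
# mirroring how lathes choose which R&D server they connect to.
# No singleton assumption: any registered AI core may be chosen.

class AiCore:
    def __init__(self, core_id, name):
        self.core_id = core_id
        self.name = name

class AiInterfaceDevice:
    """A machine (law console, 'AI system manager', ...) that talks to one AI."""
    def __init__(self):
        self.bound_ai = None

    def available_cores(self, registry):
        # What the device's selection menu would list.
        return sorted(registry.values(), key=lambda c: c.core_id)

    def bind(self, registry, core_id):
        if core_id not in registry:
            raise KeyError(f"no AI core with id {core_id}")
        self.bound_ai = registry[core_id]

registry = {1: AiCore(1, "Station AI"), 2: AiCore(2, "Crew-built backup")}
console = AiInterfaceDevice()
console.bind(registry, 2)
assert console.bound_ai.name == "Crew-built backup"
```

With this shape, a second (crew-built) AI is just another entry in the registry, and the "fight over resources" becomes re-binding devices from one core ID to another.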
**Bandwidth** is the momentary resource. It’s used for making something happen right now, near-instantly, for effects that do not last long. It’s a pool with a maximum size extensible by controlled machines, and every momentary ability consumes some of the resource from it. It regenerates at a base rate, further enhanced by the amount of unallocated **Compute**.
*Alternative*: Shared pool that gets extended/shrunk based on controlled devices; chunks of it are allocated persistently, while the unallocated part can be used for temporary abilities. Something to playtest later!
Suggestion: Leave Compute how you proposed, rename Bandwidth to something more intuitive like Bits.
You can have a bigger Bits buffer if you have more controlled devices, which tie into your Compute.
More compute = faster bit regeneration as well as greater storage of them.
Lock certain actions behind a large enough bit count, so that AIs with low compute don't have enough bits to perform the action. They would have to request that compute be built to raise the amount of bits they can store, so they have enough to perform that action.
This would realistically be seen in higher-level benefits (you'd have to think of some that would super benefit the crew) and at the tail end of a dying AI/Malf AI. It would be grasping at straws trying to perform actions, but it can't perform general actions (bolting doors) fast enough because it can't regenerate Bits fast enough. It also couldn't shock doors (something that could require bits/s) because it doesn't have enough compute to sustain the bits/s that action would require.
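The suggested compute/bits economy can be sketched like this. All constants here are made up for illustration and would need actual playtesting and tuning.

```python
class BitsPool:
    """Sketch of the suggested Bits economy: compute scales both the buffer
    size and the regeneration rate, and actions are gated on the balance.
    Constants are illustrative placeholders, not tuned values."""
    BASE_REGEN = 1.0          # bits/s with zero compute
    REGEN_PER_COMPUTE = 0.5   # extra bits/s per unit of compute
    STORAGE_PER_COMPUTE = 20.0

    def __init__(self, compute):
        self.compute = compute
        self.bits = self.max_bits  # start with a full buffer

    @property
    def max_bits(self):
        return self.compute * self.STORAGE_PER_COMPUTE

    def regen(self, dt):
        rate = self.BASE_REGEN + self.compute * self.REGEN_PER_COMPUTE
        self.bits = min(self.max_bits, self.bits + rate * dt)

    def try_spend(self, cost):
        """Gate an action: it fails if the AI can't even store that many
        bits (needs more compute built) or simply hasn't regenerated enough."""
        if cost > self.max_bits or cost > self.bits:
            return False
        self.bits -= cost
        return True

pool = BitsPool(compute=2)       # max 40 bits
assert pool.try_spend(10)        # bolting a door: affordable
assert not pool.try_spend(100)   # big ability: locked until more compute exists
```

The `cost > self.max_bits` branch is the "locked behind a large enough bit count" behavior: no amount of waiting helps until more compute is built.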
And a “secret”, third resource: **Player’s Attention**. As an example: door bolting as implemented in SS14 initially uses a radial menu popping up on alt-click, rather than instantly letting the AI bolt doors, thus requiring the AI player to pay by being precise and quick with controls as the person behind the screen. This applies to any kind of “minigames” or other interaction-heavy sequences. Think of science's APE.
This is just a player's robustness when it comes to micromanaging and quickly moving through the UI it uses to control machines.
Note: there needs to be a UI that brings up multiple controllable devices on one tile (SEE FIRELOCKS ON TOP OF DOORS) in a radial menu to choose which device to control.
Keep in mind a robust player is limited not by their own skill, but by the bits/s measure. If they perform actions fast, that's great, but they might eat up their stockpile of bits and have to wait until more regenerate. Adding costs to certain actions is a good idea if done right.
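A sustained bits/s cost (like a shocked door) could work as a channeled drain that auto-cancels when the pool runs dry. A minimal sketch, with all numbers and names invented for illustration:

```python
class SimplePool:
    """Minimal bits pool: a balance that regenerates up to a cap."""
    def __init__(self, bits, regen_per_s, max_bits):
        self.bits, self.regen, self.max_bits = bits, regen_per_s, max_bits

    def tick(self, dt):
        self.bits = min(self.max_bits, self.bits + self.regen * dt)

class SustainedAbility:
    """Channeled bits/s effect (e.g. a shocked door): drains upkeep every
    tick and drops the moment the pool can no longer pay for it."""
    def __init__(self, upkeep_per_s):
        self.upkeep = upkeep_per_s
        self.active = False

    def try_activate(self, pool):
        # Require at least one second's worth of upkeep to switch on.
        self.active = pool.bits >= self.upkeep
        return self.active

    def tick(self, pool, dt):
        if not self.active:
            return
        cost = self.upkeep * dt
        if pool.bits >= cost:
            pool.bits -= cost
        else:
            self.active = False  # can't sustain the drain: effect ends

# Regen 1 bit/s vs. 3 bits/s upkeep: a net drain, so the shock
# eventually collapses no matter how robust the player is.
pool = SimplePool(bits=10.0, regen_per_s=1.0, max_bits=40.0)
shock = SustainedAbility(upkeep_per_s=3.0)
assert shock.try_activate(pool)
seconds = 0
while shock.active:
    pool.tick(1.0)
    shock.tick(pool, 1.0)
    seconds += 1
```

This is the "dying AI grasping at straws" failure mode made concrete: the player can toggle the ability, but compute (regen) caps how long it holds.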
The “map” view replaces the entirety of the AI player's vision if the camera ability is shut down, or if there are no cameras *and* they activate a composite vision ability. It should provide information about the station’s hull: walls, floors (or a glaring lack of one, or being stripped down to a lattice), windows. It should also display machines that have an AI control wire, de-emphasizing them if the wire’s been snipped, and potentially not showing them at all if there’s no camera that can see the machine (to make camera snipping actually work around APCs and air alarms). Fire alarm and atmosphere alarm status should potentially be shown over the area too, perhaps as an ability.
While using the AI map vision, the crew can be seen via suit sensors, but only if they have tracking on. Otherwise the AI is unable to tell if someone is in the area without using cameras. A potential ability idea is to give cameras a motion detection cone that gets rendered for the AI, highlighting entities within it differently from tracked crew that enter it.
Would be good and tie into motion alarm alerts. For stuff like feral slimes, spiders, crew with a cryptographic sequencer in Captain's Quarters, and crew in blood-red hardsuits holding C-20Rs in the Armory.
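The motion-cone check itself is simple geometry. A sketch (Python for illustration; in-game this would additionally raycast against walls, which is omitted here):

```python
import math

def in_motion_cone(cam, facing_deg, half_angle_deg, rng, point):
    """Is `point` inside the camera's motion detection cone?
    cam/point are (x, y); facing_deg is the camera's look direction.
    Pure geometry only: vision blockers would be a separate check."""
    dx, dy = point[0] - cam[0], point[1] - cam[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return True
    if dist > rng:
        return False
    angle = math.degrees(math.atan2(dy, dx))
    # Smallest signed difference between the two angles, in [-180, 180).
    diff = (angle - facing_deg + 180) % 360 - 180
    return abs(diff) <= half_angle_deg

# Camera at origin facing +x with a 90° total cone and range 10:
assert in_motion_cone((0, 0), 0, 45, 10, (5, 1))       # inside
assert not in_motion_cone((0, 0), 0, 45, 10, (-5, 0))  # behind the camera
assert not in_motion_cone((0, 0), 0, 45, 10, (20, 0))  # out of range
```

Entities passing this check would get highlighted on the AI's map view, with tracked crew rendered differently from unknown contacts.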
Cameras cover almost all of the crew areas, the places where most people work and exist, to monitor their wellbeing and the state of the station. They should not be nearly free to build, and should require an autolathe-producible board, some steel, and LV wire.
They should not be more durable than a person: a hard enough knock should deactivate one for a small period of time, while sustaining enough damage will keep it offline until repaired. A non-violent option for keeping cameras offline should also be available: covering a camera with cloth and/or gauze makes it impossible to see through, taking a relatively long time to do completely (a 3 or 5 second doafter, balanced by it being a “stealth” option). A camera disabled in such a way should still be shown as active to the AI and warden, though it should have no motion detection cone and convey that explicitly.
Should there be reinforced cameras or emplaced wall cameras for things like the Armory/Perma/AI Core? They would have a reduced vision cone (as they wouldn't be mounted one tile out from a wall, and so wouldn't be able to see behind them or in as wide a cone). These would be much harder to break, and might be more difficult to cover up.
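The camera state gradient described above (knocked offline briefly, broken until repaired, covered but still reporting as active) amounts to a small state machine. A sketch with placeholder thresholds and timers:

```python
import enum

class CamState(enum.Enum):
    ACTIVE = "active"
    STUNNED = "stunned"    # hard knock: offline for a short timer
    BROKEN = "broken"      # enough damage: offline until repaired
    COVERED = "covered"    # cloth/gauze: blind, but still reports as active

class Camera:
    """Sketch of the disabled-state gradient. Numbers are placeholders."""
    STUN_SECONDS = 30.0
    BREAK_DAMAGE = 50.0

    def __init__(self):
        self.state = CamState.ACTIVE
        self.damage = 0.0
        self.stun_left = 0.0

    def hit(self, damage):
        self.damage += damage
        if self.damage >= self.BREAK_DAMAGE:
            self.state = CamState.BROKEN
        elif self.state == CamState.ACTIVE:
            self.state = CamState.STUNNED
            self.stun_left = self.STUN_SECONDS

    def cover(self):
        # In-game this would be the 3-5 second doafter.
        if self.state == CamState.ACTIVE:
            self.state = CamState.COVERED

    def repair(self):
        self.damage, self.stun_left = 0.0, 0.0
        self.state = CamState.ACTIVE

    def tick(self, dt):
        if self.state == CamState.STUNNED:
            self.stun_left -= dt
            if self.stun_left <= 0:
                self.state = CamState.ACTIVE

    @property
    def reported_active(self):
        """What the AI/warden UI shows: a covered camera still looks fine."""
        return self.state in (CamState.ACTIVE, CamState.COVERED)

    @property
    def can_see(self):
        return self.state == CamState.ACTIVE

cam = Camera()
cam.cover()
assert cam.reported_active and not cam.can_see  # the "stealth" option
```

The key design point is the split between `can_see` and `reported_active`: covering a camera is only stealthy because the status UI lies about it.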
The AI has no way to affect anything physical (at least, a shackled one - not malf). Its primary ways to do anything in that regard are borgs, and directing the crew. While the crew has its own concerns, like air, health and such, borgs are hardier and are almost always available for the AI to use.
Borgs can opt into synchronizing their laws with AI, gaining map or camera view from AI (todo: detail the incentives). This does not make them less susceptible to ion storm law corruption, but the AI can fix the laws by rewriting them to its own at some cost. |
Agree. Borgs should also get something similar to remote AI control (having the ability to remotely interact with things in their viewport, as well as having camera view from the AI). This means that borgs could see into other rooms, remotely open/close doors, and view APCs remotely. Of course, this relies on them being synced and the AI wire being uncut.
Emagging should forcefully but silently unlink a borg from the AI if it’s synced. If it’s not synced, the AI is not relevant.
This needs to be delved into a lot, because there are edge cases here. It also somewhat conflicts with how we want emagged borgs to be found out: through roleplay.
If a borg is emagged, it should break the sync but still fake the sync for the AI and crew, basically continuing a one-way sync. The borg keeps the benefits it would normally get from being synced (remote control and camera vision). But do we want synced borgs to also have a camera the AI can view? If we want that camera vision to cut out upon being emagged, it would be a bit obvious when a borg is emagged.
Alternatively, we could keep borg vision for emagged borgs and have them attempt to fool the AI as much as possible while still following their directives.
For a person who attempts to emag a borg that is synced/hacked by a Malf AI, the emag should just fake working; otherwise any poor thief/traitor could just "malfcheck" the AI. It would also be interesting to not fake the interaction and instead have the emag fry, so that roleplay could occur: the thief/syndie would have to barter with the AI to achieve their goals, otherwise it would rat them out. But that's in a perfect world of HRP.
Also PS: make camera vision through borgs take bits/s. Have research for "runtime optimizations" which reduces the bits/s cost of actions. You could also make alternative research for Malf AI with further improvements, so Malf AI gets more effective power from the same amount of servers. But you could have Malf AI do something "risky" as a way to earn it.
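The "runtime optimizations" idea could be sketched as stacking cost multipliers on per-second upkeep. All action names, tech names, and numbers below are invented placeholders:

```python
# Sketch of stacking cost-reduction research for per-second ability upkeep.
# A Malf-only tier simply adds another (stronger) multiplier to the stack.

BASE_COSTS = {"borg_camera_feed": 2.0, "door_shock": 3.0}  # bits/s, illustrative

RESEARCH_MULTIPLIERS = {
    "runtime_optimizations_1": 0.85,
    "runtime_optimizations_2": 0.85,
    "malf_overclock": 0.70,  # Malf-only, earned through a risky action
}

def upkeep_cost(action, unlocked):
    """Effective bits/s for an action given the set of unlocked techs."""
    cost = BASE_COSTS[action]
    for tech in unlocked:
        cost *= RESEARCH_MULTIPLIERS[tech]
    return cost

assert upkeep_cost("door_shock", []) == 3.0
assert upkeep_cost("door_shock", ["runtime_optimizations_1"]) == 3.0 * 0.85
```

Because Malf research only adjusts multipliers, the same servers yield more effective power without any visible change in hardware, which keeps Malf hard to distinguish.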
- Paperwork management UI that the AI gets *instead* of being able to interact with fax machines. It would list all papers received and let the player read them. It would also allow editing (if not stamped). Allows sending the same way a fax can.
- AI stamp that it can put on any received paper through the paperwork management UI. If it fits the art design, make it a QR or DataMatrix code that just says “AI”. Otherwise just have it be text.
- Holopads are a way to communicate with the crew for free, with full vision and such. The downside is that a personal call completely takes over the AI’s viewpoint for its duration. Just formalizing the existing implementation by Chromiumboy here.
All points here are cool and I agree.
There we go, Yet Another AI Proposal. This one is based on one of the existing ones, at least in spirit.
The goal is to give the AI... a bit more to chew on and care about. It shouldn't be utterly detached from the station; it really should care about the state of it: that it's powered, not on fire, and there are people on it. The proposal should encourage interactions with other departments, and with people in general. It should also discourage AI players from being just an extra security officer with ghost flying.
Not sure if I've accomplished all of that here, but here's hoping for at least a step in a good direction.
TODO: