Re: [TML] An adventure 'nugget'? Alex Goodwin 22 Jan 2023 14:09 UTC

Collinson,

I'll do what I can to help you create more realistic computer problems
for players - subject to correction by Greg and those who know more than
I do.

I agree with you that we're mapping actions in a fictional then (e.g. 2129
AD) to our now, without getting too bogged down in the minutiae.
"Detailed, but not technical" may be something to aim for.

On an off-topic note, that sort of (Nokesian?) architecture outlined
would also have forced the Cylons to work much harder to pull off the
Great Day Of Universal Colonial Bereavement in neoBSG.

System security is in fundamental tension with system _usability_.  A
security measure (or set of) that users perceive as too inconvenient
will almost certainly be subverted or bypassed.

An "air gap" is shorthand for "these systems ain't connected, no way, no
how".  This may not have stopped the more exotic machine-psi versions of
Virus, but just about everything else _needs_ some existing medium to
travel along.  And the system designer(s) need to worry about side channels.

Frinstance, one of Derryn Dodgie's boys wants to leak data from an
allegedly air-gapped system A to another air-gapped system B using,
say, the sound of A operating.  Or the _timing_ between some
subtle indications.
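
A toy sketch of that sort of timing side channel - purely illustrative,
with the durations simulated as plain numbers rather than real delays,
and all names made up:

```python
# The "sender" on system A encodes secret bits as long or short
# operation durations; the eavesdropping "receiver" near system B only
# ever observes how long each operation took, yet recovers the data.

SHORT, LONG = 0.01, 0.10   # seconds: hypothetical operation timings
THRESHOLD = 0.05           # decision boundary between the two

def encode(bits):
    """Map each secret bit to an observable operation duration."""
    return [LONG if b else SHORT for b in bits]

def decode(durations):
    """Recover the bits purely from the observed timings."""
    return [1 if d > THRESHOLD else 0 for d in durations]

secret = [1, 0, 1, 1, 0]
observed = encode(secret)       # all the eavesdropper ever "sees"
assert decode(observed) == secret
```

No wire, no network - the "medium" is just how long A takes to do
things, which is why air-gap designers have to sweat side channels.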

"One way data gates" , aka "data diodes" (I think - blame me, not Greg,
if I've got this one wrong) are just that - they allow data to flow in
the "forward" direction, but not in the "reverse" direction - for
example, from left to right, but not from right to left.  I guess
'physical control" is a Big Purple Button to physically
connect/disconnect the gate/diode (introducing an air gap, as above).
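
As a toy model (invented names, nothing more than a sketch): the trick
with a diode is that the reverse path doesn't exist in the interface at
all, rather than merely being forbidden.

```python
# Toy data diode: data flows forward only.  There is no reverse path
# to misuse - any attempt to push data backward raises immediately.

class DataDiode:
    def __init__(self):
        self._queue = []

    def send_forward(self, message):
        """Left (dirty) side pushes data toward the right side."""
        self._queue.append(message)

    def receive(self):
        """Right (clean) side drains whatever has arrived so far."""
        drained, self._queue = self._queue, []
        return drained

    def send_reverse(self, message):
        """There is no reverse path, by construction."""
        raise PermissionError("data diode: reverse flow not possible")

diode = DataDiode()
diode.send_forward("sensor reading: contact bearing 042")
assert diode.receive() == ["sensor reading: contact bearing 042"]
try:
    diode.send_reverse("exfiltrated secrets")
except PermissionError:
    pass  # as intended - nothing goes back the other way
```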

"Only allowing sanitized messages" is (again, I think - blame me if I've
got it wrong) implementing the principle of "default deny" - aka using
whitelists.

For an analogy: the default answer to a request to be read into the
codeword compartment PEACEFUL HANDS (the Terran Confederation's
bioweapon retaliation for setting up Terra the bomb) is "sod off" (same
with any other restricted/classified program) - only those with a
substantiated need-to-know get read in.

Following that analogy, only _well-formed_ messages are accepted.  All
the i's must be dotted, all the t's must be crossed, etc - whatever
"well-formed" means in the particular case you're looking at.

Further paranoia might demand the as-received message's contents get
copied across to a new, known-clean, message first.  The received
message (tainted by contact with the outside) then gets discarded, and
the new one gets sent on.
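
Those two paragraphs together sketch out roughly like this - field
names and limits are invented for illustration:

```python
# "Default deny" message handling plus the paranoid copy-to-clean step:
# only whitelisted, well-formed fields get copied into a brand-new
# message; everything else in the received (tainted) message is simply
# never looked at again.

ALLOWED_FIELDS = {              # the whitelist: field -> validator
    "msg_type": lambda v: v in {"status", "telemetry"},
    "payload":  lambda v: isinstance(v, str) and len(v) <= 256,
}

def sanitize(received):
    """Return a known-clean copy, or raise on anything malformed."""
    clean = {}
    for field, validator in ALLOWED_FIELDS.items():
        if field not in received:
            raise ValueError(f"missing required field: {field}")
        value = received[field]
        if not validator(value):
            raise ValueError(f"malformed field: {field}")
        clean[field] = value
    # Extra fields in `received` are dropped, not copied: default deny.
    return clean

tainted = {"msg_type": "status", "payload": "all ok", "evil": "X" * 9999}
clean = sanitize(tainted)
assert clean == {"msg_type": "status", "payload": "all ok"}
```

Note the "evil" field never makes it across - it isn't rejected so much
as never copied in the first place.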

In OTL, the OpenBSD operating system has implemented this to a massive
extent - one thing in particular is their pledge mechanism.  Long story
short, a program makes a promise - "I'm only going to talk to (say)
standard input/output, network sockets, and domain-name lookup", as the
ping command does - and, later on, can only give up its existing
pledge or parts of it, never broaden it.  If that promise is later
violated (the program tries to colour outside the whitelist set up by
pledge - say, by trying to access a file on disk), it gets summarily
flattened by the operating system, and thus can't be used by an
external miscreant.

Systems built around default-deny tend to be both more secure, and more
reliable, than those built around default-accept - lots more junk input
is rejected up front, before it can cause problems.

"Hardware systems with read-only underlying OS's" - the actual
operating-system smarts are built in _hardware_ (which runs much quicker
than in software) and, as a result, can't be changed while they're
running.  As a benefit, any wedgitude can (at least in principle, modulo
psionic chips or other hardware silly buggers) be cleared by a reset.

"Triplicate, consensus systems" harks back to the Byzantine Generals
Problem (I think - Greg, please correct me), aka Byzantine failure.

Not only may some components have failed, but you also have to deal with
imperfect information about said failures - which may be malicious.

From the system design end, a failed/compromised component can do _any
damned thing it wants_ - such as lie to other components about its status.

Given all that, how do you design and build a
_safety-and-security-critical_ distributed system to not only _keep
functioning_ in the presence of Byzantine faults, but _keep its realtime
guarantees_ as well?

It's hardly useful to be ostensibly resistant to Byzantine faults at the
cost of flight control inputs being 1500 milliseconds late.  The late
Nikki Thornton, et al, might have something to say here.

This _has_ to be a solved problem for starships to operate in a TU with
anything like their implied reliability.

SOTA here, as far as I know, with all the strong cryptomagic that Greg
mentioned, is being able to tolerate T failed instances out of 2T + 1
overall instances - 3 instances being needed to tolerate 1 failure, 5
instances to tolerate 2, etc.  (It's the signed messages that buy you
this; without them, the classical result is that you need 3T + 1.)
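
The read side of such a triplicated system boils down to a majority
vote - a toy sketch, assuming replies are already authenticated so a
liar can't forge anyone else's:

```python
# With 2T + 1 replicas, up to T Byzantine replicas can return arbitrary
# answers, yet the majority formed by the honest replicas still wins.

from collections import Counter

def vote(replies):
    """Return the strict-majority answer among replica replies."""
    answer, count = Counter(replies).most_common(1)[0]
    if count < len(replies) // 2 + 1:
        raise RuntimeError("no majority: too many faulty replicas")
    return answer

# T = 1, so 3 replicas: one lies outrageously, two honest ones agree.
assert vote(["throttle 70%", "throttle 70%", "SELF-DESTRUCT"]) == "throttle 70%"

# T = 2, so 5 replicas: two liars (even colluding), three honest.
assert vote(["A", "A", "A", "B", "B"]) == "A"
```

The realtime-guarantee part is the genuinely hard bit this sketch
ignores - the vote must also complete inside the control loop's deadline.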

This couples nicely with the hardware systems mentioned above.  If you
need to reset a replicated system, you can do it one element at a time
(subject to some preparation) - a "rolling reboot".  After all, "down due
to reboot" is simply another failure condition to deal with.

"Neural network based firewalls" - I'm not sure what Greg was meaning
here.  At a guess, some sort of pseudo-AI automagically-adaptive set of
defences that can react and adapt to electronic attacks at electronic speed.

Given the Nokesian architecture in place, subverting a system via the
electron two-step is thus orders of magnitude more difficult than
conning some twit - so twits will continue to be conned.

It's the same way an attacker bypasses an alarm system - convince the
sophont(s) monitoring it that it's "another bloody false alarm".

Hope that helps.

Alex

On 22/1/23 23:00, Timothy Collinson - timothy.collinson at port.ac.uk
(via tml list) wrote:
> Just a note...
>
> On Wed, 18 Jan 2023 at 03:25, Greg Nokes - greg at nokes.name
> <http://nokes.name> (via tml list) <xxxxxx@simplelists.com> wrote:
>
>     <snip>
>     I would expect systems to operate in far different modes than we
>     are accustomed to. Given that the smartest folks in the Terran, 2I
>     and 3I have had a long time to think about this, I cannot really
>     imagine what those safeguards would look like. Given what we know
>     now, I would expect things like
>
>     * Air gaps between sensor, commo, and control systems
>     * Physically controlled one way data gates
>     * triplicate unconnected consensus  based systems ( because
>     everything on a ship is triplicate, right?)
>     * Hardware systems with read only underlying OS’s
>     * Some pretty intense crypto (and the real crypto, not NFT’s 🤣)
>     * Using that intense crypto to sign everything to insure that
>     there is no tampering
>     * Only allowing sanitized messages between systems.
>     * Neural Network based firewalls.
>
>
>
> ... to say that I appreciated this list and if I understood a quarter
> of it would pinch it for more realistic computer problems for players!