Mr. Roboto

I’m a skeptic and an intellectual, so I don’t put too much weight on coincidence. But then again, I’m a storyteller, so I love chalking up coincidences as elements of some unseen plot.

Yesterday, my YouTube music playlist brought me to Halsey’s “Gasoline.” Thinking it over, I probably heard this song in passing some time ago, but if I did, I didn’t commit it to memory, because hearing it was like listening to it for the first time. And what a day to stumble across it. The lyrics, if you’ve never heard them, go thusly:

And all the people say
You can’t wake up, this is not a dream
You’re part of a machine, you are not a human being
With your face all made up, living on a screen
Low on self esteem, so you run on gasoline

I think there’s a flaw in my code
These voices won’t leave me alone
Well my heart is gold and my hands are cold

Why did this resonate with me so much today of all days? Because I had just completed an upgrade of my life support systems to new software, which for the first time includes algorithms that allow the cyborg parts of me to act semi-autonomously instead of relying solely on human input.

It’s a small step, from both a technical and a medical perspective. The algorithm is a simple linear regression model, rather than the kind of proper machine learning program people expect will be necessary for fully autonomous artificial organs. Its only function at the moment is to track biometrics and shut off the delivery of new medication to prevent an overdose, rather than keeping those biometrics in range in general. And it only does this within very narrow limits; it isn’t really a fail-safe against overdoses, because the preventative mechanism is narrowly applied, and very fallible.

But the word prevention is important here, because this isn’t a simple dead man’s switch. The new upgrade is predictive, making decisions based on what it thinks is going to happen, often before the humans clue in (in the first twelve hours, this has already happened to me). In a sense, it is already offloading human cognitive burden and improving on the human ability to mimic natural body function. As of yesterday, we are now on the slippery slope that leads to cyborgs having superhuman powers.
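For the curious, here is roughly what that kind of logic might look like in miniature. To be clear, this is a hypothetical sketch, not the device’s actual software (which isn’t public); the names, thresholds, and numbers below are all invented for illustration. The idea is the one described above: fit a simple linear regression to recent biometric readings, extrapolate a short distance into the future, and suspend medication delivery if the predicted value is headed past a safety limit.

```python
# Hypothetical sketch only: fit a line to recent biometric readings,
# extrapolate forward, and suspend delivery if the prediction crosses
# a safety floor. All names and numbers here are invented.

from dataclasses import dataclass


@dataclass
class Reading:
    t_minutes: float  # time of the sample, in minutes
    value: float      # biometric value (units depend on the sensor)


def fit_line(readings: list[Reading]) -> tuple[float, float]:
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(readings)
    mean_t = sum(r.t_minutes for r in readings) / n
    mean_v = sum(r.value for r in readings) / n
    cov = sum((r.t_minutes - mean_t) * (r.value - mean_v) for r in readings)
    var = sum((r.t_minutes - mean_t) ** 2 for r in readings)
    slope = cov / var if var else 0.0
    return slope, mean_v - slope * mean_t


def should_suspend(readings: list[Reading],
                   horizon_minutes: float,
                   safety_floor: float) -> bool:
    """True if the fitted trend is predicted to fall below the safety
    floor within the horizon. With too little data, defer to the humans."""
    if len(readings) < 2:
        return False
    slope, intercept = fit_line(readings)
    t_future = readings[-1].t_minutes + horizon_minutes
    return slope * t_future + intercept < safety_floor


# Example: readings trending downward. Extrapolated 30 minutes out,
# the trend line sits near 58, below the floor of 70, so the sketch
# would suspend delivery before the actual reading ever got there.
history = [Reading(0, 110.0), Reading(5, 104.0), Reading(10, 97.0)]
print(should_suspend(history, horizon_minutes=30, safety_floor=70.0))  # True
```

The thing to notice is that the decision hinges on the predicted value rather than the current one. That is the difference between this and a dead man’s switch, and it is why the machine can occasionally beat the humans to the punch.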

We’re getting well into sci-fi and cyberpunk territory here, with the door open to all sorts of futurist speculation, but there are nearer-term questions that need to be answered sooner rather than later. Take, for instance, the EU General Data Protection Regulation, which (as near as I, an American non-lawyer, can make heads or tails of it) requires companies and people to disclose when they use AI or algorithms to make decisions regarding EU citizens or their data, and mandates recourse for those who want such decisions reviewed by a human: a nifty idea for ensuring the era of big data remains rooted in human ethics.

But how does it fit in if, instead of humans behind algorithms, it’s algorithms behind humans? In a way, all of my decisions are now at least partially based on algorithms, given that the algorithms keep me alive to make decisions at all, and have taken over other cognitive functions that would otherwise occupy my time and focus. And I do interact with EU citizens. A very strict reading of the regulation suggests this might be enough for me to fall under its aegis.

And sure, the answer is relatively clear-cut today; an EU court isn’t going to rule that all of my actions need to be regulated like AI because I’m wearing a medical device. But as the technology becomes more robust, the line is going to get blurrier, and we’re going to need to start treating some hard ethical questions not as science fiction, but as law. What happens when algorithms start taking over more medical functions? What happens when we start using machines for neurological problems, and there really isn’t a clear line between human and machine in the decision-making process?

I have no doubt that when we get to that point, there will be people who oppose the technology, and want it to be regulated like AI. Some of them will be Westboro Baptist types, but many will be ordinary citizens legitimately concerned about privacy and ethics. How do we build a society so that people who take advantage of these medical breakthroughs aren’t, as in Halsey’s song, derided and ostracized in public? How do we avoid creating another artificial divide and sparking fear between groups?

As usual, I don’t know the answer. Fortunately for us, we don’t need an answer today. But we will soon. The next software update for my medical device, which will have the new algorithms assuming greater functions and finer granularity, is already in clinical trials and is expected to launch this time next year. For comparison, the EU GDPR was first proposed in 2012 and only rolled out this year. The best way to avoid a sci-fi dystopian future is conscious and concerted thought and discussion today.

Automatism

I’m not sure exactly what the nightmare was that stuck with me for a solid twenty minutes after I got out of bed and before I woke up. Whatever it was, it had me utterly convinced that I was in mortal peril from my bed linens. And so I spent those twenty minutes trying desperately to remove them from me, before the cold woke me up enough that I realized what I was doing and had the presence of mind to stop.

This isn’t the first time I’ve woken up in the middle of doing something in an absentminded panic. Most of those times, however, I was either in a hospital, or would be soon. There have been a handful of isolated incidents in which I have woken up, so to speak, at the tail end of a random black-out. That is, I will suddenly realize that I’m most of the way through the day, without any memory of events for some indeterminate time prior. But this isn’t waking up per se; more like my memory is suddenly snapping back into function, like a recording skipping and resuming at a random point later on.

I suppose it is strictly preferable to learn that my brain has evidently delegated its powers to operate my body, such that I need not be conscious to perform tasks, than to be caught unawares by whatever danger my linens posed that required me to get up and dismantle them from my bed with such urgency that I could not wake up first. Nevertheless, I am forced to question the judgement of whatever fragment of my unconscious mind took it upon its own initiative to operate my body without going through the usual channels and getting my conscious consent.

The terminology, I recognize, is somewhat vague and confusing, as I have difficulty summoning words to express what has happened and the state it has left me in.

These episodes, both these more recent ones and my longer history of breaks in consciousness, are a reminder of a fact that I try to put out of mind on a day-to-day basis, yet forget at my own peril: namely, the acuteness of my own mortality and the fragility of my self.

After all, who, or perhaps what, am I outside of the mind which commands me? What, or who, gives orders in my absence? Are they still orders if given by a what rather than a who, or am I projecting personhood onto a collection of patterns executed by the simple physics of my anatomy? Whatever my (his? its?) goal was in disassembling my bed, I did a thorough job of it, stripping the bed far more efficiently and thoroughly than I could have by accident.

I might never find serious reason to ask these questions, except that every time so far, it is I who have succeeded it. That is, whatever it does, it is I who has to contend with the results when I come back to full consciousness. I have to re-make the bed so that both of us can sleep. I have to explain why one of us saw fit to make a huge scuffle in the middle of the night, waking others up.

I am lucky that I live with my family, who are willing to tolerate the answer of “actually, I have no idea why I did that, because, in point of fact, it wasn’t me who did that, but rather some other being by whom I was possessed, or possibly I have started to exhibit symptoms of sleepwalking.” Or at least, they accept this answer now, for this situation, and dismiss it as harmless, because it is, at least so far. Yet I am moved to wonder where the line is.

After all, actions will have consequences, even if those actions aren’t mine. Let’s suppose for the sake of simplicity that these latest episodes are sleepwalking. If I sleepwalk, and knock over a lamp, that lamp is no more or less broken than if I’d thrown it to the ground in a rage. Moreover, the lamp has to be dealt with; its shards have to be cleaned up and disposed of, and the lamp will have to be replaced, so someone will have to pay for it. I might get away with saying I was sleepwalking, but more likely I would be compelled to help in the cleanup and replacement of the lamp.

But what if there had been a witness who saw me at the time, and said that my eyes were open? It is certainly possible for a sleepwalker to have their eyes open, even to speak. And what if this witness believes that I was in fact awake, and fully conscious, when I tipped over the lamp?

There is a relevant legal term and concept here: Automatism. It pertains to a debate surrounding medical conditions and culpability that is still ongoing and is unlikely to end any time soon. Courts and juries go back and forth on what precisely constitutes automatism, and to what degree it constitutes a legal defence, an excuse, or merely an avenue to plead down charges (e.g. manslaughter instead of murder). As near as I can tell, and without delving too deeply into the tangled web of case law, automatism is when a person is not acting as their own person, but rather like an automaton. Or, to quote Arlo Guthrie: “I’m not even here right now, man.”

This is different from insanity, even temporary insanity, or unconsciousness, for reasons that are complex and contested, and have more to do with the minutiae of law than I care to get into. But to summarize: unconsciousness and insanity have to do with denying criminal intent, which is required in most, though not all, crimes. Automatism, by subtle contrast, denies the criminal act itself, by arguing that there is not an actor by whom an act can be committed.

As an illustration, suppose an anvil falls out of the sky, cartoon style, and clobbers innocent bystander George Madison as he is standing on the corner, minding his own business, killing him instantly. Even though something pretty much objectively bad has happened, something the law would generally seek to prevent, no criminal act per se has occurred. After all, who would be charged? The anvil? Gravity? God?

Now, if there is a human actor somewhere down the chain of causality, then maybe there is a crime: say the anvil was airborne because village idiot Jay Quincy had pushed a big red button, which happened to be connected to an anvil-railgun being prepared by a group of local high schoolers for the Google Science Fair. Whether Jay Quincy acted with malice aforethought, or was merely negligent, or reckless, or really couldn’t have been expected to know better, would be a matter for a court to decide. But there is an actor, so there is an act, so there might be a crime.

There are many caveats to this defence. The most obvious is that automatism, like (most) insanity, is something that has to be proven by the defence, rather than by the prosecution. So, to go back to our earlier example of the lamp, I would have to prove that, during the episode, I was sleepwalking. Merely saying that I don’t recall being myself at the time is not enough. For automatism to stick, it has to be proven with hard evidence. Having a medical diagnosis of somnambulism and a history of sleepwalking episodes might be useful here, although it could also be used as evidence that I should have known to prevent this in the first place (I’ll get to this point in a minute).

Whether or not this setup is fair, forcing the defence to prove that they weren’t responsible and presuming responsibility otherwise, it is the only way the system can work. The human sense of justice demands that crimes be committed, to some degree or another, voluntarily and of free will. Either there must be an act committed that oughtn’t to have been, or something that ought to have been prevented that wasn’t. Both of these, however, imply choices, and some degree of conscious free will.

Humans might have a special kind of free will, at least on our good days, that confers these rights and responsibilities on us, but science has yet to show how this mechanism operates separately from the physical (automatic) processes that make up our bodies. Without assuming free will, prosecutors would have to prove, in each and every case, something that has never even been proven in the abstract. So the justice system makes the perhaps unreasonable assumption that people have free will unless there is something really obvious (and easily provable) impeding it, like a gun to one’s head, or a provable case of sleepwalking.

There is a second caveat here that has also been hinted at: while a person may not be responsible for their actions while in a state of automatism, they can still be responsible for putting themselves into such a state, either intentionally or negligently, which defeats the defence of automatism. So, while falling asleep behind the wheel might happen in an automatic state, the law takes the view that you should have known better than to be behind the wheel if you were at risk of falling asleep, and therefore you can still be charged. Sleepwalking likewise does not work as a defence if, say, there was an unsecured weapon that one should have stowed away while conscious. Intoxication, even involuntary intoxication, whether from alcohol or some other drug, is almost never accepted.

This makes a kind of sense. You don’t want to let people orchestrate crimes beforehand and then disclaim responsibility because they were asleep, or what have you, when the act occurred. On the other hand, this creates a strange kind of paradox for people with medical conditions that might result in a state of automatism at some point, and who are concerned about being liable for their actions and avoiding harm to others. After all, taking action beforehand shows that you knew something might happen and should have been prepared for it, and are therefore liable. And not taking action is obviously negligent, and makes it difficult to prove that you weren’t acting under your own volition in the first place.

Incidentally, this notion of being held responsible (in some sense, of being responsible) for actions taken by a force other than my own free will is one of my greatest fears. The idea that I might hurt someone, not even accidentally, but as an involuntary consequence of my medical situation, that is to say, of the condition of the same body that makes me myself, I find absolutely petrifying. This has already happened before: I have accidentally hurt people while flailing as I passed in and out of a coma, and there is no reason to believe that the same thing couldn’t happen again.

So, what to do? I was hoping that delving into the law might find me some solace from this fear; that I might encounter some landmark argument that would settle not just the legal liability, but which I could use as a means of self-assurance. Instead it has done the opposite, and I am less confident now than when I started.