This Was A Triumph

Today I am happy to announce a new milestone. As of today I have received from my manufacturer the authorization code to initiate semi-closed loop mode on my life support devices. This means that for the first time, my life support devices are capable of keeping me alive for short periods without immediate direct human intervention. For the first time in more than a decade, it is now safe for me to be distracted by such luxuries as homework, and sleep. At least, for short periods, assuming everything works within normal parameters. 

Okay, yes, this is a very qualified statement. Compared to the kind of developments which are daily promised by fundraising groups and starry-eyed researchers, this is severely underwhelming. Even compared solely to technologies which have already proven themselves in other fields and small-scale testing, the product which is now being rolled out is rather pathetic. There are many reasons for this, from the risk aversion of industry movers, to the glacial pace of regulatory shakers, to a general shortage of imagination among decision makers. It is easy to find reasons to be angry and feel betrayed that the US healthcare system has once again failed to live up to its promise of delivering breakneck innovation and improvement.

Even though this is disappointing compared to the technological relief we were marketed, I am still excited about this development. First, because it is a step in the right direction, even if a small one, and any improvement is worth celebrating. Second, and chiefly, because I believe that even if this particular new product is only an incremental improvement over the status quo, and pales in comparison to what had been promised for the past several decades, the particular changes represent the beginning of a larger shift. After all, this is the first iteration of this kind of life support device which uses machine learning not merely to enable a fail-safe against medication overdoses, but to actually make proactive treatment decisions without human oversight.

True, the parameters for this decision making are remarkably conservative, some argue to the point of uselessness. The software will not deploy under anything short of perfect circumstances; its treatment targets fall short of most clinical targets, let alone best practices; the modeling is not self-correcting; and the software cannot interpret human intervention, making it mutually exclusive with aggressive treatment by a human.

Crucially, however, it is making decisions instead of a human. We are over the hump on this development. Critiques of its decision-making skill can be addressed down the line, and I expect that once the data is in, the approval and rollout process will be far easier than it was for this initial version. But unless some new hurdle appears, as of now we are on the path towards full automation.

Close Paren

Classes continue apace. I have had some trouble parsing my classes and where they fall on the difficulty spectrum. On the one hand, the readings are, if not necessarily challenging themselves, then at least reflective of an intellectual stature that seems to foreshadow challenge in class. On the other hand, the classes themselves are unnervingly easy; or at least, the level of engagement by other students makes it distressingly easy to appear capable by comparison.

This unnerved feeling isn’t helped by my schedule. The downside of having a very light course load, which really requires only two afternoons a week from me, plus however long it takes to accomplish homework, is that my brain doesn’t seem to cycle between periods of productivity and downtime. I haven’t slipped into a weekly cadence that lets me know intuitively what day of the week it is, or keep an innate sense of the events of the next several days.

I say this is a downside; in truth I don’t know. It is not how I expected to handle things, but at least so far I have continued to handle them, which I suppose is sufficient for now. It may be that my old notions of the week were solely a product of my high school schedule at the time, and that in time I shall develop a new perspective tailored to the present situation. If so, I expect it will take some time.

One sign that this is happening is that I have begun to pick up old projects again. In particular, I have taken to toying around with the modding tools for Hearts of Iron IV, with the end goal of adding the factions from some of my writings. Although I have used some tutorials in this process, it has mostly been a matter of reverse engineering the work of others and experimenting through trial and error. Though I am totally out of my depth, in the sense that this is a matter of modifying computer code files more than writing alternate history, I consider myself talented at throwing myself into learning new things, and I have made great strides in my programming efforts despite setbacks.

I am still tickled by the image of staring at computer code in an editor, making tweaks and squashing bugs. It strikes me because I am not a very technically savvy person. I can follow instructions, and with a vague understanding of what I want to do and examples of how it can be done, I can usually cobble together something that works. That is, after all, how I built this site, and how I have managed to get alternate history countries onto the map of my game; though the cryptic error messages and apparent bugs tell me I’ve still got a way to go. But even so, I’ve never considered myself a computer person.

What’s funny is that I fit the stereotype. I am a pale, skinny young man; I wear glasses, t-shirts, and trousers with many pockets; and I have trouble with stereotypical jocks. When I volunteer for my favorite charity, which provides free open source software for medical data, people assume I am one of the authors of the code. I have had to go to great lengths to convince people that I don’t write the code, but merely benefit from it, and even greater lengths to convince the same people that when I say the process by which the code is made operational is easy, I am not presupposing any kind of technical knowledge.

In any case, the last week has been not uneventful exactly, but focused on small headlines. There are other projects in the pipeline, but nothing with a definitive timeframe. Actually, that’s an outright lie. There are several things with definitive timeframes. But those things are a secret, to be revealed in due course, at the appropriate juncture.

Mr. Roboto

I’m a skeptic and an intellectual, so I don’t put too much weight on coincidence. But then again, I’m a storyteller, so I love chalking coincidences up to some sort of unseen plot.

Yesterday, my YouTube music playlist brought me across Halsey’s Gasoline. Thinking it over, I probably heard this song in passing some time ago, but if I did, I didn’t commit it to memory, because hearing it was like listening to it for the first time. And what a day to stumble across it. The lyrics, if you’ve never heard them, go thusly:

And all the people say
You can’t wake up, this is not a dream
You’re part of a machine, you are not a human being
With your face all made up, living on a screen
Low on self esteem, so you run on gasoline

I think there’s a flaw in my code
These voices won’t leave me alone
Well my heart is gold and my hands are cold

Why did this resonate with me so much today of all days? Because I had just completed an upgrade of my life support systems to new software, which for the first time includes new computer algorithms that allow the cyborg parts of me to act in a semi-autonomous manner instead of relying solely on human input.

It’s a small step, both from a technical and a medical perspective. The algorithm it uses is a simple linear regression model rather than the proper machine learning program people expect will be necessary for fully autonomous artificial organs. The only function the algorithm has at the moment is to track biometrics and shut off the delivery of new medication to prevent an overdose, rather than keeping those biometrics in range in general. And it only does this within very narrow limits; it’s not really a fail-safe against overdoses, because the preventative mechanism is still very narrowly applied, and very fallible.

But the word prevention is important here. Because this isn’t a simple dead man’s switch. The new upgrade is predictive, making decisions based on what it thinks is going to happen, often before the humans clue in (in the first twelve hours, this has already happened to me). In a sense, it is already offloading human cognitive burden and upgrading the human ability to mimic body function. As of yesterday, we are on the slippery slope that leads to cyborgs having superhuman powers.
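For the technically curious, here is a minimal sketch, in Python, of what a predictive-suspend rule of this general sort might look like: fit a straight line to recent biometric readings, project it a short way forward, and withhold the next dose if the projection crosses a safety threshold. Every name, unit, and threshold below is my own invention for illustration; the actual device runs proprietary firmware that is certainly more involved.

```python
# Hypothetical sketch of a predictive suspend: fit a line to recent readings,
# extrapolate a short horizon ahead, and halt delivery if the forecast is low.
# Names, units, and thresholds are illustrative, not the device's real logic.

from dataclasses import dataclass

@dataclass
class Reading:
    minutes: float  # time of the sample, in minutes
    value: float    # biometric value at that time

def fit_line(readings):
    """Ordinary least-squares fit of value = slope * time + intercept."""
    n = len(readings)
    mean_t = sum(r.minutes for r in readings) / n
    mean_v = sum(r.value for r in readings) / n
    cov = sum((r.minutes - mean_t) * (r.value - mean_v) for r in readings)
    var = sum((r.minutes - mean_t) ** 2 for r in readings)
    slope = cov / var if var else 0.0
    return slope, mean_v - slope * mean_t

def should_suspend(readings, horizon_min=30.0, low_threshold=80.0):
    """Suspend delivery only if the trend line predicts a low value soon."""
    if len(readings) < 3:
        return False  # not enough data; defer to the human
    slope, intercept = fit_line(readings)
    t_future = readings[-1].minutes + horizon_min
    predicted = slope * t_future + intercept
    return predicted < low_threshold

# A steadily falling trend triggers a suspension before the measured value
# has actually crossed the threshold.
history = [Reading(0, 120), Reading(5, 112), Reading(10, 104), Reading(15, 96)]
print(should_suspend(history))  # True: the 30-minute forecast dips below 80
```

The point of the sketch is only the shape of the logic: the model is nothing more exotic than a trend line, and the only action available to it is to withhold treatment, never to administer it.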

We’re getting well into sci-fi and cyberpunk territory here, with the door open to all sorts of futurist speculation, but there are more questions that need to be answered sooner rather than later. For instance, take the EU General Data Protection Regulation, which (as near as I, an American non-lawyer, can make heads or tails of it) requires companies and people to disclose when they use AI or algorithms to make decisions regarding EU citizens or their data, and mandates recourse for those who want the decisions reviewed by a human; a nifty idea for ensuring the era of big data remains rooted in human ethics.

But how does it fit in if, instead of humans behind algorithms, it’s algorithms behind humans? In a way, all of my decisions are now at least partially based on algorithms, given that the algorithms keep me alive to make decisions in the first place, and have taken over other cognitive functions that would otherwise occupy my time and focus. And I do interact with EU citizens. A very strict reading of the regulation suggests this might be enough for me to fall under its aegis.

And sure, this is a relatively clear-cut answer today; an EU court isn’t going to rule that all of my actions need to be regulated like AI because I’m wearing a medical device. But as the technology becomes more robust, the line is going to get blurrier, and we’re going to need to start treating some hard ethical questions not as science fiction, but as law. What happens when algorithms start taking over more medical functions? What happens when we start using machines for neurological problems, and there really isn’t a clear line between human and machine in the decision-making process?

I have no doubt that when we get to that point, there will be people who oppose the technology, and want it to be regulated like AI. Some of them will be Westboro Baptist types, but many will be ordinary citizens legitimately concerned about privacy and ethics. How do we build a society so that people who take advantage of these medical breakthroughs aren’t, as in Halsey’s song, derided and ostracized in public? How do we avoid creating another artificial divide and sparking fear between groups?

As usual, I don’t know the answer. Fortunately for us, we don’t need an answer today. But we will soon. The next software update for my medical device, which will have the new algorithms assuming greater functions and finer granularity, is already in clinical trials, and is expected to launch this time next year. The EU GDPR was first proposed in 2012 and only rolled out this year. The best way to avoid a sci-fi dystopian future is conscious and concerted thought and discussion today.

Unreachable

I suspect that my friends think that I lie to them about being unreachable as an excuse to simply ignore them. In the modern world there are only a small handful of situations in which a person genuinely can’t be expected to be connected and accessible.

Hospitals, which used to be communications dead zones on account of no-cell-phone policies, have largely been assimilated into the civilized world with the introduction of guest WiFi networks. Airplanes are going the same way, although as of yet WiFi there is still a paid commodity, and is sufficiently expensive as to remain a reasonable excuse.

International travel used to be a good excuse, but nowadays even countries that don’t offer affordable and consistent cellular data have WiFi hotspots at cafes and hotels. The only travel destinations that are real getaways in this sense, the sense that they let you get away from modern life by disconnecting you from the outside world, are developing countries without infrastructure, and the high seas. This is the best and worst part of cruise ships, which charge truly extortionate rates for slow, limited internet access.

The best bet for those who truly don’t want to be reached is still probably the unspoilt wilderness. Any sufficiently rural area will have poor cell reception, but areas which are undeveloped now are still vulnerable to future development. After all, much of the rural farming areas of the Midwest are flat and open. It only takes one cell tower to get decent, if not necessarily fast, service over most of the area.

Contrast this to the geography of the Appalachian or Rocky Mountains, which block even nearby towers from reaching too far, and in many cases are protected by regulations. Better yet, the geography of Alaska combines several of these approaches, being sufficiently distant from the American heartland that many phone companies consider it foreign territory, as well as being physically huge, challenging to develop, and covered in mountains and fjords that block signals.

I enjoy cruises, and my grandparents enjoy inviting us youngsters up into the mountains of the northeast, and so I spend what is probably, for someone of my generation, a disproportionate amount of time disconnected from digital life. For most of my life, this was an annoyance, but not a problem, mostly because my parents handled anything important enough to have serious consequences, but partially because, if not before social media, then at least before smartphones, being unreachable was a perfectly acceptable and even expected response to attempts at contact.

Much as I still loathe the idea of a phone call, and will in all cases prefer to text someone, the phone call, even unanswered, did provide a level of closure that an unanswered text message simply doesn’t. Even if you got the answering machine, it was clear that you had done your part, and you could rest easy knowing that they would call you back at their leisure; or if it was urgent, you kept calling until you got them, or it became apparent that they were truly unreachable. There was no ambiguity about whether you had talked to them or not; whether your message had really reached them and they were acting on it, or you had only spoken to a machine.

Okay, sure, there was some ambiguity. Humans have a way of creating ambiguity and drama through whatever medium we use. But these were edge cases, rather than, as with text messages, something that seems like a design feature. And I think this paradigm shift is about more than just the technology. Even among asynchronous means, we have seen a shift in expectations.

Take the humble letter, the format to which we analogize our modern instant messages (and more directly, e-mail) most frequently and easily. Back in the day when letters were a default means of communication, writing one was an action undertaken on the part of the sender, and a thing that happened to the receiver. Responding to a letter by mail was polite where appropriate, but not compulsory. This much the format shares with our modern messages.

But unlike our modern systems, with a letter it was understood that when it arrived, it would be received, opened, read, and replied to all in due course, in the fullness of time, when it was practical for the recipient, and not a moment sooner. To expect a recipient to find a letter, tear it open then and there, and drop everything to write out a full reply at that moment, before rushing it off to the post office was outright silly. If a recipient had company, it would be likely that they would not even open the letter until after their business was concluded, unlike today, where text messages are read and replied to even in the middle of conversation.

Furthermore, it was accepted that a reply, even to a letter of some priority, might take several days to compose, redraft, and send, and it was considered normal to wait until one had a moment to sit down and write out a proper letter, for which one was always sure to have something meaningful to say. Part of this is an artifact of classic retrospect, thinking that in the olden days people knew the art of conversation better, and much of what isn’t is a consequence of economics. Letters cost postage, while today text messaging is often included in phone plans, and in any case social media offers suitable replacements for free.

Except that, for a while at least, the convention held in online spaces too. Back in the early days of email, back when it was E-mail (note the capitalization and hyphenation), and considered a digital facsimile of postage rather than a slightly more formal text message, the accepted convention was that you would sit down to your email, read it thoroughly, and compose your response carefully and in due course, just as you would on hard-copy stationery. Indeed, in our online etiquette classes*, we were told as much. Our instructors made clear that it was better to take time in responding to queries with a proper reply than to get back with a mere one or two sentences.

*Yes, my primary school had online etiquette classes, officially described as “netiquette courses”, but no one used that term except ironically. The courses were instituted after a scandal in parliament, first about students’ education being outmoded in the 21st century, and second about innocent children being unprepared for the dangers of the web, where, as we all know, ruffians and thugs lurk behind every URL. The curriculum was outdated the moment it was made, and it was discontinued only a few years after we finished the program, but aside from that, and a level of internet paranoia that made Club Penguin look laissez-faire, it was helpful and accurately described how things worked.

In retrospect, I think this training helps explain a lot of the anxieties I face with modern social media, and the troubles I have with text messages and email. I am acclaimed by others as an excellent writer and speaker, but brevity is not my strong suit. I can cut a swathe through paragraphs and pages, but I stumble over sentences. When I sit down to write an email, and I do, without fail, actually sit down to do so, I approach the matter with as much gravity as though I were writing with quill and parchment, with all the careful and time-consuming redrafting, and categorical verbosity that the format entails.

But email, and especially text messages, are not the modern reincarnation of the bygone letter, nor even of the postcard, with its shorter format and reduced formality. Aside from a short length matched in history perhaps only by the telegram, the modern text message has nearly totally forgone not only the trappings of all previous formats, but, it seems, the trappings of form altogether.

Text messages seem to have become accepted not as a form of correspondence so much as an avenue of ordinary conversation. Except this is a modern romanticization of text messages. Because while text messages might well be the closest textual approximation of a face-to-face conversation that doesn’t involve people actually speaking simultaneously, they are still not a synchronous conversation.

More important than the associated pleasantries of the genre, text messages work on an entirely different timescale than letters. Where once, with a letter, it might be entirely reasonable for a reply to take a fortnight, nowadays a delay of more than a single day in responding to a text message between friends is cause for concern and anxiety.

And if it were really a conversation, if two people were conversing in person, or even over the phone, and one person without apparent reason failed to respond to the other’s prompts for a prolonged period, this would indeed be cause for alarm. But even ignoring the obvious worry that I would feel if my friend walking alongside me in the street suddenly stopped answering me, in an ordinary conversation, the tempo is an important, if underrated, form of communication.

To take an extreme example, suppose one person asks another to marry them. What does it say if the other person pauses? If they wait before answering? How is the first person supposed to feel then, compared to receiving an immediate and enthusiastic response? We play this game all the time in spoken conversation, drawing out words or spacing out sentences, punctuating paragraphs to illustrate our point in ways that are not easily translated to text; at least, not without the advantage of being able to space out one’s entire narrative in a longform monologue.

We treat text messages less like correspondence and more like conversation, but have failed to account for the effects of asynchronicity on tempo. It is too easy to infer something that was not meant from gaps in messages; to interpret a failure to respond as a deliberate act, to mistake slow typing for an intentional dramatic pause, and so forth.

I am in the woods this week, which means I am effectively cut off from communication with the outside world. For older forms of communication, this is not very concerning. My mail will still be there when I return, and any calls to the home phone will be logged and recorded to be returned at my leisure. Those who sent letters, or reached an answering machine know, or else can guess, that I am away from home, and can rest easy knowing that their missives will be visible when I return.

My text messages and email inbox, on the other hand, concern me, because of the very real possibility that someone will contact me thinking I am reading messages immediately, since my habit of keeping my phone within arm’s reach at all times is well known, and will interpret my failure to respond as a deliberate snub, when in reality I am simply out of cell service. Smartphones and text messages have become so ubiquitous and accepted that we seem to have silently arrived at the convention that shooting off a text message to someone is as good as calling them, on the phone or even in person. Indeed, we say it is better, because text messages give the recipient the option of postponing a reply. Yet we all quietly judge those who take time to respond, read all the social signals of a sudden conversational pause into the interim, and decry those who use text messages to write monologues.

I’ll say it again, because it bears repeating after all the complaints I’ve given: I like text messages, and I even prefer them as a communication format. I even like, or at least tolerate, social media messaging platforms, despite having lost my appreciation for social media as a whole. But I am concerned that we, as a society, and as the first generation to really build the digital world into the foundations of our lives, are setting ourselves up for failure in our collective treatment of our means of communication.

When we fail to appreciate the limits of our technological means, and as a result, fail to create social conventions that are realistic and constructive, we create needless ambiguity and distress. When we assign social signals to pauses in communication that as often as not have more to do with the manner of communication than the participants or their intentions, we do a disservice to ourselves and others. We may not mention it aloud, we may not even consciously consider it, but it lingers in our attitudes and impressions. And I would wager that soon enough we will see a general rise in anxiety and ill will towards others.

Personal Surveillance – Part 2

This is the second installment in a multi-part series entitled Personal Surveillance. To read the other parts once they become available, click here.


Our modern surveillance system is not the totalitarian paradigm foreseen by Orwell, but a decentralized and, in the strictest sense, voluntary, though practically compulsory, network. The means are different, but the end, a society with total insight into the very thoughts of its inhabitants, is the same.

Which brings me to last week. Last week, I was approached by a parent concerned about the conduct of her daughter. Specifically, her daughter has one of the same diagnoses I do, and had been struggling awfully to keep to her regimen, and suffering as a result. When I was contacted, the daughter had just been admitted to the hospital to treat the acute symptoms and bring her back from the brink. This state of affairs is naturally unsustainable, in both medical and epistemological terms. I was asked if there was any advice I could provide, from my experience of dealing with my own medical situation as a teenager, and of working closely with other teenagers and young adults.

Of course, the proper response depends inextricably upon the root cause of the problem. After all, treating what may be a form of self-harm (which, intentional or not, has been noted to be endemic among adolescents who have to execute their own medical regimen), or some other mental illness, with the kind of disciplinary tactics that might suit the more ordinary teenage rebellion and antipathy would be not only ineffective and counterproductive, but dangerous. There are myriad potential causes, many of which are mutually exclusive, all of which require different tactics, and none of which can be ruled out without more information.

I gave several recommendations, including the one I have been turning over in my head since. I recommended that this mother look into her daughter’s digital activities; into her social media, her messages, and her browser history. I gave the mother a list of things to look out for: evidence of bullying online or at school, signs that the daughter had been browsing sites linked to mental illness, in particular eating disorders and depression, messages to her friends complaining about her illness or medical regimen, or even a confession that she was willfully going against it. The idea was to get more information to contextualize her actions, in the hope that this would help her parents help her.

After reflecting for some time, I don’t feel bad about telling the mother to look through private messages. The parents are presumably paying for the phone, and it’s generally accepted that parents have some leeway to meddle in children’s private lives, especially when it involves medical issues. What bothers me isn’t any one line being crossed. What bothers me is this notion of looking into someone’s entire life like this.

That is, after all, the point here. The mother is trying to pry into her daughter’s whole life at once, into her mind, to figure out what makes her tick, why she does what she does, and what she is likely to do in the future. Based on the information I was provided, it seemed justified; even generous. As described, the daughter’s behavior towards her health is at best negligent, and at worst suggests she is unstable and a danger to herself. The tactics described, sinister though they are, are still preferable to bringing down the boot-heel of discipline or committing her to psychiatric care when neither may be warranted.

This admittedly presupposes that intervention is necessary at all, in effect presuming guilt. In this instance, it was necessary, because the alternative of allowing the daughter to continue her conduct, which was, intentional or not, causing medical harm and had led to her hospitalization, was untenable. At least, based on the information I had. But even that information was certainly enough to warrant grave concern, if not enough to decide on a course of action.

The goal, in this case, was as benevolent as possible: to help the daughter overcome whatever it was that landed her in this crisis in the first place. Sometimes such matters truly are a matter of doing something “for their own good”. But such matters have to be executed with the utmost kindness and open-mindedness. Violating someone’s privacy may or may not be acceptable under certain circumstances, but certainly never for petty vendettas.

It would not, for example, be acceptable for the mother to punish the daughter for an unkind comment made to a friend regarding the mother. Even though this might suggest that some discipline is in order to solve the original problem, since, without other evidence to the contrary, it suggests a pattern of rebellion that could reasonably be extrapolated to include willful disobedience of one’s medical regimen, such discipline needs to be meted out for the original violation, not for one that was only discovered because of this surveillance.

Mind you, I’m not just talking through my hat here. This is not just a philosophical notion, but a legal one as well. The Fifth Amendment, and more broadly the protections against self-incrimination, are centered on protecting the core personhood, a person’s thoughts and soul, from what is known as inquisitorial prosecution. Better scholars than I have explained why this cornerstone is essential to our understanding of justice and morality, but, to quickly summarize: coercing a person by using their private thoughts against them deprives them of the ability to make their own moral choices, and destroys the entire notion of rights, responsibilities, and justice.

Lawyers will be quick to point out that the Fifth Amendment as written doesn’t apply here per se (and as a matter of law, they’d be right). But we know that our own intention is to look into the daughter’s life as a whole, her thoughts and intentions, which is a certain kind of self-incrimination, even if you would be hard pressed to write a law around it. We are doing this not to find evidence of new wrongs to right, but to gain context which is necessary for the effective remedy of problems that are already apparent, that were already proven. By metaphor: we are not looking to prosecute the drug user for additional crimes, but to complete rehabilitation treatment following a previous conviction.

In government, the state can circumvent the problems posed to fact-finding by the Fifth Amendment by granting immunity to the testifying witness, so that anything they say cannot be used against them, as though they had never said it, neutralizing self-incrimination. In our circumstances, it is imperative that the information gathered only be used as context for the behaviors we already know about. I tried to convey this point in my recommendations to the mother in a way that also avoided implying that I expected she would launch an inquisition at the first opportunity.

Of course, this line of thinking is extremely idealistic. Can a person really just ignore a social taboo, or minor breach, and carry on unbiased and impartial in digging through someone’s entire digital life? Can that person who has been exposed to everything the subject has ever done, but not lived any of it, even make an objective judgment? The law sweeps this question under the rug, because it makes law even more of an epistemological nightmare than it already is, and in practical terms probably doesn’t matter unless we are prepared to overhaul our entire constitutional system. But it is a pertinent question for understanding these tactics.

The question of whether such all-inclusive surveillance of our digital lives can be thought to constitute self-incrimination cannot be answered in a blog post, and is unlikely to be settled in the foreseeable future. The generation which is now growing up, which will eventually have grown up with nothing else but the internet, will, I am sure, be an interesting test case. It is certainly not difficult to imagine that with all the focus on privacy and manipulation of online data that we will see a shift in opinions, so that parts of one’s online presence will be thought to be included as part of one’s mind. Or perhaps, once law enforcement catches up to the 21st century, we will see a subtle uptick in the efficacy of catching minor crimes and breaches of taboo, possibly before they even happen.

Personal Surveillance – Part 1

This is the first installment in a multi-part series entitled Personal Surveillance. To read the other parts once they become available, click here.


George Orwell predicted, among many other things, a massive state surveillance apparatus. He wasn’t wrong; we certainly have that. But I’d submit that it’s also not the average person’s greatest threat to privacy. There’s the old saying that the only thing protecting citizens from government overreach is government inefficiency, and in this case there’s something to that. Surveillance programs are terrifyingly massive in their reach, but simply aren’t staffed well enough to parse everything. This may change as algorithms become more advanced in sifting through data, but at the moment, we aren’t efficient enough to have a thought police.

The real danger to privacy isn’t what a bureaucrat is able to pry from an unwilling suspect, but what an onlooker is able to discern about an average person without any special investigative tools or legal duress. The average person is generally more at risk from stalkers than from surveillance. Social media is especially dangerous in this regard, and the latest scandals surrounding Cambridge Analytica et al. are a good example of how social media can be used for nefarious purposes.

Yet despite lofty and varied criticism, I am willing to bet on the overall conclusion of this latest furor: the eventual consensus will be that, while social media may be at fault, its developers are not guilty of intentional malice, but rather of pursuing misaligned incentives, combined with an inability, whether through laziness or a failure to grasp the complete picture soon enough, to keep up with the accelerating pace at which our lives have become digitized.

Because that is the root problem. Facebook and its ilk started as essentially decentralized contact lists and curated galleries, and Twitter and its facsimiles started as essentially open-ended messaging services, but they have evolved into so much more. Life happens on the Internet nowadays.

In harkening back to the halcyon days before the scandal du jour, older people have called attention to the brief period between the widespread adoption of television and its diversification; the days when there were maybe a baker’s dozen channels. In such times, we are told, people were held together by what was on TV. The political issues of the day were chosen by journalists, and public discourse was shaped almost solely by the way they were presented on those few channels. Popular culture, we are told, was shaped in much the same way, so that there was always a baseline of commonality.

Whether or not this happened in practice, I cannot say. But I think the claim that those were the halcyon days before all this dividing and subdividing is backwards. On the contrary, I would submit that those halcyon days were the beginning of the current pattern, as people began to adapt to the notion that life is a collective enterprise understood through an expansive network. Perhaps that time was still a honeymoon phase of sorts. Or perhaps the nature of this emerging pattern of interconnectedness is one of constant acceleration, like a planet falling into a black hole, slowly, imperceptibly at first, but always getting faster.

But getting back to the original point, in addition to accelerating fragmentation, we are also seeing accelerated sharing of information, which is always, constantly being integrated, woven into a more complete mosaic narrative. Given this, it would be foolish to think that we could be a part of it without our own information being woven into the whole. Indeed, it would be foolish to think that we could live in a world so defined by interconnectedness and not be ourselves part of the collective.

Life, whether we like it or not, is now digital. Social media, in the broadest sense, is the lens through which current events are now projected onto the world, regardless of whether social media was built for, or to withstand, this purpose. Participation is compulsory (that is, under compulsion, if not strictly mandatory) for anyone who wants to be a part of modern public life. And to this point, jealous scrutiny of one’s internet presence is far more powerful than merely collecting biographical or contact information, such as looking one up in an old-fashioned directory.

Yet society has not adapted to this power. We have not adapted to treat social media interactions with the same dignity with which we respect, for example, conversations between friends in public. We recognize that a person following us and listening in while we are in public would be a gross violation of our privacy, even if it might skirt by the letter of the law*. But trawling back through potentially decades of interactions online is, well… we haven’t really formulated a moral benchmark.

This process is complicated by the legitimate uses of social media as a sort of collective memory. As more and more mental labor is offloaded onto the Internet, the ability to call up some detail from several years ago becomes increasingly important. Take birthdays, for example. Hardly anyone nowadays bothers to commit birthdays to memory, and of the people I know, increasingly few keep private records, opting instead to rely on Facebook notifications to send greetings. And what about remembering other events, like who was at that great party last year, or the exact itinerary of last summer’s road trip?

Human memory fades, even more quickly now that we have machines to consult and no longer have to exercise our own powers of recall. Trawling through a close friend’s feed in order to find the picture of the both of you from Turks and Caicos, so that you can get it framed as a present, is a perfectly legitimate, even beneficial, use of their otherwise private, even intimate, data, which would hardly be possible if that data were not available and accessible. The modern social system, our friendships, our jobs, our leisure, relies on this accelerating flow of information. To invoke one’s privacy even on a personal level seems now to border on the antisocial.

Technological Milestones and the Power of Mundanity

When I was fairly little, probably seven or so, I devised a short list of technologies based on what I had seen on television that I reckoned were at least plausible, and which I earmarked as milestones of sorts to measure how far human technology would progress during my lifetime. I estimated that if I was lucky, I would be able to have my hands on half of them by the time I retired. Delightfully, almost all of these have in fact already been achieved, less than fifteen years later.

Admittedly, all of the technologies I picked were far closer than I had envisioned at the time. Living in Australia, which seemed to be the opposite side of the world from where everything happened, and living outside the truly urban areas of Sydney which, as a consequence of international business, were kept up to date, it often seems that even though I technically grew up after the turn of the millennium, I was raised in a place and culture that was closer to the 90s.

For example, as late as 2009, even among adults, not everyone I knew had a mobile phone. Text messaging was still “SMS”, and was generally regarded with suspicion and disdain, not least of all because not all phones were equipped to handle them, and not all phone plans included provisions for receiving them. “Smart” phones (still two words) did exist on the fringes; I knew exactly one person who owned an iPhone, and two who owned a BlackBerry, at that time. But having one was still an oddity. Our public school curriculum was also notably skeptical, bordering on technophobic, about the rapid shift towards broadband and constant connectivity, diverting much class time to decrying the evils of email and chat rooms.

These were the days when it was a moral imperative to turn off your modem at night, lest the hacker-perverts on the godless web wardial a backdoor into your computer, which weighed as much as the desk it was parked on, or your computer overheat from being left on, and catch fire (this happened to a friend of mine). Mice were wired and had little balls inside them that you could remove in order to sabotage them for the next user. Touch screens might have existed on some newer PDA models, and on some gimmicky machines in the inner city, but no one believed that they were going to replace the workstation PC.

I chose my technological milestones based on my experiences in this environment, and on television. Actually, since most of our television was the same shows that played in the United States, only a few months behind their stateside premiere, they tended to be more up to date with the actual state of technology, and depictions of the near future which seemed obvious to an American audience seemed terribly optimistic and even outlandish to me at the time. So, in retrospect, it is not surprising that after I moved back to the US, I saw nearly all of my milestones become commercially available within half a decade.

Tablet Computers
The idea of a single-surface interface for a computer dates back in the popular consciousness almost as far as futuristic depictions of technology itself. It was an obvious technological niche that, despite numerous attempts, some semi-successful, was never truly cracked until the iPad. True, plenty of tablet computers existed before the iPad. But these were either clunky beyond use, so fragile as to be unusable in practical circumstances, or horrifically expensive.

None of them were practical for, say, completing homework for school on, which at seven years old was kind of my litmus test for whether something was useful. I imagined that if I were lucky, I might get to go tablet shopping when it was time for me to enroll my own children. I could not imagine that affordable tablet computers would be widely available in time for me to use them for school myself. I still get a small joy every time I get to pull out my tablet in a productive niche.

Video Calling
Again, this was not a bolt from the blue. Orwell wrote about his telescreens, which amounted to two-way television, in the 1940s. By the 70s, NORAD had developed a fiber-optic based system whereby commanders could conduct video conferences during a crisis. By the time I was growing up, expensive and clunky video teleconferences were possible. But they had to be arranged and planned, and often required special equipment. Even once webcams started to appear, lessening the equipment burden, you were still often better off calling someone.

Skype and FaceTime changed that, spurred on largely by the appearance of smartphones, and later tablets, with front-facing cameras, which were designed largely for this exact purpose. Suddenly, a video call was as easy as a phone call; in some cases easier, because video calls are delivered over the Internet rather than requiring a phone line and number (something which I did not foresee).

Wearable Technology (in particular smartwatches)
This was the one I was most skeptical of, as I got it mostly from The Jetsons, a show which isn’t exactly renowned for realism or accuracy. An argument can be made that this threshold hasn’t been fully crossed yet, since smartwatches are still niche products that haven’t caught on to the same extent as either of the previous items, and insofar as they can be used for communication like in The Jetsons, they rely on a smartphone or other device as a relay. This is a solid point, to which I have two counterarguments.

First, these are self-centered milestones. The test is not whether an average Joe can afford and use the technology, but whether it has an impact on my life. And my smartwatch, which is functional enough for me to use in an everyday role, does indeed have a noticeable positive impact. Second, while smartwatches may not be as ubiquitous as once portrayed, they do exist, and are commonplace enough to be largely unremarkable. The technology exists and is widely available, whether or not consumers choose to use it.

These were my three main pillars of the future. Other things which I marked down include such milestones as:

Commercial Space Travel
Sure, SpaceX and its ilk aren’t exactly the same as having shuttles to the ISS departing regularly from every major airport, with connecting service to the moon. You can’t have a romantic dinner rendezvous in orbit, gazing at the unclouded stars on one side, and the fragile planet earth on the other. But we’re remarkably close. Private sector delivery to orbit is now cheaper and more ubiquitous than public sector delivery (admittedly this has more to do with government austerity than an unexpected boom in the aerospace sector).

Large-Scale Remotely Controlled or Autonomous Vehicles
This one came from Kim Possible, and a particular episode in which our intrepid heroes got to their remote destination by a borrowed military helicopter flown remotely from a home computer. Today, we have remotely piloted military drones, and early self-driving vehicles. This one hasn’t been fully met yet, since I’ve never ridden in a self-driving vehicle myself, but it is on the horizon, and I eagerly await it.

Cyborgs
I did guess that we’d have technologically altered humans, both for medical purposes, and as part of the road to the enhanced super-humans that rule in movies and television. I never guessed at seven that in less than a decade I would be one of them, relying on networked machines and computer chips to keep my biological self functioning, plugging into the wall to charge my batteries when they run low, and studiously avoiding magnets, EMPs, and water unless I have planned ahead and am wearing the correct configuration and armor.

This last one highlights an important factor. All of these technologies were, or at least, seemed, revolutionary. And yet today they are mundane. My tablet today is only remarkable to me because I once pegged it as a keystone of the future that I hoped would see the eradication of my then-present woes. This turned out to be overly optimistic, for two reasons.

First, it assumed that I would be happy as soon as the things that bothered me then no longer did, which is a fundamental misunderstanding of human nature. Humans do not remain happy the same way that an object in motion remains in motion until acted upon. Or perhaps it is that, as creatures of constant change and recontextualization, we are always undergoing so much change that remaining happy without constant effort is exceedingly rare. Humans always find more problems that need to be solved. On balance, this is a good thing, as it drives innovation and advancement. But it makes living life as a human rather, well, wanting.

Which lays the groundwork nicely for the second reason: novelty is necessarily fleeting. The advanced technology that today marks the boundary of magic will tomorrow be a mere gimmick, and after that, a mere fact of life. Computers hundreds of millions of times more powerful than those used to wage World War II and send men to the moon are so ubiquitous that they are considered a basic necessity of modern life, like clothes, or literacy; both of which have millennia of incremental refinement and scientific striving behind them in their own right.

My picture of the glorious shining future assumed that the things which seemed amazing at the time would continue to amaze once they had become commonplace. This isn’t a wholly unreasonable extrapolation from available data, even if it is childishly optimistic. Yet it is self-contradictory. The only way that such technologies could be harnessed to their full capacity would be to have them become so widely available and commonplace that it would be conceivable for product developers to integrate them into every possible facet of life. This both requires and establishes a certain level of mundanity about the technology that will eventually break the spell of novelty.

In this light, the mundanity of the technological breakthroughs that define my present life, relative to the imagined future of my past self, is not a bad thing. Disappointing, yes; and certainly it is a sobering reflection on the ungrateful character of human nature. But this very mundanity that breaks our predictions of the future (or at least, our optimistic predictions) is an integral part of the process of progress. Not only does this mundanity constantly drive us to reach for ever greater heights by making us utterly irreverent of those we have already achieved, but it allows us to keep evolving our current technologies to new applications.

Take, for example, wireless internet. I remember a time, or at least a place, when wireless internet did not exist for practical purposes. “Wi-Fi” as a term hadn’t caught on yet; in fact, I remember the publicity campaign that was undertaken to educate our technologically backwards selves about what the term meant, about how it wasn’t dangerous, and about how it would make all of our lives better, since we could connect to everything. Of course, at that time I didn’t know anyone outside of my father’s office who owned a device capable of connecting to Wi-Fi. But that was beside the point. It was the new thing. It was a shiny, exciting novelty.

And then, for a while, it was a gimmick. Newer computers began to advertise their Wi-Fi antennae, boasting that it was as good as being connected by cable. Hotels and other establishments began to advertise Wi-Fi connectivity. Phones began to connect to Wi-Fi networks, which allowed phones to truly connect to the internet even without a data plan.

Soon, Wi-Fi became not just a gimmick, but a standard. First computers, then phones, without internet connectivity began to become obsolete. Customers began to expect Wi-Fi as a standard accommodation wherever they went, for free even. Employers, teachers, and organizations began to assume that the people they were dealing with would have Wi-Fi, and that therefore everyone in the house would have internet access. In ten years, the prevailing attitude around me went from “I wouldn’t feel safe having my kid playing in a building with that new Wi-Fi stuff” to “I need to make sure my kid has Wi-Fi so they can do their schoolwork”. Like television, telephones, and electricity, Wi-Fi became just another thing that needed to be had in a modern home. A mundanity.

Now, that very mundanity is driving a second wave of revolution. The “Internet of Things”, as it is being called, is using the Wi-Fi networks already in place in every modern home to add more niche devices and appliances. We are told to expect that soon every major device in our house will be connected to our personal network, controllable either from our mobile devices, or even by voice, and soon gesture, if not through the devices themselves, then through artificially intelligent home assistants (Amazon Echo, Google Home, and the like).

It is important to realize that this second revolution could not take place while Wi-Fi was still a novelty. No one who wouldn’t otherwise buy into Wi-Fi at the beginning would have bought it because it could also control the sprinklers, or the washing machine, or what have you. Wi-Fi had to become established as a mundane building block in order to be used as the cornerstone of this latest innovation.

Research and development may be focused on the shiny and novel, but technological progress on a species-wide scale depends just as much on this mundanity. Breakthroughs have to be not only helpful and exciting, but useful in everyday life, and cheap enough to be usable by everyday consumers. It is easy to get swept up in the exuberance of what is new, but the revolutionary changes happen when those new things are allowed to become mundane.

Incremental Progress Part 2 – Innovation Fatigue

This is part two of a multi-part perspective on patient engagement in charity and research. Though not strictly required, it is strongly recommended that you read part one before continuing.


The vague pretense of order in the conversation, created by the presence of the few convention staff members, broke all at once, as several dozen eighteen-to-twenty-one-year-olds all rushed to get in their two cents on the topic of fundraising burnout (see previous section). Naturally, this was precisely the moment when I struck upon what I wanted to say. The jumbled thoughts and feelings that had hinted at something to add while other people were talking suddenly crystallized into a handful of points I wanted to make, all clustered around a phrase I had heard a few years earlier.

Not one to interrupt someone else, and also wanting undivided attention when making my point, I attempted to wait until the cacophony of discordant voices became more organized. And, taking my cue from similar moments earlier in my life when I had something I wished to contribute before a group, I raised my hand and waited for silence.

Although the conversation was eventually brought back under control by some of the staff, I never got a chance to make my points. The block of time we had been allotted in the conference room ran out, and the hotel staff were anxious to get the room cleared and organized for the next group.

And yet, I still had my points to make. They still resonated within me, and I honestly believed that they might be both relevant and of interest to the other people who were in that room. I took out my phone and jotted down the two words which I had pulled from the depths of my memory: Innovation Fatigue.

That phrase has actually come to mean several different things to different groups, and so I shall spend a moment on etymology before moving forward. In research groups and think tanks, the phrase is essentially a stand-in for generic mental and psychological fatigue. In the corporate world, it means a phenomenon of diminishing returns on creative, “innovative” projects, which often comes about as a result of attempts to force “innovation” on a regular schedule. More broadly in this context, the phrase has come to mean an opposition to “innovation” when used as a buzzword similar to “synergy” and “ideate”.

I first came across this term in a webcomic of all places, where it was used in a science fiction context to explain why the society depicted, which has advanced technology such as humanoid robots, neurally integrated prostheses, luxury commercial space travel, and artificial intelligence, is so similar to our own, at least culturally. That is to say, technology continues to advance at the exponential pace that it has across recorded history, but in a primarily incremental manner, and therefore most people, either out of learned complacency or a psychological defense mechanism to avoid constant hysteria, act as though all is as it always has been, and are not impressed or excited by the prospects of the future.

In addition to the feeling of fundraising burnout detailed in part one, I often find that I suffer from innovation fatigue as presented in the comic, particularly when it comes to medical research that ought to directly affect my quality of life, or promises to in the future. And what I heard from other patients during our young adult sessions has led me to believe that this is a fairly common feeling.

It is easy to be pessimistic about the long term outlook with chronic health issues. Almost by definition, the outlook is worse than average, and the nature of human biology is such that the long term outlook is often dictated by the tools we have today. After all, even if the messianic cure arrives perfectly on schedule in five to ten years (for the record, the cure has been ten years away for the last half-century), that may not matter if things take a sharp turn for the worse six months from now. Everyone already knows someone for whom the cure came too late. And since the best way to predict future results, we are told, is from past behavior, it would be accurate to say that no serious progress is likely to be made before it is too late.

This is not to say that progress is not being made. On the contrary, scientific progress is continuous and universal across all fields. Over the past decade alone, there has been consistent, exponential progress not only in quality of care and quality of health outcomes, but in quality of life. Disease, where it is not less frequent, is at least less impactful. Nor is this progress being made in secret. Indeed, amid all the headlines about radical new treatment options, it can be easy to forget that the diseases they are made to treat still have a massive impact. And this is precisely part of the problem.

To take an example familiar to a wider audience, consider cancer. It seems that in a given week, there is at least one segment on the evening TV news about some new treatment, early detection method, or some substance or habit to avoid in order to minimize one’s risk. Sometimes these segments play every day, or even multiple times per day. When I checked my online news feed, one of every four stories concerned improvements in the state of cancer care; to be precise, one was a list of habits to avoid, while another was about a “revolutionary treatment [that] offers new hope to patients”.

If you had just been diagnosed with cancer, you would be forgiven for thinking that, with all this seemingly daily progress, the path forward would be relatively simple and easy to understand. And it would be easy for one who knows nothing else to get the impression that cancer treatment is fundamentally easy nowadays. This is obviously untrue, or at least, grossly misleading. Even as cancer treatments become more effective and better targeted, the impact to life and lifestyle remains massive.

It is all well and good to be optimistic about the future. For my part, I enjoy tales about the great big beautiful tomorrow shining at the end of the day as much as anyone. Inasmuch as I have a job, it is talking to people about new and exciting innovations in their medical field, and how they can best take advantage of them as soon as possible for the least cost possible. I don’t get paid to do this; I volunteer because I am passionate about keeping progress moving forward, and because some people have found that my viewpoint and manner of expression are uniquely helpful.

However, this cycle of minor discoveries, followed by a great deal of public overstatement and media excitement, which never (or at least, so seldom as to appear never) quite lives up to the hype, is exhausting. Active hoping, in the short term, as distinct from long term hope for future change, is acutely exhausting. Moreover, the routine of having to answer every minor breakthrough with some statement to interested, but not personally-versed friends and relations, who see media hyperbole about (steps towards) a cure and immediately begin rejoicing, is quite tiring.

Furthermore, these almost weekly interactions, in addition to carrying all of the normal pitfalls of socio-familial transactions, have a unique capability to color the perceptions of those who are closest to oneself. The people who are excited about these announcements because they know, or else believe, that they represent an end, or at least a decrease, to one’s medical burden, are often among those whom one wishes least to alienate with casual pessimism.

For indeed, failing to respond with appropriate zeal to each and every announcement does lead to public branding of pessimism, even depression. Or worse: it suggests that one is not taking all appropriate actions to combat one’s disease, and therefore is undeserving of sympathy and support. After all, if the person on the TV says that cancer is curable nowadays, and your cancer hasn’t been cured yet, it must be because you’re not trying hard enough. Clearly you don’t deserve my tax dollars and donations to fund your treatment and research. After all, you don’t really need it anymore. Possibly you are deliberately causing harm to yourself, and therefore are insane, and I needn’t listen to anything you say to the contrary. Hopefully, it is easy to see how frustrating this dynamic can become, even when it is not exaggerated quite to the point of satire.

One of the phrases that I heard being repeated a lot at the conference was “patient investment in research and treatment”. When patients aren’t willing to invest emotionally and mentally in their own treatment, in their own wellbeing, the problems are obvious. To me, the cause, or at least one of the causes, is equally obvious. Patients aren’t willing to invest because it is a risky investment. The up-front cost of pinning all of one’s hopes and dreams for the future on a research hypothesis is enormous. The risk is high, as anyone who has studied the economics of research and development knows. Payouts aren’t guaranteed, and when they do come, they will be incremental.

Patients who aren’t “investing” in state of the art care aren’t doing so because they don’t want to get better care. They aren’t investing because they either haven’t been convinced that it is a worthwhile investment, or are emotionally and psychologically spent. They have tried investing, and have lost out. They have developed innovation fatigue. Tired of incremental progress which does not offer enough payback to earnestly plan for a better future, they turn instead to what they know to be stable: the pessimism here and now. Pessimism isn’t nearly as shiny or enticing, and it doesn’t offer the slim chance of an enormous payout, but it is reliable and predictable.

This is the real tragedy of disability, and I am not surprised in the slightest that, now that sufficient treatments have been discovered to enable what amounts to eternally repeatable stopgaps, but not a full cure, researchers, medical professionals, and patients themselves have begun to encounter this problem. The incremental nature of progress, the hyperbolic nature of popular media, and the basic nature of humans in society amplify this problem and cause it to concentrate and calcify into the form of innovation fatigue.

A Few Short Points

1: Project Crimson Update

Look, let’s get this out of the way: I’m pretty easy to excite and amuse. Give me something for free, and it’ll make my day. Give me some item that I can use in my normal routine, and it will make my week. So far, my free trial of YouTube Red / Google Play Music has hit all of these buttons, which is good, because it goes a long way towards assuaging my parentally-instilled aversion to ever parting with my credit card number for any reason whatsoever.

Last week, I mentioned that this had been something that I had been considering. Today, after receiving my new iPhone SE in the mail, I decided to pull the trigger. In my research, I actually managed to find a slightly better deal; four months’ free trial instead of three, with the same terms and conditions, through a referral code from another tech blog. Technically my trial is with Google Play Music, but seeing as it has the same price as YouTube Red, and gets YouTube Red thrown in for free (it also works the same way the other way around; Red subscribers get Google Play Music for free as part of their subscription), the distinction is academic in my case, and only matters in other countries where different laws govern music and video services, forcing the split.

With about six hours of experience behind me, I can say that I am quite pleased with the results so far. I certainly wouldn’t recommend it as a universal necessity, and I wouldn’t recommend it for anyone trying to keep an especially tight budget. Ten dollars a month isn’t nothing, and the way everything seems to be set up to renew automatically makes it deceptively easy to simply keep paying. Moreover, because purchasing YouTube Red requires a Google payment account, it removes one more psychological barrier to spending more in the future.

In my case, I determined that the cost, at least over the short and medium term, would be justified, because without being able to download my YouTube playlists to my devices for offline consumption, I would wind up spending more purchasing the same songs for downloads. But unless you really listen to a wide variety of songs, or particularly obscure songs, on a daily basis, this will probably be a wash.

There is another reason why I expect this cost to be financially justified. Because I have fallen into the habit of needing a soundtrack for all of my activities, my data usage rates have gone through the roof. I maintain that much of this is a result of the iPhone’s use of cellular data to supplement egregiously slow wifi (the only kind of wifi that exists in my household, and at hotspots around town), and hence, not really my fault. I can’t disable this setting, because my phone is used to send life support data to and from the cloud to help keep me alive.

Financial details aside, I am enjoying my trial so far. Being able to listen to music while using other apps, and without advertisements, has been a great convenience, and has already helped my battery life. I have briefly perused the selection of exclusive subscriber content, and most of it falls into the category of “vaguely interesting, and probably amusing, but mostly not the kind of thing I’d set aside time to watch”. This is, perhaps interestingly, the same category into which most television series and movies also fall.

2: Give me money, maybe.

In tangentially related news, I am giving serious thought to starting a Patreon page, which would allow people to give me money for creating stuff. It’s basically an internet tip jar. Not because I feel that I need money to continue inflicting my opinions on the world. Rather, because I’ve been working on a “short” (in the sense that eighty thousand words is short) story, which my friends have been trying to convince me to serialize and post here. It’s an interesting idea, and one that has a certain lure to it.

Even with my notions of someday writing a novel, this story isn’t the kind of thing that I’d seek to publish in book format, at least not until after I’ve already broken into publishing. I’m already writing this story, so the alternative is it sitting on my hard drive until something happens to it. Even if the number of people who like it is in the single digits, it costs me nothing (except maybe a bit of bruised ego that my first creation isn’t a runaway hit). And there’s always the outside chance that I might exceed my own expectations.

So me branching into fiction on this blog is looking more and more like a serious possibility. But, if I’m going to do this, I want to do it right. Committing to writing a serial story online means committing to following through with plots and characters to a satisfactory conclusion. On the sliding scale between “writing for personal entertainment” and “writing as a career”, writing a web serial inches closer to the latter than anything I’ve really had to think about until now. This means having the long term infrastructure in place so that I can write sustainably and regularly.

In my case, because I still aim to create things for fun, for free to the public, and on my own terms, this means having the infrastructure to accept crowdfunding donations. I wouldn’t expect to make a living this way. In fact, I’d be amazed if the site hosting would pay for itself. But it would ensure that, on the off chance that by the time I finished this first story a large number of people had found and enjoyed my stuff, or a small but dedicated group had decided they enjoyed my writing enough to support me, I would have all the infrastructure in place to, first of all, gauge what was happening, and second of all, double down on what works.

All of this is, of course, purely hypothetical at this point. Were it to happen, it would require a level of organization that I don’t see happening imminently. Given my summer travel plans, progress on this likely wouldn’t happen until at least mid to late July.

3: Expect Chaos

On a related note, this week kicks off my much anticipated summer travels. I expect that this will be a major test of my more or less weekly, plus or minus a few hours, upload schedule. As a result, it is quite possible that new updates will arrive at chaotic intervals.

Note that I don’t know whether that means more or fewer posts than usual. Sometimes these events leave me with lots of things to say, and so inspire me to write more and release more. On the other hand, as we saw in April, sometimes I come back tired, or even sick, and have to take a few days off.

It is also possible that I will be motivated, but busy, and so may wind up posting pieces that were written a long time ago that haven’t been published for one reason or another. If this last one happens, I will endeavor to leave a note on the post to explain any chronological discrepancies.

Technological Overhaul

As my summer travels draw nearer, and my phone increasingly refuses to do my bidding when I require it, my attention has turned to adopting new components in my technological routine.

First, I need a new phone. While I could hypothetically squeeze another three, or maybe even six, months out of my current one, at some point the temporary savings made by prolonging the inevitable only serve to make my life harder, which is kind of the opposite of the role that a smartphone is supposed to fulfill.

Specifically, I have two problems with my current phone. The first is battery life. I rely on my phone as a foundation on which to organize my medical routine and life support, and so when my phone fails me, things get bad quite quickly. While I’m not the most active person, I do need my phone to be capable of going eighteen hours on a single charge without dying. I don’t think this is totally unreasonable, given that it was the standard that my phone held to when I first got it. But with time and use, the time a full charge lasts has slowly diminished to the point where I am only scraping by if I give my phone a midday top-off.

The second problem is memory. Admittedly, this is at least partially self-inflicted, as I thought at the time that I got my phone that I would replace it in a year or so. But then the iPhone 6 line turned out to be not what I was looking for (my 5S is already cramped in my jeans pocket, so anything bigger is a problem), and I couldn’t bring myself to buy a new phone that wasn’t the newest, with the fastest chip, and so on. Subsequently, we got to a point where, today, if I want to download an app, I have to find another one to delete. Same for podcasts, music, and the like.

I’ve been able to strategically avoid this problem for most of winter into spring primarily by not being away from home wifi and chargers for more than a few days. This doesn’t exactly work for my summer itinerary, however, which includes places that don’t have easy access to streaming and charging, like the woods. This leaves me with a frustrating choice: either I can double down on my current stopgap measures and carry around portable chargers, try to shift major downloads to my iPad (something that would cause disproportionate distress and hardship), and so forth, or I can bite the bullet and switch over to a new phone.

The main thing that has prevented me from making this leap already is the agonizing decision over which new model to pick. Back in the days when all iPhones were essentially identical except for memory, and later, color, it was a relatively simple matter. Now, I have to factor in size, chip, camera, and how much I value having a headphone jack versus how much I value having the newest and shiniest model. Previously I had told myself that I would be content to purchase a newer version of my current phone with a better battery and larger memory. However, committing to carry what is currently the oldest model still on offer as my phone for the next several years is a difficult pill to swallow.

In a related vein, during my usual cost analysis which I conduct for all nonessential purchases, I came to an interesting revelation. The amount which I was prepared to spend in order to ensure that I could still access the same music which I had been streaming from YouTube while offline in the woods would vastly exceed the cost to subscribe to YouTube Red, which, allegedly, would allow me to download playlists to my phone.

Now, I have never tried YouTube Red, or any other paid streaming service. For that matter, neither I nor anyone in my family has ever paid for any kind of media subscription service (aside from paying for TV and Internet, obviously). This approach is viewed as bafflingly backwards by my friends, who are still trying to convince me to move past my grudges against Steam and Netflix. To my household, however, the notion of paying money for something that doesn’t come with some physical good or deed of ownership is absurd. The notion of paying money for something that can be obtained for free is downright heretical. It’s worth noting that a disproportionate share of my family comes from an economics background, academically.

Still, the math is pretty compelling. Much as I might loathe the idea of not owning physical copies of my music (an idea that is quickly becoming reality regardless of my personal behavior), if we assume that my main motivations for purchasing music in the first place are to support creators I like, and to make sure I still have access to their work in those edge cases where direct and constant internet access is untenable, YouTube Red seems, at least on paper, to accomplish both of those goals at a cost which is, if not lower, then at least comparable in the short and intermediate terms. And of course, there is the tangential benefit that I can listen to a far wider variety on a regular basis than if I kept to purchasing music outright.
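For the curious, the back-of-the-envelope version of that math looks something like the sketch below. The per-track price, the playlist size, and the time horizon are my own rough assumptions for illustration (on the order of $1.29 per song and $9.99 per month), not figures pulled from either service’s fine print.

```python
# Rough break-even sketch: buying songs outright vs. subscribing for a few months.
# All numbers are assumptions for illustration, not quotes from any service.

PRICE_PER_SONG = 1.29          # assumed cost to buy one track outright
SUBSCRIPTION_PER_MONTH = 9.99  # assumed monthly subscription price
SONGS_NEEDED_OFFLINE = 60      # assumed size of the playlist I would otherwise buy
MONTHS = 4                     # assumed horizon, e.g. the summer travel window

buy_outright = PRICE_PER_SONG * SONGS_NEEDED_OFFLINE
subscribe = SUBSCRIPTION_PER_MONTH * MONTHS

print(f"Buying outright:            ${buy_outright:.2f}")
print(f"Subscribing for {MONTHS} months: ${subscribe:.2f}")
print("Subscription wins" if subscribe < buy_outright else "Buying wins")
```

Under those assumptions the subscription comes out well ahead over a few months; stretch the horizon far enough, though, and buying outright catches up, which is why I only claim the savings for the short and intermediate terms.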

As if to pounce on this temptation, YouTube has launched a new extended free trial offer: three months instead of the regular one. Naturally, a closer examination of the fine print is in order, but it appears that the only catch is signing up for an automatically renewing subscription. Assuming that this is indeed the case, this may well prove enough to lure me in, at least for the trial period.

The extended free trial has a signup deadline of July 4th, which incidentally is about a week after the deadline by which I will need to have made arrangements for a new phone, or else lump it with my current one for the purposes of my summer travels. At present I am leaning towards the idea that I will move forward with both of these plans under the auspicious title of “Project Crimson”. Though it would be a trial by fire for a new technological routine, the potential benefits are certainly enticing.