Time Flying

No. It is not back to school season. I refuse to accept it. I have just barely begun to enjoy summer in earnest. Don’t tell me it’s already nearly over.

It feels like this summer really flew by. This is always true to an extent, but it feels more pronounced this year, and I’m not really sure how to parse it. I’m used to having time seemingly ambush me when I’m sick, having weeks seem to disappear from my life in a feverish haze, but not when I’m well.

If I have to start working myself back towards productivity, and break my bohemian habit of rising at the crack of noon, then I suppose that summer was worthwhile. I didn’t get nearly as much done as I expected. Near as I can tell, nothing I failed to accomplish was vital in any meaningful respect, but it is somewhat disappointing. I suppose I expected to have more energy to tick things off my list. Then again, the fact that nothing was vital meant that I didn’t really push myself. It wasn’t so much that I tried and failed as I failed to try.

Except I can’t help but think that the reason I didn’t push myself (and that I’m still not pushing myself, despite having a few days left) is, aside from a staunch commitment to avoid overtaxing myself before the school year even begins, a sense that I would have plenty of time later. Indeed, this has been my refrain all season long. And despite this, the weeks and months have sailed by, until, to my alarm and terror, we come upon mid-August, and I’ve barely reached the end of my June checklist.

Some of it is simple procrastination, laziness, and work-shyness, and I’ll own that. I spent a lot of my time this summer downright frivolously, and even in retrospect, I can’t really say I regret it. I enjoyed it, after all, and I can’t really envision a scenario where I would’ve enjoyed it in moderation and gotten more done without the sort of rigid, planned schedules that belie the laid-back hakuna matata attitude that, if I have not necessarily successfully adopted, I have at least taken to using as a crutch in the face of the looming terror of starting college classes.

But I’m not just saying “summer flew by” as an idle excuse to cover my apparent lack of progress. I am genuinely concerned that the summer went by faster than some internal sense of temporal perception says it ought to have, like a step that turned out to be off-kilter from its preceding stairs, causing me to stumble. And while this won’t get me back the lost time, and is unlikely to be something I can fix, even if it is only an internal mental quirk, should I not at least endeavor to be aware of it, in the interest of learning from past mistakes?

So, what’s the story with my sense of time?

One of the conversations I remember most vividly from my childhood was about how long an hour is. It was a sunny afternoon late in the school year, and my mother was picking my brother and me up from school. A friend of mine invited us over for a play*, but my mother stated that we had other things to do and places to be.

*Lexicographical sidenote: I have been made aware that the turn of phrase, “to have a play” may be unique to Australian vocabulary. Its proper usage is similar to “have a swim” or “have a snack”. It is perhaps most synonymous with a playdate, but is more casual, spontaneous, and carries less of a distinctly juvenile connotation.

I had a watch at this point, and I knew when we had to be elsewhere, and a loose idea of the time it took to get between the various places, and so I made a case that we did in fact have time to go over and have a play, and still get to our other appointments. My mother countered that if we did go, we wouldn’t be able to stay long. I asked how long we would have, and she said only about an hour. I considered this, and then voiced my opinion that an hour is plenty of time; indeed more than enough. After all, an hour was an unbearably long time to wait, and so naturally it should be plenty of time to play.

I would repudiate this point of view several months later, while in the hospital. Lying there in my bed, hooked up to machines, my only entertainment was watching the ticking wall clock, and trying to be quiet enough to hear it reverberate through the room. It should, by all accounts, have been soul-crushingly boring. But the entire time I was dwelling on my dread, because I knew that at the top of every hour, the nurses would come and stab me to draw blood. And even if I made it through this time, I didn’t know how many hours I had left to endure, or indeed, to live.

I remember sitting there thinking about how my mother had in fact been right. An hour isn’t that long. It isn’t enough to make peace, or get over fears, or get affairs in order. It’s not enough to settle down or gear up. This realization struck me like a groundbreaking revelation, and when I look back and try to put a finger on exactly where my childhood ended, that moment stands out as a major point.

That, eleven years ago, was the last major turning point; the last time I remember revising my scale for how long an hour, a day, and so on are in the scheme of things. Slowly, as I’ve gotten older, I’ve become more comfortable with longer time scales, but this hasn’t really had a massive effect on my perception.

Over the past half-decade there have been occasions when, being sick and not at full processing capacity, I have seemed to “lose” time as it passed. On other occasions it has been a simple matter of being a homebody, so that the moments when I last saw people, which are in reality some time ago, seem more recent than they were, creating a disconnect. But this has always happened as a consequence of being unwell and disconnected from everyday life. In other situations, time has always seemed to match my expectations, and I have been able to use my expectations and perception to have a more intrinsic sense of when I needed to be at certain places.

In the past few months this perception seems to have degraded. Putting my finger on when this started being a noticeable problem is difficult, because much of the past few months has been spent more or less relaxing, which in my case means sleeping in and ignoring the outside world, which as previously noted does tend to affect my perception of how much time has passed. The first time I recall mentioning that time had passed me by was in May, at a conference. I don’t want to give that one data point too much weight, though, because, for one thing, it was a relatively short break in my routine, for another, it was a new conference with nothing to compare it to, and finally, I was jet lagged.

But I definitely do recall mentioning this feeling during the buildup to, and all throughout, our summer travels. This period, unlike previous outings, is definitely long enough that I can say it doesn’t fall into the category of being a homebody. Something has changed in my perception of time, and my sense of how much time I have to work with before scheduled events is degraded.

So what gives? The research into perception of time falls into the overlap between various fields, and is fraught with myths and pseudoscience. For example, it is commonly held that the perceived passage of time becomes faster with age. But this hypothesis dates back to the 1870s, and while there is some evidence to support a correlation, particularly early in life, the correlation is weak and not linear. Still, the effect is present early in life, and it is plausible that this is part of my problem.

One point that is generally agreed upon in the scientific literature regards the neurochemistry. It seems that the perception of time is mediated by the same mechanisms that regulate our circadian rhythm, specifically dopamine and a handful of other neurotransmitters. Disruptions to these levels cause a corresponding disruption to the sense of time. In particular, it seems that more dopamine causes time to go faster; hence time seeming to pass faster when one is having fun. This would explain why the passage of time over my vacation has seemed particularly egregious, and also why jet lag seems to have such a profound effect on time perception.

Both of these explanations would go a ways towards explaining the sensory discrepancy I find. Another explanation would place the blame on my glasses, since eye movement seems to also be tied to small-scale passage of time. Perhaps, since I started wearing glasses in the last couple of years, my eyes have been squinting less, my internal clock has been running subtly slow ever since, and I am only now starting to notice it.

With the field of time perception research still in its relative infancy, the scientific logic behind these explanations is far from ironclad. But then again, it doesn’t need to be ironclad. For our purposes, the neurobiological mechanisms are almost entirely irrelevant. What matters is that the effect is real, that it isn’t just me, nor is it dangerous, and that there’s nothing I can really do about it other than adapt. After all, going without my glasses and being half-blind, or giving myself injections of neurotransmitters as a means of deterring procrastination, might be a bit overkill.

What matters is that I can acknowledge this change as an effect that will need to be accounted for going forwards. How I will account for it is outside the scope of this post. Probably I will work to be a bit more organized and sensitive to the clock. But what’s important is that this is a known quantity now, and so hopefully I can avoid being caught so terribly off guard next summer.

Works Consulted
Eagleman, David M. “Human Time Perception and Its Illusions.” Current Opinion in Neurobiology, vol. 18, no. 2, 2008, pp. 131–136, doi:10.1016/j.conb.2008.06.002.

Friedman, W. J., and S. M. J. Janssen. “Aging and the Speed of Time.” Acta Psychologica, vol. 134, 2010, pp. 130–141.

Janssen, S. M. J., M. Naka, and W. J. Friedman. “Why Does Life Appear to Speed Up as People Get Older?” Time & Society, vol. 22, no. 2, 2013, pp. 274–290.

Wittmann, M., and S. Lehnhoff. “Age Effects in Perception of Time.” Psychological Reports, vol. 97, 2005, pp. 921–935.

Unreachable

I suspect that my friends think that I lie to them about being unreachable as an excuse to simply ignore them. In the modern world there are only a small handful of situations in which a person genuinely can’t be expected to be connected and accessible.

Hospitals, which used to be a communications dead zone on account of no-cell-phone policies, have largely been assimilated into the civilized world with the introduction of guest WiFi networks. Airplanes are going the same way, although for now WiFi is still a paid commodity, and is expensive enough that it remains a reasonable excuse.

International travel used to be a good excuse, but nowadays even countries that don’t offer affordable and consistent cellular data have WiFi hotspots at cafes and hotels. The only travel destinations that are real getaways in this sense, that allow you to get away from modern life by disconnecting you from the outside world, are developing countries without infrastructure, and the high seas. This is the best and worst part of cruise ships, which charge truly extortionate rates for slow, limited internet access.

The best bet for those who truly don’t want to be reached is still probably the unspoilt wilderness. Any sufficiently rural area will have poor cell reception, but areas which are undeveloped now are still vulnerable to future development. After all, much of the rural farmland of the Midwest is flat and open. It only takes one cell tower to get decent, if not necessarily fast, service over most of the area.

Contrast this to the geography of the Appalachian or Rocky Mountains, which block even nearby towers from reaching too far, and in many cases are protected by regulations. Better yet, the geography of Alaska combines several of these approaches, being sufficiently distant from the American heartland that many phone companies consider it foreign territory, as well as being physically huge, challenging to develop, and covered in mountains and fjords that block signals.

I enjoy cruises, and my grandparents enjoy inviting us youngsters up into the mountains of the northeast, and so I spend what is, for someone of my generation, probably a disproportionate amount of time disconnected from digital life. For most of my life, this was an annoyance, but not a problem, mostly because my parents handled anything important enough to have serious consequences, but partially because, if not before social media, then at least before smartphones, being unreachable was a perfectly acceptable and even expected response to attempts at contact.

Much as I still loathe the idea of a phone call, and will in all cases prefer to text someone, the phone call, even unanswered, did provide a level of closure that an unanswered text message simply doesn’t. Even if you got the answering machine, it was clear that you had done your part, and you could rest easy knowing that they would call you back at their leisure; or if it was urgent, you kept calling until you got them, or it became apparent that they were truly unreachable. There was no ambiguity whether you had talked to them or not; whether your message had really reached them and they were acting on it, or you had only spoken to a machine.

Okay, sure, there was some ambiguity. Humans have a way of creating ambiguity and drama through whatever form we use. But these were edge cases, rather than seemingly being a design feature of text messages. But I think this paradigm shift is more than just the technology. Even among asynchronous means, we have seen a shift in expectations.

Take the humble letter, the format that we analogize our modern instant messages (and more directly, e-mail) to most frequently and easily. Back in the day when writing letters was a default means of communication, writing a letter was an action undertaken on the part of the sender, and a thing that happened to the receiver. Responding to a letter by mail was polite where appropriate, but not compulsory. This much the format shares with our modern messages.

But unlike our modern systems, with a letter it was understood that when it arrived, it would be received, opened, read, and replied to all in due course, in the fullness of time, when it was practical for the recipient, and not a moment sooner. To expect a recipient to find a letter, tear it open then and there, and drop everything to write out a full reply at that moment, before rushing it off to the post office was outright silly. If a recipient had company, it would be likely that they would not even open the letter until after their business was concluded, unlike today, where text messages are read and replied to even in the middle of conversation.

Furthermore, it was accepted that a reply, even to a letter of some priority, might take several days to compose, redraft, and send, and it was considered normal to wait until one had a moment to sit down and write out a proper letter, for which one was always sure to have something meaningful to say. Part of this is an artifact of classic retrospect, thinking that in the olden days people knew the art of conversation better, and much of it that isn’t is a consequence of economics. Letters cost postage, while today text messaging is often included in phone plans, and in any case social media offers suitable replacements for free.

Except that, for a while at least, the convention held in online spaces too. Back in the early days of email, back when it was E-mail (note the capitalization and hyphenation), and considered a digital facsimile of postage rather than a slightly more formal text message, the accepted convention was that you would sit down to your email, read it thoroughly, and compose your response carefully and in due course, just as you would on hard-copy stationery. Indeed, in our online etiquette classes*, we were told as much. Our instructors made clear that it was better to take time in responding to queries with a proper reply than get back with a mere one or two sentences.

*Yes, my primary school had online etiquette classes, officially described as “netiquette courses”, but no one used that term except ironically. The courses were instituted after a scandal in parliament, first about students’ education being outmoded in the 21st century, and second about innocent children being unprepared for the dangers of the web, where, as we all know, ruffians and thugs lurk behind every URL. The curriculum was outdated the moment it was made, and it was discontinued only a few years after we finished the program, but aside from that, and a level of internet paranoia that made Club Penguin look laissez-faire, it was helpful and accurately described how things worked.

In retrospect, I think this training helps explain a lot of the anxieties I face with modern social media, and the troubles I have with text messages and email. I am acclaimed by others as an excellent writer and speaker, but brevity is not my strong suit. I can cut a swathe through paragraphs and pages, but I stumble over sentences. When I sit down to write an email, and I do, without fail, actually sit down to do so, I approach the matter with as much gravity as though I were writing with quill and parchment, with all the careful and time-consuming redrafting, and categorical verbosity that the format entails.

But email and especially text messages are not the modern reincarnation of the bygone letter, nor even the postcard, with its shorter format and reduced formality. Aside from a short length that is matched in history perhaps only by the telegram, the modern text message has nearly totally forgone not only the trappings of all previous formats, but seemingly the trappings of form altogether.

Text messages have seemed to become accepted not as a form of communication so much as an avenue of ordinary conversation. Except this is a modern romanticization of text messages. Because while text messages might well be the closest textual approximation of a face to face conversation that doesn’t involve people actually speaking simultaneously, it is still not a synchronous conversation.

More important than the associated pleasantries of the genre, text messages work on an entirely different timescale than letters. Where once, with a letter, it might be entirely reasonable for a reply to take a fortnight, nowadays a delay of more than a single day in responding to a text message between friends is a cause for concern and anxiety.

And if it were really a conversation, if two people were conversing in person, or even over the phone, and one person without apparent reason failed to respond to the other’s prompts for a prolonged period, this would indeed be cause for alarm. But even ignoring the obvious worry that I would feel if my friend walking alongside me in the street suddenly stopped answering me, in an ordinary conversation, the tempo is an important, if underrated, form of communication.

To take an extreme example, suppose one person asks another to marry them. What does it say if the other person pauses? If they wait before answering? How is the first person supposed to feel, as opposed to an immediate and enthusiastic response? We play this game all the time in spoken conversation, drawing out words or spacing out sentences, punctuating paragraphs to illustrate our point in ways that are not easily translated to text, at least, not without the advantage of being able to space out one’s entire narrative in a longform monologue.

We treat text messages less like correspondence, and more like conversation, but have failed to account for the effects of asynchronicity on tempo. It is too easy to infer something that was not meant by gaps in messages; to interpret a failure to respond as a deliberate act, to mistake slow typing for an intentional dramatic pause, and so forth.

I am in the woods this week, which means I am effectively cut off from communication with the outside world. For older forms of communication, this is not very concerning. My mail will still be there when I return, and any calls to the home phone will be logged and recorded to be returned at my leisure. Those who sent letters, or reached an answering machine know, or else can guess, that I am away from home, and can rest easy knowing that their missives will be visible when I return.

My text messages and email inbox, on the other hand, concern me, because of the very real possibility that someone will contact me thinking I am reading messages immediately (since my habit of keeping my phone within arm’s reach at all times is well known), and interpreting my failure to respond as a deliberate snub, when in reality I am out of cell service. Smartphones and text messages have become so ubiquitous and accepted that we seem to have silently arrived at the convention that shooting off a text message to someone is as good as calling them, either on the phone or even in person. Indeed, we say it is better, because text messages give the recipient the option of postponing a reply, even though we all quietly judge those who take time to respond to messages, and will go ahead and read all the social signals of a sudden conversational pause into the interim, while decrying those who use text messages to write monologues.

I’ll say it again, because it bears repeating after all the complaints I’ve given: I like text messages, and I even prefer them as a communication format. I even like, or at least tolerate, social media messaging platforms, despite having lost my appreciation for social media as a whole. But I am concerned that we, as a society, and as the first generation to really build the digital world into the foundations of our lives, are setting ourselves up for failure in our collective treatment of our means of communication.

When we fail to appreciate the limits of our technological means, and as a result, fail to create social conventions that are realistic and constructive, we create needless ambiguity and distress. When we assign social signals to pauses in communication that as often as not have more to do with the manner of communication than the participants or their intentions, we do a disservice to ourselves and others. We may not mention it aloud, we may not even consciously consider it, but it lingers in our attitudes and impressions. And I would wager that soon enough we will see a general rise in anxiety and ill will towards others.

Boring

I had an earth-shattering revelation the other day: I am a fundamentally boring person.

I’ve known, or at least suspected, this much deep in my heart for some time. The evidence has been mounting for a while. For one, despite enjoying varied cuisine, my stomach cannot tolerate even moderately spicy food, and I generally avoid it. I do not enjoy roller coasters, nor horror movies, as I do not find joy in tricking my brain into believing that I am in danger from something that poses no threat. I prefer books at home to crowded parties. I take walks to get exercise, but do not work out, and would never run a marathon. I immensely enjoy vanilla ice cream.
For a while I justified these eccentricities in various ways. I would cite instructions from my gastroenterologist to avoid foods that aggravate my symptoms, and claim that this dictate must automatically override my preferences. I would say that roller coasters exacerbated my vertigo and dizziness, which is true inasmuch as any movement exacerbates them, and that the signs say people with disabilities ought not to ride. I would argue that my preference for simple and stereotypically bookish behaviors reflected an intellectual or even spiritual superiority which is only perceptible to others of a similar stature.
The far simpler explanation is that I am simply boring. My life may be interesting; I may go interesting places, meet cool people, have grand adventures and experiences, and through this amalgam I may, in conjunction with a decent intelligence and competence in storytelling, be able to weave yarns and tall tales that portray myself as a rugged and fascinating protagonist, but stripped of extenuating circumstances, it seems like I am mostly just a boring guy.
My moment of revelation happened the other night. It was at a party, organized by the conference I was attending. Obviously, professionally arranged conferences aren’t prone to generating the most spectacular ragers, but these are about as close as I get to the stereotypical saturnalia. Going in, I was handed a card valid for one free drink. I turned it over in my hand, considering the possibilities. Given the ingredients behind the bar, there were hundreds of possible permutations to try, even forgoing alcohol because of its interactions with my seizure medication. Plenty of different options to explore.
I considered the value of the card, both compared to the dollar cost of drinks, and in terms of drinks as a social currency. Buying someone else a drink is, after all, as relevant to the social dynamic as having one for oneself. I looked around the room, noting the personalities. If I were a smooth operator, I could buy a drink for one of the ladies as an icebreaker. If I sought adventure and entertainment, I could use my free drink to entice one of the more interesting personalities, perhaps the professional actor, or the climber who summited Everest for the Discovery Channel, to tell me a story. Or, if I wanted to work on networking, I could approach one of the movers and shakers in the room, buying myself a good first impression.
Instead, I took the boring option. I slid the card into my pocket, walked over to the forlorn cart by the edge of the room, and poured myself a glass of Crystal Light red punch, which, just to hammer the point home, I watered down by about half. As I walked over towards an empty standing table, the revelation hit me, and the evidence that had been building up calcified into a pattern. I was boring.
Now, as character traits go, being boring is far from the worst. In many respects, it is an unsung positive. As my pediatrician once explained to me during week three of hospital quarantine, with my eighth group of residents coming through my room to see if anyone could provide new guesses about my condition and prognosis, being abnormal or fascinating is, from a medical perspective, bad news. Boring people don’t get arrested, hunted by security services, and disappeared in the night. Boring people get through alright.
Indeed, Catherine the Great, empress of Russia, once described her second great lover (and the first to avoid being disgraced from court and/or assassinated after the fact), Alexander Vasilchikov, as “the most boring man in all of Russia”. Contemporary accounts generally concur with this assessment, describing him as polite, intellectual, and effete. Though he was eventually replaced at court by the war hero Potemkin, in receiving what amounted to bribes to leave court quietly, he was awarded a lavish Moscow estate, a pile of rubles, and a generous pension. He married some years later, and by historical accounts, lived happily ever after.
Now, sure, Vasilchikov isn’t as well remembered as Potemkin or Orlov, but he wound up alright. Orlov was stripped of his rank unceremoniously by an enraged Empress, and Potemkin’s historical legacy is forever marred by rumors, created by the Ottomans and his opponents at court, about his deep insecurities, culminating in the tale that he erected fake “Potemkin villages” to impress the Empress on her inspection tours of conquered territory.
So being boring isn’t inherently bad. And should I choose to own it, it means I can opt to focus on the things that I do enjoy, rather than compelling myself to attend parties where I feel totally alienated.

Hidden Problems

One of the problems that I and people with diagnoses similar to my own face is the problem of hidden disability. While a Sherlock-esque character could easily deduce much of my condition from my appearance, it isn’t a thing that people notice without looking. On the whole, this is most likely strictly preferable to the alternative of having a more obvious handicap, such as being in a wheelchair or on oxygen (or rather, since my condition has at times had me in both situations, being in them permanently), but it raises a whole host of other problems.

Here’s the situation: on my best days, my capabilities are on par with an average person in almost every respect. I say “almost” solely because, even if everything goes perfectly health-wise, I’m still going to be monitoring and thinking about all of it in the back of my head. And realistically, things never go perfectly without continued micromanagement. But despite all this, I can function at a normal level. On such days, I look and act like a normal person.
On the other hand, on a bad day… well, on a really bad day, I’m in a coma, or unconscious, and don’t wake up. On a slightly less bad day, I might be confined to my bed all day, forced to watch helplessly from my window as the world goes by. On such days, my mind is generally in such a state of confusion that I do not use more than a baker’s dozen words. On a regularly bad day, I might walk as far as the downstairs couch and speak as many as fifty words.
Such days make up between 30% (extremely generous) and 95% (exceptionally bad) of all days in a year. These wide discrepancies are due to variations in the selection of cold, flu, and other illnesses in my vicinity, and to how one counts. Most days I spend at least part of the day unable to go about my business due to health, even if I’m fine the rest of the day. That’s the other part of the equation: I go from being dandy to being at death’s door at the drop of a hat, and without warning.
Naturally, if I am unable to leave the house, I tend to avoid interfacing with people. Even when I am physically capable, and by most standards coherent, I see it as plainly disrespectful to deal with other people when I am cognitively impaired, and would rather postpone a meeting than beg pardon for not being able to understand something they said that I would normally be able to grasp instantaneously. Others can argue that this is a fault with me, but I have dealt with enough people who have grown upset or concerned to see me trying to push through that I do not wish to repeat the ordeal.
I can also be painfully shy at times. Like most people, I have a romantic ideal of myself which I prefer to show off to the world in place of the more flawed reality. I like to be the renaissance guy: witty, intelligent, cosmopolitan, intellectual, organized, well spoken and well written, with a palpable aura that engulfs those around him; a frood who knows where his towel is, if there ever was one. And when part of my life doesn’t jibe with that image, it is edited out of the narrative. The weeks or even months at home sick are glossed over, and the story skips from something that happened in November of 2016 to the events of February 2017 without even a pause.
This isn’t a sinister plot to make me sound like a more interesting person than I am or anything. There’s really just not much to be said about being sick for such a long period. Having a migraine for a week and a half is pretty much the same experience as having it for a few hours, just longer and slightly more intense in places. Being conscious of the fact that others don’t know what’s happening, to my detriment, doesn’t make me any more likely to post on Facebook affirming that I’m still sick for the eighth day in a row, and rumors either of my death or retirement to the Bahamas are much exaggerated.
Rubbing my misery in others’ faces by describing my symptoms in detail goes against this instinct. It goes against my efforts to create a Potemkin village of my life, where I always get everything I want and there are no problems, because good things happen to good people, and I’m a good person. There is also fear. Fear that others won’t believe me when I describe my symptoms, because how could I deal with such trials discreetly, out of sight? Or fear that they’ll believe that the symptoms exist, but that they’re my own fault, because I don’t do a good enough job treating them; that they’ll think they know better than I do, and refuse to support me in handling my own medical issues.
There’s the fear that others will want to exclude me, either because they believe me to be contagious, or because they think keeping me out of trouble is for my own good. I’ve even had people, upon being told of my symptoms, suggest to my face that my illnesses are divine or karmic punishment, and that I should feel ashamed for whatever it was that makes me deserve my fate.
And so I return to society exactly as I left, without any sign of the ordeals that I have endured in the interim, with only vague explanations of being sick. And so those around me go on without understanding my problems, or what they can do to help. Possibly they wonder why I keep needing so much slack, or what I do when I’m not around, since surely I can’t be sick all the time. After all, it’s not like they know anyone who’s regularly sick for so long. And they’ve never seen me sick.
One of my earlier doctors advised me to fill out all paperwork for accommodations assuming a worst case scenario, as a means of making people understand the possible consequences of having to go on unsupported. This is a helpful notion, but for the fact that the worst case scenario is always the same: I drop dead.
This is accurate, and a very real danger, but somehow it never gets the point across. Guest services at Disney, for example, doesn’t react to being told that I could drop dead, but will rush to get me a disability access pass upon being told that I could have a mere seizure. I suppose this is a quirk of human nature. Death seems faraway and intangible compared to the concrete, visceral experience of a person having a seizure. Perhaps moreover, since death is to some degree inevitable, if not necessarily imminent, it seems like the lesser of the evils compared to an entirely avoidable pain. Dropping dead seems less real, more like hyperbole, than a seizure.
Hence the problem of communicating the gravity of the situation while still making it seem real. It is a delicate balance; a difficult story to weave and act to maintain. I credit any ability I have in the areas of persuasive writing and public speaking to the experience I have gained involuntarily from these exercises: from having to always find and maintain this delicate line which allows me to get what I need, communicating and proving to others that I have needs, without giving up all my cards and being completely at their mercy.

A Witch’s Parable

Addendum: Oh good grief. This was supposed to go up at the beginning of the week, but something went awry. Alas! Well, it’s up now.


Suppose we live in colonial times, in a town on an archipelago. The islands are individually small and isolated, but their position relative to the prevailing winds and ocean currents means that different small islands can grow a wide variety of crops that are normally only obtainable by intercontinental trade. The presence of these crops, and good, predictable winds and currents, has made those islands that don’t grow food into world-renowned trade hubs, and attracted overseas investment.

With access to capital and a wide variety of goods, the archipelago has boomed. Artisans, taking advantage of access to exotic painting supplies, have taken to the islands, and scientists of all stripes have flocked to the archipelago, both to study the exotic flora and fauna, and to set up workshops and universities in this rising world capital. As a result of this local renaissance, denizens of the islands enjoy a quality of life hitherto undreamt of, and matched only in the palaces of Europe.

The archipelago is officially designated as a free port, open to ships from across the globe, but most of daily life on the islands is managed by the Honorable South India Trading Company, which collects taxes and manages infrastructure. Nobody likes the HSITC, whose governor is the jealous brother of the king, and who is constantly appropriating funds meant for infrastructure investment to spend on court intrigue.

Still, the HSITC is entrenched in the islands, and few are willing to risk jeopardizing what they’ve accomplished by attempting insurrection. The cramped, aging vessels employed by the HSITC as ferries between the islands pale in comparison to the new, foreign ships that dock at the harbors, and their taxes seem to grow larger each year, but as long as the ferry system continues to function, there is little more than idle complaint.

In this town, a local woman, who, let’s say, is your neighbor, is accused of witchcraft. After the debacle at Salem, the local magistrates are unwilling to prosecute her without absolute proof, which obviously fails to materialize. Nevertheless, vicious rumors about men being transmogrified into newts, and satanic rituals conducted at night, spread. Local schoolchildren and off-duty laborers congregate around your house, hoping to get a glimpse of the hideous wretch that legend tells dwells next door.
For your part, you carry on with your daily business as best you can, until one day, while waiting at the docks to board a ferry to the apothecary, a spat erupts between the woman in question and the dock guard, who insists that he shan’t allow her to board, lest her witchery cause them to become shipwrecked. The woman is denied boarding, and since the HSITC runs all the ferries, this now means that she’s effectively cut off from the rest of the world, not by any conviction, but because there were no adequate safeguards against the whims of an unaccountable monopoly.
As you’ve probably guessed, this is a parable about the dangers posed by the removal of net neutrality regulations. The internet these days is more than content. We have banks, schools, even healthcare infrastructure that exist solely online. In my own case, my life support systems rely on internet connectivity, and leverage software and platforms that are distributed through open source code sharing. These projects are not possible without a free and open internet.
Others with more resources than I have already thoroughly debunked the claims made by ISPs against net neutrality. The overwhelming economic consensus is that the regulations on the table will only increase economic growth, and will have no impact on ISP investment. The Senate has already passed a bill to restore the preexisting regulations that were rescinded under dubious circumstances, and a House vote is expected soon.
I would ask that you contact your elected representatives, but this issue requires more than that. Who has access to the internet, and under what terms, may well be the defining question of this generation, and regardless of how the vote in the House goes, this issue and variants of it will continue to crop up. I therefore ask instead that you become an active participant in the discussion, wherever it takes us. Get informed, stay informed, and use your information to persuade others.
I truly believe that the internet, and its related technologies, have the potential to bring about a new renaissance. But this can only happen if all of us are aware and active in striving for the future we seek. This call to arms marks the beginning of a story that in all likelihood will continue for the duration of most of our lifetimes. We must consult with each other, and our elected representatives, and march, and rally, and vote, by all means, vote. Vote for an open internet, for equal access, for progress, and for the future.

Shiny

Alright, listen up Pandora, Diamonds International, Tiffany & Co., and other brands of fancy upscale jewelry that I can’t be bothered to recall at this time because I’m a guy. I’m about to talk about an idea that could help you dodge a bullet and get ahead of the next big thing.

Let’s face it, jewelry is seen as feminine. This is true to the point where guys feel out of place in a jewelry store; not just lost, but in many cases subtly unwelcome. This is a problem for you, because, as businesses, you want to be able to appeal to as wide an audience as possible. And perhaps more to the point, you don’t want to be marked as being too much a part of traditional gender roles in the minds of the younger generation.
To your credit, you’re clearly trying, with your displays of cuff links, tie clips, and other implements of haberdashery. There’s just one problem: I have only the vaguest idea what a cuff link or a tie clip is supposed to do for me. As far as I know for sure, cuff links are the little pieces that hold the two wrist parts of handcuffs together, and tie clips are part of a wardrobe organizational system that prevents ties from becoming noticeably creased. And I’ll wager a lot that I’m far from the only guy for whom this holds true.
So you need something more obvious in its application. Something where I can walk up to the display and immediately surmise, and articulate, precisely why I need to own that thing, instead of needing a sales representative to explain to me how, back in the olden days, fancy shirts didn’t come with buttons on their cuffs, and why I should care to replace mine with something more expensive and less practical.
Like, say, a wristwatch. It’s obvious why I would want a watch to tell me the time, and if I’m going to be wearing one anyways, a convincing argument can be made that I ought to treat myself to the finest and shiniest, which, I will be told, has been painstakingly custom engineered by the best in the alpine watchmaking tradition. I may not have any use for such a watch today, but at least it’s a defensible reason for me to indulge myself in perusing shiny and expensive objects.
Except wristwatches are dying. It’s not just the fancy and expensive models that use gratuitous amounts of valuable metals and stones, which have been slowly losing ground since the calculator watches of the 1970s to smaller, lighter, digital models that can also tell me the date and weather, set alarms, and act as a stopwatch; even these are being edged out. In some cases by smart watches, which can do everything listed previously, and then also take over several functions of a phone. But in most cases, watches are simply disappearing and being replaced by… nothing.
That’s because the need for a watch has been steadily eroded as young people have decided that they can just use their phone to tell the time. The smartphone didn’t kill the wristwatch, but the cultural shift that makes glancing at one’s phone as casually acceptable as looking at one’s watch will. The more accepted having phones out becomes in polite company, the faster watches will disappear.
So, how do watches compete? It is still marginally easier to glance at a watch than to take a phone out from a pocket and look at it. But when a phone can also do so much more, to the point that pulling out a phone is a routine action anyways (to check texts, news alerts, and the like), the watch is still going to lose. The watch has to be able to take over some tasks from the phone in the same way that the phone can from the watch.
Modern smart watches already meet this threshold. On my own Pebble smartwatch, I can receive text messages and other notifications, and decide whether I need to respond, without ever having to touch my phone. I can screen incoming calls, and route them through my headphones. I can play and adjust my music, all without ever having to unlock my phone. It does exactly what I need it to, which is why it is an essential part of my kit.
There’s just one problem. My smartwatch is made of clunky looking, albeit durable and relatively cheap, plastic components. It stands out like a sore thumb in a formal outfit. Moreover, smart watches aren’t accepted in the same way that smartphones are.
So, jewelry companies: you need a trend you can cash in on with young men? Try smart watches. Cast them in silver and gold, with sparkling diamonds on the menu buttons, and custom engravings. Or heck, cut out watches entirely and go straight to phone cases. The important part is embracing this paradigm shift rather than stubbornly insisting that I still need a miniature grandfather clock on my wrist because my wearable computer isn’t fancy enough.

Life Changing?

What does it take to change a life? To have such an impact on another person that it changes their default behaviors and life trajectory, even if subtly? Certainly it can be argued that it takes very little, since our behaviors are always being influenced by our surroundings. But what about a long-term difference? What does it take to really change someone?

The year 2007 was perhaps the most important and most impactful of my life. I say that 2007 was the year that my childhood ended. This may be a slight exaggeration, but not by much. It was a year of drama and trauma, of new highs and extreme lows. In my personal history, the year 2007 stands out like 1914 in European history. It is a date I measure things from, even more so than my birthday.
That year contained both the best and worst days of my life to date. The worst day, July 20th, 2007, and the bad days that followed it, I have already written about. But what about the best day? What happened on that day?
January 5th, 2007 had all the hallmarks of a good day. I was on school holiday (summer holiday, in fact, since the Australian school calendar follows Australian seasons, so that our main break comes around Christmas) and I was traveling. Being ever-curious and ever-precocious, I loved traveling, especially by plane.
All the mechanisms of air travel fascinated me: the terminals, with their lights and signs and displays, acting as literal gateways to every far-flung exotic locale on the planet. Customs and security, with its fancy DHS eagles and its sense of officiality and finality, advertising that once you cross this line, you have crossed some important threshold from which you could not simply return, as if somewhere, someone reading your story would be holding their breath while turning the page. And of course, the planes themselves, which not only seemed to defy physics in their flight, not only linked the world together, but did so in such comfort and luxury.
That day, we started early from the family farm in Indiana to the Indianapolis Airport, via a road that had enough dips and bumps that we affectionately called it “the rollercoaster road”. We arrived at Indianapolis Airport for our short flight to transfer at my all-time favorite airport, Chicago O’Hare, which I adore for its dinosaur skeleton, its Vienna Beef hot dogs, and its inter-concourse tunnel, where I would stare up in wonder from the moving walkway at the ceiling light display. I was told that the abstract neon colors were meant to represent the aurora, but for my part, having seen both, I have always thought the lights at O’Hare to be more impressive than the aurora.
We arrived in Orlando at about 8:00pm, which, to my then childish mind, was a kind of magical hour. Things only happened after 8:00 on special occasions- watching New Year’s fireworks or space shuttle launches on television, calls from relatives in different time zones. After 8:00pm was the time of big and exceptional things, and the fact that we were only now boarding the bus from the airport to Disney World only seemed to vindicate the feeling I had woken up with that morning that it was going to be a great day.
Much of the resort was already closed by the time we arrived. But even then, there was much excitement to be had. We found our rooms, and as we wound our way around the Port Orleans Resort, I remember drinking in every detail of the scenery and design, and thinking to myself about how much attention and intent must have gone into adding all the little details and embellishments. At this time I used to enjoy drawing, but whenever I did, I would become obsessed with details and embellishments. I would draw an airplane, and become fixated on the precise curvature of the engines, the alignment of the ailerons, the number of windows depending on whether it was a Boeing 747 like the one we took to San Francisco or an Embraer like the one we took…
You get the idea. Details were important to me. To see that someone had paid enough attention to the details to add all these little decorative Easter eggs, like hidden Mickeys, or a plastic frog on a lily pad in a small pond beside the concrete path, to see these little acknowledgments of my attentiveness, told me that other people had been paying at least as much attention as I had, which put me at ease, and made me feel welcome and safe, at a time when I had spent most of my life as a foreigner, and a great deal of my time at school being bullied.
Thus assured that I was in a place that was safe and well designed by people who thought like I did, I let loose, skipping happily along as I never did in school for fear of being mocked, and sang songs I had memorized from the inflight children’s “radio station” (which was actually just a recording loop) about fishing worms, the state of Michigan, and carps in tubs.
The next day, I was reunited with my Best Friend in the Whole Entire World, whom I knew from Australia, but who had recently moved to Denver. It was the first time we had seen each other since he had moved away. I had missed his going-away party because, in what now seems like a foreshadowing of what was to come, I had been in the hospital with acute pansinusitis, and after having my immune system wiped out by the drugs, was stuck in protective quarantine.
Together, we tore up the parks, going on rides and eating Mickey out of house and home. This last point proved to be dire foreshadowing, as looking back I can say it was the first time that the earliest symptoms of the medical calamity that would consume my life just six months later were indisputably noticeable. In fact, the symptoms of hunger and thirst were so bad that they caused problems trying to eat off the Disney meal plan. It was the only bittersweet thing about the trip: that it was the last great experience of my life unmarred by the specter of disability and looming death. But that’s a story for another time.
So, back to the question at hand: what does it take to change a life? Was my trip life-changing? Did it change who I am as a person, or alter my future behavior or trajectory in a meaningful way? Hard to say. Despite picking a solidly philosophical topic, I’m not willing to sit down for the requisite hours of navel-gazing to try and formulate the probable alternate histories if that trip hadn’t gone just so.
It’s tempting, then, to brush it off and say that even though I definitely see that event as one of the high points of my existence, it never changed who I am at my core. It certainly didn’t change the course of events that were about to happen, which were in retrospect so obviously already in motion. It would be easy to extrapolate that the whole event had no effect on me, but for the fact that I know of a counterexample.
The day itself, more than a decade in the past, has gotten old enough in my mind that parts of it have started to fade around the edges. I don’t, for example, remember which side of the two connecting rooms my brother and I slept in, and which side my parents slept in. The parts I do remember are as much vaguely connected vignettes as they are a consistent narrative, and correlate more to the things that struck me as important at the time than what might be important to the story now. Hence why I can’t tell you what rides we went on, but I can describe the exact configuration of the twisty straw that I had with my milkshake.
One of the things that I remember most clearly about that day, one of the things that to this day will occasionally interrupt my stream of consciousness, was the in-flight radio. In particular, I recall there being several songs about environmental themes. And I recall sitting there, consciously rethinking my point of view. My train of thought went something like this: the reason I’m hearing this song, which, though decent, isn’t artistically great, is because it’s about a cause, which is clearly important to whomever is picking songs to play.
The kind of causes that get songs written about them, and, despite artistic shortcomings, played constantly at children, are ones that are important to society at large: learning one’s ABCs, being prepared for emergencies, and national crises like a world war (Over There) or pandemic (there was a song about washing one’s hands that was circulated during the Mad Cow scare). That I am hearing this song indicates that it is viewed not just as something of idle interest, but as a crisis of immediate concern.
It was at that moment that I remember mentally upgrading the issue of environmentalism from something that I was merely passively sympathetic towards, to something which I actively supported where possible. Hearing that song on that trip changed my life. Or if it is melodramatic to say that hearing a song single-handedly changed my life trajectory, then at least it is accurate to say that hearing those songs at that time provoked me into a change in attitude and behavior.
Would I still have had such a moment of revelation on a different day? Probably, but I doubt I would have remembered it. But as to the question of what it takes to change a life, we are forced to consider how much effort it took for me to hear those songs. There is no good answer here. On the one hand, it took a massive amount of societal machinery to record, license, and select the song, and then see that it was played on the flight that I happened to be on. To do this purposely would require a massive conspiracy.
On the other hand, it requires no small number of miracles from a huge number of contributors to get me the iPad I’m writing on, and the web server I’m posting to, and massive amounts of effort to maintain the global system of communications that allow you to view my words, and yet I’d hardly argue that my writing here is the pinnacle of all of society thus far. Perhaps so, in a strictly epistemological, navel-gazing sense that is largely meaningless for the purpose of guiding future individual actions. But realistically, my authorial exercise here is only slightly more effort than recording my unpolished stream of consciousness.
The truth is, even when I can identify what it has taken in the past to change my own life, I can’t extrapolate that knowledge into a meaningful rule. It’s clearly not that hard, given that it’s happened so many times before, and on such flimsy pretenses. But it also clearly can’t be that easy, or else everyone would already be their best self.
People have in the past attempted to compliment me by insinuating that my writing, or my speeches at events, or my support, have changed their lives. Despite their intentions at flattery, I have generally been disinclined to believe them, on the grounds that, though I may take pride that my writing is decent, it is certainly not of a caliber great enough to be called life-changing. But upon reflection, perhaps it doesn’t need to be. Perhaps the bar isn’t nearly that high. Perhaps, I venture to hope, one does not need to be perfect to change another’s life for the better.

Personal Surveillance – Part 2

This is the second installment in an ongoing multi-part series entitled Personal Surveillance. To read the other parts once they become available, click here.


Our modern surveillance system is not the totalitarian paradigm foreseen by Orwell, but a decentralized, and in the strictest sense, voluntary, though practically compulsory network. The goal and means are different, but the ends, a society with total insight into the very thoughts of its inhabitants, are the same.

Which brings me to last week. Last week, I was approached by a parent concerned about the conduct of her daughter. Specifically, her daughter has one of the same diagnoses I do, and had been struggling awfully to keep to her regimen, and suffering as a result. When I was contacted the daughter had just been admitted to the hospital to treat the acute symptoms and bring her back from the brink. This state of affairs is naturally unsustainable, in both medical and epistemological terms. I was asked if there was any advice I could provide, from my experience of dealing with my own medical situation as a teenager, and in working closely with other teenagers and young adults.

Of course, the proper response depends inextricably upon the root cause of the problem. Noncompliance like this may be a form of self-harm, intentional or not, which has been noted to be endemic among adolescents who have to execute their own medical regimens, or it may be a symptom of some other mental illness. Treating it with the kind of disciplinary tactics suited to more ordinary teenage rebellion and antipathy would be not only ineffective and counterproductive, but dangerous. There are myriad potential causes, many of which are mutually exclusive, all of which require different tactics, and none of which can be ruled out without more information.

I gave several recommendations, including the one I have been turning over in my head ever since. I recommended that this mother look into her daughter’s digital activities: her social media, her messages, and her browser history. I gave her a list of things to look out for: evidence of bullying online or at school, signs that the daughter had been browsing sites linked to mental illness, in particular eating disorders and depression, and messages to friends complaining about her illness or medical regimen, or even confessing that she was willfully going against it. The idea was to gather more information to contextualize her actions, in the hope that this would help her parents help her.

After reflecting for some time, I don’t feel bad about telling the mother to look through private messages. The parents are presumably paying for the phone, and it’s generally accepted that parents have some leeway to meddle in children’s private lives, especially when it involves medical issues. What bothers me isn’t any one line being crossed. What bothers me is this notion of looking into someone’s entire life like this.

That is, after all, the point here. The mother is trying to pry into her daughter’s whole life at once, into her mind, to figure out what makes her tick, why she does what she does, and what she is likely to do in the future. Based on the information I was provided, it seemed justified; even generous. As described, the daughter’s behavior towards her health is at best negligent, and at worst suggests she is unstable and a danger to herself. The tactics described, sinister though they are, are still preferable to bringing down the boot-heel of discipline or committing her to psychiatric care if neither may be warranted.

This admittedly presupposes that intervention is necessary at all, in effect presuming guilt. In this instance, it was necessary, because the alternative, allowing the daughter to continue conduct which was, intentionally or not, causing medical harm and had already landed her in the hospital, was untenable. At least, that was true based on the information I had. And even if that information was not enough to settle on a course of action, it was certainly enough to warrant grave concern.

The goal, in this case, was as benevolent as possible: to help the daughter overcome whatever it was that landed her in this crisis in the first place. Sometimes such interventions truly are a matter of doing something “for their own good”. But they have to be executed with the utmost kindness and open-mindedness. Violating someone’s privacy may or may not be acceptable under certain circumstances, but it is certainly never acceptable for petty vendettas.

It would not, for example, be acceptable for the mother to punish the daughter for an unkind comment made to a friend regarding the mother. Such a comment might well suggest that some discipline is in order to solve the original problem, since, absent evidence to the contrary, it suggests a pattern of rebellion that could reasonably be extrapolated to include willful disobedience of one’s medical regimen. But such discipline needs to be meted out for the original violation, not for one that was only discovered because of this surveillance.

Mind you, I’m not just talking out of my hat here. This is not just a philosophical notion, but a legal one as well. The Fifth Amendment, and more broadly the protections against self-incrimination, are centered around protecting the core of personhood, a person’s thoughts and soul, from what is known as inquisitorial prosecution. Better scholars than I have explained why this cornerstone is essential to our understanding of justice and morality, but, to quickly summarize: coercing a person by using their private thoughts against them deprives them of the ability to make their own moral choices, and destroys the entire notion of rights, responsibilities, and justice.

Lawyers will be quick to point out that the Fifth Amendment as written doesn’t apply here per se (and as a matter of law, they’d be right). But we know that our own intention is to look into the daughter’s life as a whole, her thoughts and intentions, which is a certain kind of self-incrimination, even if you would be hard pressed to write a law around it. We are doing this not to find evidence of new wrongs to right, but to gain the context necessary for the effective remedy of problems that are already apparent, that were already proven. By metaphor: we are not looking to prosecute the drug user for additional crimes, but to complete rehabilitation treatment following a previous conviction.

In government, the state can circumvent the obstacle the Fifth Amendment poses to fact-finding by granting immunity to the testifying witness, so that nothing they say can be used against them, as though they had never said it, neutralizing the self-incrimination. In our circumstances, it is imperative that the information gathered only be used as context for the behaviors we already know about. I tried to convey this point in my recommendations to the mother in a way that also avoided implying that I expected she would launch an inquisition at the first opportunity.

Of course, this line of thinking is extremely idealistic. Can a person digging through someone’s entire digital life really just ignore a social taboo or minor breach they happen upon, and carry on unbiased and impartial? Can that person, who has been exposed to everything the subject has ever done, but has lived none of it, even make an objective judgment? The law sweeps this question under the rug, because answering it would make law even more of an epistemological nightmare than it already is, and in practical terms it probably doesn’t matter unless we are prepared to overhaul our entire constitutional system. But it is a pertinent question for understanding these tactics.

The question of whether such all-inclusive surveillance of our digital lives can be thought to constitute self-incrimination cannot be answered in a blog post, and is unlikely to be settled in the foreseeable future. The generation now growing up, which will have known nothing but the internet, will, I am sure, be an interesting test case. It is certainly not difficult to imagine that, with all the focus on privacy and the manipulation of online data, we will see a shift in opinions, so that parts of one’s online presence come to be regarded as part of one’s mind. Or perhaps, once law enforcement catches up to the 21st century, we will see a subtle uptick in the efficacy of catching minor crimes and breaches of taboo, possibly before they even happen.

Personal Surveillance – Part 1

This is the first installment in a multi-part series entitled Personal Surveillance. To read the other parts once they become available, click here.


George Orwell predicted, among many other things, a massive state surveillance apparatus. He wasn’t wrong; we certainly have that. But I’d submit that it is not the greatest threat to the average person’s privacy. There’s the old saying that the only thing protecting citizens from government overreach is government inefficiency, and in this case there’s something to that. Surveillance programs are terrifyingly massive in their reach, but simply aren’t staffed well enough to parse everything. This may change as algorithms become more advanced in sifting through data, but at the moment, we aren’t efficient enough to have a thought police.

The real danger to privacy isn’t what a bureaucrat is able to pry from an unwilling suspect, but what an onlooker is able to discern from an average person without any special investigative tools or legal duress. The average person is generally more at risk from stalkers than from surveillance. Social media is especially dangerous in this regard, and the latest scandals surrounding Cambridge Analytica et al. are a good example of how social media can be used for nefarious purposes.

Yet despite lofty and varied criticism, I am willing to bet on the overall conclusion of this latest furor: the eventual consensus will be that, while social media may be at fault, its developers are not guilty of intentional malice, but rather of pursuing misaligned incentives, combined with an inability, whether through laziness or through failing to grasp the complete picture soon enough, to keep up with the accelerating pace at which our lives have become digitized.

Because that is the root problem. Facebook and its ilk started as essentially decentralized contact lists and curated galleries, and Twitter and its facsimiles started as essentially open-ended messaging services, but they have evolved into so much more. Life happens on the Internet nowadays.

In harkening back to the halcyon days before the scandal du jour, older people have called attention to the brief period between the widespread adoption of television and its later diversification: the days when there were maybe a baker’s dozen channels. In such times, we are told, people were held together by what was on TV. The political issues of the day were chosen by journalists, and public discourse was shaped almost solely by the way they were presented on those few channels. Popular culture, we are told, was shaped in much the same way, so that there was always a baseline of commonality.

Whether or not this happened in practice, I cannot say. But I think the claim that those were the halcyon days before all this dividing and subdividing is backwards. On the contrary, I would submit that those halcyon days were the beginning of the current pattern, as people began to adapt to the notion that life is a collective enterprise understood through an expansive network. Perhaps that time was still a honeymoon phase of sorts. Or perhaps the nature of this emerging pattern of interconnectedness is one of constant acceleration, like a planet falling into a black hole, slowly, imperceptibly at first, but always getting faster.

But getting back to the original point, in addition to accelerating fragmentation, we are also seeing accelerated sharing of information, which is always, constantly being integrated, woven into a more complete mosaic narrative. Given this, it would be foolish to think that we could be a part of it without our own information being woven into the whole. Indeed, it would be foolish to think that we could live in a world so defined by interconnectedness and not be ourselves part of the collective.

Life, whether we like it or not, is now digital. Social media, in the broadest sense, is the lens through which current events are now projected onto the world, regardless of whether social media was built for, or to withstand, this purpose. Participation is compulsory (that is, under compulsion, even if not strictly mandatory) for anyone who wants to be part of modern public life. And to this point, jealous scrutiny of one’s internet presence is far more powerful than merely collecting biographical or contact information, such as looking one up in an old-fashioned directory.

Yet society has not adapted to this power. We have not adapted to treat social media interactions with the same dignity we accord, for example, to conversations between friends in public. We recognize that a person following us and listening in while we were out in public would be grossly violating our privacy, even if it might skirt by the letter of the law*. But trawling back through potentially decades of interactions online is, well… we haven’t really formulated a moral benchmark.

This process is complicated by the legitimate uses of social media as a sort of collective memory. As more and more mental labor is offloaded onto the Internet, the ability to call up some detail from several years ago becomes increasingly important. Take birthdays, for example. Hardly anyone nowadays bothers to commit birthdays to memory, and of the people I know, increasingly few keep private records, opting instead to rely on Facebook notifications to send greetings. And what about remembering other events, like who was at that great party last year, or the exact itinerary of last summer’s road trip?

Human memory fades, even more quickly now that we have machines to consult and no longer have to exercise our own powers of recollection. Trawling through a close friend’s feed to find the picture of the two of you from Turks and Caicos, so that you can get it framed as a present, is a perfectly legitimate, even beneficial, use of their otherwise private, even intimate, data, and it would hardly be possible if that data were not available and accessible. The modern social system, our friendships, our jobs, our leisure, relies on this accelerating flow of information. To invoke one’s privacy even on a personal level now seems to border on the antisocial.

Soda Cans

One of the first life changes I made after I began to spend a great deal of time in hospitals was giving myself permission to care about the small things. As a person who tends to get inside my own head, sometimes to a fault, the notion of, for example, finding joy in a beautiful sunset, has often seemed trite and beneath me, as though the only thoughts worthy of my contemplation are deep musings and speculations on the hows and whys of life, the universe, and everything.

This line of thinking is, of course, hogwash. After all, even if one aims to ask the big questions, doing so is not in any way mutually exclusive with finding meaning in the little things. Indeed, on the contrary, it is often by exploration of such matters more easily grasped that we are able to make headway towards a more complete picture. And besides that, getting to enjoy the little things is quite nice.

With this background in mind, I have been thinking lately about the can design for the new round of Diet Coke flavors. So far I have only tried the twisted mango flavor. On the whole, I like it. I do not think it will supplant Coke Zero Vanilla as my default beverage option (the reasoning behind this being the default is far too elaborate to go into here). The twisted mango flavor is more novel, and hence is more appropriate on occasion than as a default option. I can imagine myself sipping it on vacation, or even at a party, but not on a random occasion when I happen to need a caffeinated beverage to dull a mild headache.

I do not, however, like the can that it comes in.

For some reason, the Coca-Cola company thought it necessary to mess with success and change the shape of the can the new line of flavors comes in. The volume is the same, but the can is taller, with a smaller circumference, similar to the cans used by some beer and energy drink brands. I can only assume that this is the aesthetic Coca-Cola was aiming for; that their intention is to obfuscate and confuse, by creating a can better able to camouflage itself among more hardcore party drinks.

If this is the reason for the redesign, I can understand, but cannot approve. Part of the reason that I have such strong feelings about various Coke products (or indeed, have feelings at all) is precisely because I cannot drink. Legally, I am not old enough in the United States (not that this has ever stopped my friends, or would stop me while traveling abroad), and moreover even if I was old enough, my medical condition and medications make alcohol extremely ill-advised.

Coke is a stand-in, in this regard. I can be fussy about my Coke products in the way that others fuss over beers. And because I have a drink over which I am seen to be fussing, it becomes common knowledge that I enjoy this very particular product. As a result, when it comes to that kind of person that is only satisfied when there is a (hard) drink in every hand, they can rest easy seeing that I have my preferred beverage, even if mine happens to be non-alcoholic. It is a subtle maneuver that satisfies everyone without anyone having to lose face or endure a complex explanation of my medical history. Coke undercuts this maneuver by making their product look more like beer. It sends the subtle subconscious message that the two are interchangeable, which in my case is untrue.

But this is hardly my primary complaint. After all, if my main problem was social camouflage, I could always, as my medical team have suggested, use camouflage, and simply drink my beverage of choice out of some other container. It worked well enough for Zhukov, who, in his capacity as leader of the Red Army, naturally couldn’t be seen publicly drinking distilled decadent western capitalism, and so took to drinking a custom-ordered clear formulation of Coke in a bottle designed to mimic those of the Soviet state vodka monopoly. It shouldn’t be my problem in the first place, but I could deal with mere cosmetic complaints.

No, what frustrates me about the can is its functionality. Or rather, its lack thereof. I’ve turned the problem over in my head, and from an engineering standpoint, I can’t fathom how the new design is anything but a step backwards. I assume that a megacorporation like Coca-Cola went through a design process at least as rigorous as the one we employed in our Introduction to Engineering Design class. I would hope that they have spent at least as much time thinking about the flaws of the new design. In case they haven’t, here are my notes:

1) The can is too long for straws.
Some people prefer to drink out of a glass. For me, having to drink cold fluid in this way hurts my teeth. And if there is ice in the glass, I have to worry about accidentally swallowing the ice cubes, turning the whole experience into a struggle. Plus, then I have to deal with washing the empty glass afterwards. Drinking straight out of the can is better, but tipping a can back to take a sip makes one look like an uncivilized glutton who hasn’t been introduced to the technological marvel of the bendy straw. And conveniently, the tab on most cans can be rotated to keep a straw from bobbing up and down. Alas, the new can design is too long to comfortably accommodate a standard bendy straw.

2) The can doesn’t stand up as well
The fact that the can is taller, with a smaller base, means that it does not fit comfortably in most cup holders. Moreover, the smaller base area means that it is less stable standing upright. It does take up less space on the table, but that doesn’t matter much when it falls over because I sneezed.
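As a back-of-envelope sketch of the stability point (assuming dimensions I have not measured precisely: a standard can at roughly 6.6 cm across by 12.2 cm tall, and the new slim can at roughly 5.8 cm across by 15.6 cm tall): an upright can tips over once its center of mass passes beyond the rim of its base, which for a full can, whose center of mass sits at about half its height, happens at a tilt angle of roughly

$$\theta \approx \arctan\!\left(\frac{r}{h/2}\right), \qquad \theta_{\text{standard}} \approx \arctan\!\left(\frac{3.3}{6.1}\right) \approx 28^\circ, \qquad \theta_{\text{slim}} \approx \arctan\!\left(\frac{2.9}{7.8}\right) \approx 20^\circ.$$

So, under those assumed dimensions, the slim can starts to topple at a noticeably shallower tilt, which squares with my experience of it going over at the slightest provocation.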

3) The shape makes for poor insulation
Alright, this part involves some math and physics, so bear with me. The speed at which a chilled cylindrical object, such as a soda can, warms to room temperature is governed by its surface area: the greater the surface area, the more direct contact with the surroundings, and the more conduction of heat. A circle is the most efficient way to enclose area on a 2D plane (and by extension, a sphere is the most efficient enclosure in 3D, but we use cylinders and boxes for the sake of manufacturing and storage), so the further a shape is stretched from compact proportions, the more surface it takes to hold the same volume. The standard can is already taller than the squat proportions that would minimize its surface area, and the new can is taller and narrower still, so, for the same volume, its surface area must be greater.
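To put rough numbers to it (again a sketch using the same approximate, unmeasured dimensions as above), the surface area of a cylinder is $A = 2\pi r^2 + 2\pi r h$, which gives

$$A_{\text{standard}} \approx 2\pi(3.3)^2 + 2\pi(3.3)(12.2) \approx 321\ \text{cm}^2, \qquad A_{\text{slim}} \approx 2\pi(2.9)^2 + 2\pi(2.9)(15.6) \approx 337\ \text{cm}^2.$$

Call it roughly five percent more surface area for the same volume of drink, and so roughly five percent more area through which heat can leak in.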

Consequently, the greater surface area of the can means that it comes in contact with more of the surrounding air. This increased contact results in increased conduction of heat from the air into the can, and proportionally faster warming. So my nice, refreshing, cold soda becomes room temperature and flat in a hurry. Sure, this also means it gets colder faster, so perhaps it is a feature for that peculiar brand of soul who doesn’t keep soda refrigerated beforehand, but insists on chilling it only immediately before drinking it out of the can; I have no concern for such eccentrics.

I could go on, but I’m belaboring the point even now. The new can design is a step backwards. I just can’t help but feel like Coca-Cola tried to reinvent the wheel here, and decided to use Reuleaux rotors instead of circles. Now, onto the important question: does it matter? Well, it clearly matters to Coca-Cola, seeing as they saw fit to make the change. And, despite being objectively petty, it does matter to me, because it impacts my life, albeit in a relatively small way. Denying that I have strong feelings about this matter in favor of appearing to focus only on high-minded ideals helps no one. And, as I learned in my time in the hospital, when the big picture looks bleak and can’t be changed, the small things start to matter a lot more.