The Social Media Embargo

I have previously mentioned that I do not frequently indulge in social media. I thought it might be worthwhile to explore this in a bit more detail.

The Geopolitics of Social Media

Late middle and early high school are a perpetual arms race for popularity and social power. This is a well-known and widely accepted thesis, and my experience during adolescence, along with my study of the high schools of past ages and of other countries and cultures, has led me to treat it as a given. Social media hasn’t changed this. It has amplified it, however, in the same manner that improved intercontinental rocketry and the invention of the nuclear ballistic missile submarine intensified the threat of the Cold War.

To illustrate: In the late 1940s and into the 1950s, before ICBMs were accurate or widely deployed enough to pose a credible threat of annihilation, the minimum amount of warning of impending doom, and the maximum amount of damage that could be inflicted, were limited by the size and capability of each side’s bomber fleet. Accordingly, a war could only be waged, and hence could only escalate, as quickly as bombers could reach enemy territory. This both served as an inherent limit on the destructive capability of each side, and acted as a safeguard against accidental escalation by providing a time delay in which snap diplomacy could take place.

The invention of long-range ballistic missiles, however, changed this calculus by massively decreasing the time from launch order to annihilation, and the ballistic missile submarine carried this further by putting both powers perpetually in range of a decapitation strike – a disabling strike that would wipe out enemy command and launch capability.

This new strategic situation had two primary effects, both of which increased the possibility of accident and the cost to both players. First, both powers had to adopt a policy of “Launch on Warning” – that is, moving immediately to full annihilation based only on early warning, or even acting preemptively when one believes that an attack is or may be imminent. Second, both powers had to accelerate their own armament programs, both to maintain their own decapitation strike capability, and to ensure that they retained sufficient capacity to strike back even after an enemy decapitation strike.

It is a prisoner’s dilemma, plain and simple. And indeed, with each technological iteration, the differences in payoffs and punishments become larger and more pronounced. At some point the cost of the continuous arms race becomes overwhelming, but whichever player yields first also forfeits their status as a superpower.
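
For the more formally inclined, the structure of that dilemma can be laid out in miniature. The sketch below uses made-up payoff numbers of my own choosing; any values with the same ordering would tell the same story.

```python
# A toy payoff matrix for the arms-race dilemma (illustrative numbers
# only). Each power chooses to "escalate" or "yield"; each cell gives
# (A's payoff, B's payoff), higher being better.
PAYOFFS = {
    ("yield",    "yield"):    (3, 3),  # mutual restraint: cheap and safe
    ("escalate", "yield"):    (5, 0),  # A dominates; B forfeits superpower status
    ("yield",    "escalate"): (0, 5),  # the reverse
    ("escalate", "escalate"): (1, 1),  # ruinous arms race for both
}

# Whatever B does, A scores higher by escalating (5 > 3 and 1 > 0),
# so both sides "rationally" escalate and land on the worst joint outcome.
for b_choice in ("yield", "escalate"):
    a_if_yield = PAYOFFS[("yield", b_choice)][0]
    a_if_escalate = PAYOFFS[("escalate", b_choice)][0]
    print(f"If B plays {b_choice}, A prefers to escalate: {a_if_escalate > a_if_yield}")
```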

The same is, at least in my experience, true of social media use. Regular checking and posting is generally distracting and appears to have serious mental health costs, but so long as the cycle continues, it also serves as the foremost means of social power projection. And indeed, as Mean Girls teaches us, in adolescence as in nuclear politics, the only way to protect against an adversary is to maintain the means to retaliate at the slightest provocation.

This trend is not new. Mean Girls, which codified much of what we think of as modern adolescent politics and social dynamics, was made in 2004. Technology has not changed the underlying nature of adolescence, though it has accelerated and amplified its effects and costs. Nor is it limited to adolescents: the same kind of power structures and popularity contests that dominated high school recur throughout the world, especially as social media and the internet at large play a greater role in organizing our lives.

This is not inherently a bad thing if one is adept at social media. If you have the energy to post, curate, and respond on a continuous schedule, more power to you. I, however, cannot. I blame most of this on my disability, which limits my ability to handle large amounts of stimuli without becoming both physiologically and psychologically overwhelmed. The other part of this I blame on my perfectionist tendencies, which require that I make my responses complete and precise, and that I see through my interactions until I am sure that I have proven my point. While this is a decent enough mindset for academic debate, it is actively counterproductive on the social internet.

Moreover, continuous exposure to the actions of my peers reminded me of a depressing fact that I tried often to forget: that I was not with them. My disability is not so much a handicap in that it prevents me from doing things when I am with my peers, as in that it prevents me from being present with them in the first place. I become sick, which prevents me from attending school, which keeps me out of conversations, which means I’m not included in plans, which means I can’t attend gatherings, and so forth. Social media reminds me of this by showing me all the exciting things that my friends are doing while I am confined to bed rest.

It is difficult to remedy this kind of depression and anxiety. Stray depressive thoughts that have no basis in reality can, at least sometimes, and for me often, be talked apart when it is proven that they are baseless, and it is relatively simple to dismiss them when they pop up later. But these factual reminders that I am objectively left out – that I am the only one of my peers missing from these smiling pictures, that my existence seemingly is objectively sadder and less interesting – are far harder to argue against.

The History of the Embargo

I first got a Facebook account a little less than six years ago, on my fourteenth birthday. This was my first real social media presence to speak of, and was both the beginning of the end of parental restrictions on my internet consumption, and the beginning of a very specific window of my adolescence that I have since come to particularly loathe.

Facebook wasn’t technically new at this point, but it also wasn’t the immutable giant that it is today. It was still viewed as a game of the young, and it was entirely possible to find someone who wasn’t familiar with the concept of social media without being a total Luddite. Perhaps more relevantly, the first wave of people such as myself, who had grown up with the internet as a lower-case entity, were now of age to join social media. That is, these people had grown up never knowing a world where it was necessary to go to a library for information, or where information was something that was stored physically, or even where past stories were something held in one’s memory rather than on hard drives.

In this respect, I consider myself lucky that the official line of the New South Wales Department of Education and Training’s computer curriculum was, at the time I went through it, almost technophobic by modern standards, vehemently denouncing the evils of “chatrooms” and regarding the use of this newfangled “email” with the darkest suspicion. It didn’t give me real skills to equip me for the revolution that was coming, and that I would live through firsthand, but it did, I think, give me a sense of perspective.

Even if that curriculum was outdated by the time it got to me, it helped underscore how quickly things had changed in the few years before I had enrolled. This knowledge, even if I didn’t understand it at the time, helped to calibrate a sense of perspective and reasonableness that has been a moderating influence on my technological habits.

During the first two years or so of having a Facebook account, I fell down the rabbit hole of using social media. If I had an announcement, I posted it. If I found a curious photo, I posted it. If I had a funny joke or a stray thought, I posted it. Facebook didn’t take over my life, but it did become a major theatre of it. What was recorded and broadcast there seemed for a time to be as important as the actual conversations and interactions I had during school.

This same period, perhaps unsurprisingly, also saw a decline in my mental wellbeing. It’s difficult to tease apart a direct cause, as a number of different things all happened at roughly the same time: my physiological health deteriorated, some of my earlier friends began to grow distant from me, and I started attending the school that would continually throw obstacles in my path and refuse to accommodate my disability. But I do think my use of social media amplified the psychological effects of these events, especially inasmuch as it acted as a focusing lens on all the things that made me different and apart from my peers.

At the behest of those closest to me, I began to take breaks from social media. These helped, but given that they were always circumstantial or limited in time, their effects were accordingly temporary. Moreover, the fact that these breaks were an exception rather than a standing rule meant that I always returned to social media, and when I did, the chaos of catching up often undid whatever progress I might have made in the interim.

After I finally came to the conclusion that my use of social media was causing me more personal harm than good, I eventually decided that the only way I would be able to remove its influence was total prohibition. Others, perhaps, might find that they have the willpower to deal with shades of gray in their personal policies. And indeed, in my better hours, so do I. The problem is that I have found that social media is most likely to have its negative impacts when I am not in one of my better hours, but rather have been worn down by circumstance. It is therefore not enough for me to resolve that I should endeavor to spend less time on social media, or to log off when I feel it is becoming detrimental. I require strict rules that can only be overridden in the most exceedingly extenuating circumstances.

My solution was to write down the rules which I planned to enact. The idea was that those would be the rules, and if I could justify an exception in writing, I could amend them as necessary. Having this as a step helped to decouple the utilitarian action of checking social media from the compulsive cycle of escalation. If I had a genuine reason to use social media, such as using it to provide announcements to far-flung relatives during a crisis, I could write a temporary amendment to my rules. If I merely felt compelled to log on for reasons that I could not express coherently in a written amendment, then that was not a good enough reason.

This decision hasn’t been without its drawbacks. I am, without social media, undoubtedly less connected to my peers than I might otherwise have been, and the existing trend of my being the last person to know of anything has continued to intensify. But crucially, I am not so acutely aware of this trend that it has a serious impact one way or another on my day-to-day psyche. Perhaps some months hence I shall, upon further reflection, come to the conclusion that my current regime is beginning to inflict more damage than that which it originally remedied, and once again amend my embargo.

Arguments Against the Embargo

My reflections on my social media embargo have brought me stumbling upon two relevant moral quandaries. The first is whether ignorance can truly be bliss, and whether there is an appreciable distinction between genuine experience and hedonistic simulation. In walling myself off from the world I have achieved a measure of peace and contentment, at the possible cost of disconnecting myself from my peers, and to a lesser degree from the outside world. In philosophical terms, I have alienated myself, both from my fellow man and from my species-essence. Of course, the question of whether social media is a genuine solution to, or a vehicle of, alienation is a debate unto itself, particularly given my situation.

It is unlikely, if still possible, that my health would have allowed my participation in any kind of physical activity to which I could foreseeably have been invited as a direct result of an increased social media presence. Particularly given my deteriorating mental health at the time, it seems far more reasonable to assume that my presence would have been more of a one-sided affair: I would have sat, and scrolled, and become too self-conscious and anxious about the things that I saw to contribute in a way that would be noticed by others. With these considerations in mind, the question of authenticity of experience appears to be academic at best, and nothing for me to lose sleep over.

The second question regards the duty of expression. It has oft been posited, particularly amid the socio-political turmoils of late, that every citizen has a duty to be informed, and to make their voice heard; and that furthermore, in declining to take a position, we are, if not tacitly endorsing the greater evil, then at least tacitly declaring that all available positions are morally equivalent in our apathy. Indeed, I myself have made such arguments in the past as they pertain to voting, and to a lesser extent to advocacy in general.

The argument goes that social media is the modern equivalent of the colonial town square, or the classical forum, and that as the default venue for socio-political discussion, our abstract duty to be informed participants is thus transmogrified into a specific duty to participate on social media. This, combined with the vague Templar-esque compulsion to correct wrongs that also drives me to rearrange objects on the table, acknowledge others’ sneezes, and correct spelling, is not lost on me.

In practice, I have found that these discussions are, at best, Pyrrhic, and more often entirely fruitless: they cause opposition to become more and more entrenched, poison relationships, and convert no one, all the while creating a blight in what is supposed to be a shared social space. And as Internet shouting matches tend to be decided primarily by who blinks first, they create a situation in which any withdrawal, even for perfectly valid reasons such as, say, having more pressing matters than trading insults over tax policy, is viewed as concession.

While this doesn’t directly address the dilemma posited, it does make its proposal untenable. Taking to my social media to agitate is not particularly more effective than conducting a hunger strike against North Korea, and given my health situation, is not really a workable strategy. Given that ought implies can, I feel acceptably satisfied to dismiss any lingering doubts about my present course.

PSA: Don’t Press My Buttons

Because this message seems to have been forgotten recently, here is a quick public service announcement to reiterate what should be readily apparent.

Messing with someone’s life support is bad. Don’t do that.

I’m aware that there is a certain compulsion to press buttons, especially buttons that one isn’t supposed to press, or whose function one isn’t sure of. Resist the temptation. The consequences otherwise could be deadly. Yes, I mean that entirely literally. It’s called life support for a reason, after all. Going up to someone and starting to press random buttons on medical devices is often equivalent to wrapping a tight arm around someone’s neck. You probably (hopefully) wouldn’t greet a stranger with a stranglehold. So don’t start fiddling with sensitive medical equipment.

Additionally, if you ignore this advice, you should not be surprised when the person whose life you are endangering reacts in self defense. You are, after all, putting their life at risk, the same as if you put them in a stranglehold. There is a very good chance that they will react from instinct, and you will get hurt. You wouldn’t be the first person I’ve heard of to wind up with a bloody nose, a few broken ribs, a fractured skull, maybe a punctured lung… you get the idea.

Don’t assume that because something doesn’t look like a medical device, it’s fair game to mess with either. A lot of newer medical devices aimed at patients who want to avoid sticking out are designed to look like ordinary electronic devices. Many newer models have touch screens and sleek modern interfaces. What’s more, a lot of life support setups now include smartphones as a receiver and CPU for more complicated functions, making these smartphones medical devices in practice.

Moreover, even where there is no direct life support function, phones are often used as an integral part of one’s life support routine. For example, a patient may use their phone to convey medical data to their doctors for making adjustments. Or, a patient may rely on their phone as a means for emergency communication. While these applications do not have the same direct impact on physical safety, they are nevertheless critical to a person’s continued daily function, and an attack on such devices presents a disproportionate danger and causes corresponding psychological distress. Even relatively harmless phone pranks, which may not even impede the ordinary functioning of medical-related operations, are liable to cause such distress.

What is at issue here is not so much the actual impediment as the threat of impediment when it suddenly matters. For my own part, even complete destruction of my smartphone is not likely to put me in immediate physiological danger. It may, however, prove fatal if it later prevents me from summoning assistance when something goes awry. Thus, what could have been a relatively uneventful and easily handled situation with my full resources could become life threatening. As a result, any tampering with my phone, regardless of actual effect, causes serious anxiety for my future wellbeing.

It is more difficult in such situations to establish the kind of causal chain of events which could morally and legally implicate the offender in the end result. For that matter, it is difficult for the would-be prankster to foresee the disproportionate impact of their simple actions. Indeed, common pranks with electronic devices, such as switching contact information, reorganizing apps, and changing background photos, are so broadly considered normal and benign that it is hard to conceive that they could even be interpreted as a serious threat, let alone result in medical harm. Hence my writing this here.

So, if you have any doubt whatsoever about messing with someone else’s devices, even if they may not look like medical devices, resist the temptation.

Schoolwork Armistice

At 5:09pm EDT on the 16th of August of this year, I was sitting hunched over an aging desktop computer, working on the project that was claimed to be the main bottleneck between myself and graduation. It was supposed to be a simple project: reverse engineer and improve a simple construction toy. The concept is not a difficult one. The paperwork – that is, the engineering documentation which is supposed to be part of the “design process” that every engineer must invariably complete in precisely the correct manner – was also not terribly difficult, though it was grating, and, in my opinion, completely backwards and unnecessary.

In my experience tinkering with medical devices, improvising on-the-fly solutions in life-or-death situations is less a concrete process than a sort of spontaneous rabbit-out-of-the-hat wizardry. Any paperwork comes only after the problem has been attempted and solved, and only then to record results. This is only sensible: if, whenever my life support systems broke in the field, I waited to put them back together until after I had filled out the proper forms, charted the problem on a set of blueprints, and submitted it for witness and review, I would be dead. Now, admittedly this probably isn’t what needs to be taught to people who are going to be professional engineers working for a legally liable company. But I still maintain that for an introductory level course that is supposed to focus on proper methods of thinking, my way is more likely to be applicable to a wider range of everyday problems.

Even so, the problem doesn’t lie in paperwork. Paperwork, after all, can be fabricated after the fact if necessary. The difficult part lies in the medium I was expected to use. Rather than simply build my design with actual pieces, I was expected to use a fancy schmancy engineering program. I’m not sure why it is necessary for me to have to work ham-fistedly through another layer of abstraction which only seems to make my task more difficult by removing my ability to maneuver pieces in 3D space with my hands.

It’s worth noting that I have never at any point been taught to use this computer program; not by the course teacher, nor my tutor, nor the program itself. It is not that the program is intuitive to an uninitiated mind; quite the opposite, in fact, as the assumption seems to be that anyone using the program will have had a formal engineering education, and hence be well versed in technical terminology, standards, notation, and jargon. Anything and everything that I have incidentally learned of this program comes either from blunt trial and error, or judicious use of Google searches. Even now I would not say that I actually know how to use the program; merely that I have coincidentally managed to mimic the appearance of competence long enough to be graded favorably.

Now, for the record, I know I’m not the only one to come out of this particular course feeling this way. The course is advertised as being largely “self motivated”, and the teacher is known for being distinctly laissez-faire provided that students can meet the letter of the course requirements. I knew this much when I signed up. Talking to other students, I found general agreement that the course is not so much self motivated as it is, to a large degree, self taught. This was especially true in my case, as, per my usual pattern, I missed a great deal of class time, and given the teacher’s nature, was largely left on my own to puzzle through how exactly I was supposed to make the thing on my computer look like the fuzzy black and white picture attached to the packet of makeup work.

Although probably not the most frustrating course I have taken, this one is certainly a contender for the top three, especially the parts where I was forced to use the computer program. It got to the point where, at 5:09, I became so completely stuck, and as a direct result so overwhelmingly frustrated, that the only two choices left before me were, to wit, as follows:

Option A
Make a hasty flight from the computer desk, and go for a long walk with no particular objective, at least until the climax of my immediate frustration has passed, and I am once again able to think of some new approach in my endless trial-and-error session, besides simply slinging increasingly harsh and exotic expletives at the inanimate PC.

Option B
Begin my hard earned and well deserved nervous breakdown in spectacular fashion by flipping over the table with the computer on it, trampling over the shattered remnants of this machine and bastion of my oppression, and igniting my revolution against the sanity that has brought me nothing but misery and sorrow.

It was a tough call, and one which I had to think long and hard about before committing. Eventually, my nominally better nature prevailed. By 7:12pm, I was sitting on my favorite park bench in town, sipping a double chocolate malted milkshake from the local chocolate shop, which I had justified to myself as serving my doctors’ wishes that I gain weight, and putting the finishing touches on a blog post about Armageddon, feeling, if not contented, then at least one step back from the brink that I had worked myself up to.

I might have called it a day after I walked home, except that I knew that the version of the program on my computer – the version with which all my work files had been saved, and which had been required for the course – was being made obsolete and unusable by the developers five days hence. I was scheduled to depart for my eclipse trip the next morning. So, once again compelled against my desires and even my good sense by forces outside my control, I set back to work.

By 10:37pm, I had a working model on the computer. By 11:23, I had managed to save and print enough documentation that I felt I could tentatively call my work done. At 11:12am the following morning, August 17th, running about two hours behind my family’s initial departure plans (which is to say, roughly on schedule for us), I set the envelope with the completed work on the counter for my tutor to collect after I departed, so that she might pass it along to the course teacher, who would point out whatever flaws I needed to address – which, in all probability, would take at least another two weeks of work.

This was the pattern I had learned to expect from my school. They had told me that I was close to being done enough times, only to disappoint when they discovered that they had miscalculated the credit requirements, or overlooked a clause in the relevant policy, or misplaced a crucial form, or whatever other excuse of the week they could conjure, that I had simply grown numb to it. I had come to consider myself a student the same way I consider myself disabled: maybe not strictly permanently, but not temporarily in a way that would lead me to ever plan otherwise.

Our drive southwest was broadly uneventful. On the second day we stopped for dinner about an hour short of our destination at Culver’s, where I traditionally get some variation of chocolate malt. At 9:32 EDT August 18th, my mother received the text message from my tutor: she had given the work to the course teacher who had declared that I would receive an A in the course. And that was it. I was done.

Perhaps I should feel more excited than I do. Honestly though I feel more numb than anything else. The message itself doesn’t mean that I’ve graduated; that still needs to come from the school administration and will likely take several more months to be ironed out. This isn’t victory, at least not yet. It won’t be victory until I have my diploma and my fully fixed transcript in hand, and am able to finally, after being forced to wait in limbo for years, begin applying to colleges and moving forward with my life. Even then, it will be at best a Pyrrhic victory, marking the end of a battle that took far too long, and cost far more than it ever should have. And that assumes that I really am done.

This does, however, represent something else. An armistice. Not an end to the war per se, but a pause, possibly an end, to the fighting. The beginning of the end of the end. The peace may or may not hold; that depends entirely on the school. I am not yet prepared to stand down entirely and commence celebrations, as I do not trust the school to keep their word. But I am perhaps ready to begin to imagine a different world, where I am not constantly engaged in the same Sisyphean struggle against a never ending onslaught of schoolwork.

The nature of my constant stream of makeup work has meant that I have not had proper free time in at least half a decade. While I have, in recent years and at the insistence of my medical team and family, taken steps to ensure that my life is not dominated solely by schoolwork – including this blog and many of the travels and projects documented on it – the ever-looming presence of schoolwork has never ceased to cast a shadow over my life. In addition to causing great anxiety and distress, this has limited my ambitions and my enjoyment of life.

I look forward to a change of pace from this dystopian mental framework, now that it is no longer required. In addition to rediscovering the sweet luxury of boredom, I look forward to being able to write uninterrupted, and to being able to move forward on executing several new and exciting projects.

Incremental Progress Part 4 – Towards the Shining Future

I have spent the last three parts of this series bemoaning various aspects of the cycle of medical progress for patients enduring chronic health issues. At this point, I feel it is only fair that I highlight some of the brighter spots.

I have long come to accept that human progress is, with the exception of the occasional major breakthrough, incremental in nature; a reorganization here paves the way for a streamlining there, which unlocks the capacity for a minor tweak here and there, and so on and so forth. However, while this does help adjust one’s day to day expectations from what is shown in popular media to something more realistic, it also risks minimizing the progress that is made over time.

To return to an example from part 2 that everyone should be familiar with, consider the progress being made on cancer. Here is a chart detailing the rate of FDA approvals for new treatments over a ten-year period, alongside the overall average five-year survival rate. The approval rate is a decent, if oversimplified, metric for how a given patient’s options have increased, and hence how specific and targeted their treatment can be (which has the capacity to minimize disruption to quality of life).

Does this progress mean that cancer is cured? No, not even close. Is it close to being cured? Not particularly.

It’s important to note that even as these numbers tick up, we’re not intrinsically closer to a “cure”. Coronaviruses, which cause the common cold, have a mortality rate pretty darn close to zero, at least in the developed world, and that number gets even closer to zero if we ignore “novel” coronaviruses like SARS and MERS, and focus only on the rare person who has died as a direct result of the common cold. Yet I don’t think anyone would call the common cold cured. Coronaviruses, like cancer, aren’t cured, and there’s a reasonable suspicion on the part of many that they aren’t really curable in the sense that we’d like.

“Wait,” I hear you thinking, “I thought you were going to talk about bright spots.” Well, yes: while it’s true that progress on a full cure is inconclusive at best, material progress is still being made every day, for both colds and cancer. While neither is at present curable, both are increasingly treatable, and this is where the real progress is happening. Better treatments, not cures, are whence all the media buzz is generated, and why I can attend a conference about my disease year after year, hear all the horror stories of my comrades, and still walk away feeling optimistic about the future.

So, what am I optimistic about this time around, even when I know that progress is so slow coming? Well, for starters, there’s life expectancy. I’ve mentioned a few different times here that my projected lifespan is significantly shorter than the statistical average for someone of my lifestyle, medical issues excluded. While this is still true, this is becoming less true. The technology which is used for my life support is finally reaching a level of precision, in both measurement and dosing, where it can be said to genuinely mimic natural bodily functions instead of merely being an indefinite stopgap.

To take a specific example, new infusion mechanisms now allow dosing precision down to the ten-thousandth of a milliliter. For reference, the average raindrop is between 0.5 and 4 millimeters across, which works out to a volume of roughly a ten-thousandth to a few hundredths of a milliliter. Given that a single thousandth of a milliliter in either direction at the wrong time can be the difference between being a productive member of society and being dead, this is a welcome improvement.
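
Since it is easy to lose one’s place among the zeroes, here is the arithmetic spelled out. This is a back-of-the-envelope sketch using my own rough numbers: raindrops of 0.5 to 4 millimeters in diameter treated as perfect spheres, against the 0.0001 mL dosing step quoted above.

```python
# Back-of-the-envelope comparison of raindrop volume to a 0.0001 mL
# dosing step (illustrative numbers; raindrops treated as spheres).
import math

PUMP_STEP_ML = 0.0001  # one ten-thousandth of a milliliter

def drop_volume_ml(diameter_mm: float) -> float:
    """Volume of a spherical drop, converted from mm^3 to mL."""
    radius_mm = diameter_mm / 2
    volume_mm3 = (4 / 3) * math.pi * radius_mm ** 3
    return volume_mm3 / 1000  # 1 mL = 1000 mm^3

for d in (0.5, 2.0, 4.0):
    v = drop_volume_ml(d)
    print(f"{d} mm drop ~ {v:.5f} mL ~ {v / PUMP_STEP_ML:.1f} dosing steps")
```

Run it, and the smallest drops come out at less than one dosing step apiece; even a fat 4 mm drop spans only a few hundred steps.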

Such improvements in delivery mechanisms have also enabled innovation on the drugs themselves, by making more targeted treatments with a smaller window for error usable by a wider audience, which in turn makes them more commercially viable. Better drugs and dosing have likewise raised the bar for infusion cannulas, and at the conference, a new round of cannulas was already being hyped as the next big breakthrough to hit the market imminently.

In the last part I mentioned, though did not elaborate at length on, the appearance of AI-controlled artificial organs being built using DIY processes. These systems now exist, not only in laboratories, but in homes, offices, and schools, quietly taking in more data than the human mind can process, and making decisions with a level of precision and speed that humans cannot dream of achieving. We are equipping humans as cyborgs with fully autonomous robotic parts to take over functions they have lost to disease. If this does not excite you as a sure sign of the brave new future that awaits all of us, then frankly I am not sure what I can say to impress you.

Like other improvements explored here, this development isn’t so much a breakthrough as it is a culmination. After all, all of the included hardware in these systems has existed for decades. The computer algorithms are not particularly different from the calculations made daily by humans, except that they contain slightly more data and slightly fewer heuristic guesses, and can execute commands faster and more precisely than humans. The algorithms are simple enough that they can be run on a cell phone, and have an effectiveness on par with any other system in existence.
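
To make “simple enough to run on a cell phone” concrete, here is a minimal sketch of the kind of feedback calculation at the heart of such a system. This is entirely my own illustration, not any actual project’s algorithm, and the target, sensitivity, and cap values are made up.

```python
# Illustrative only: a bare-bones proportional controller of the sort a
# closed-loop system elaborates on. Real systems add trend prediction,
# dose-stacking checks, alarms, and logging; this is just the core arithmetic.

TARGET = 100.0       # desired sensor reading (made-up units)
SENSITIVITY = 50.0   # how far one dose unit moves the reading (made up)
MAX_DOSE = 2.0       # hard safety cap on any single correction

def correction_dose(reading: float) -> float:
    """Dose that would bring the current reading back toward the target."""
    error = reading - TARGET
    dose = error / SENSITIVITY
    return max(0.0, min(dose, MAX_DOSE))  # never negative, never above the cap

# Every few minutes: read the sensor, compute, command the pump.
print(correction_dose(180.0))  # -> 1.6
```

The arithmetic is the easy part, and no more sophisticated than what patients already do in their heads; the machine’s advantage is that it repeats it tirelessly, every few minutes, around the clock.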

These DIY initiatives have already sent shockwaves throughout the medical device industry – through both the companies themselves and the regulators that were previously taking their sweet time in approving new technologies – acting as a catalyst for a renewed push for commercial innovation. But deeper than this, a far greater change is also taking root: a revolution not so much in technology or application as in thought.

If my memory and math are on point, this has been the eighth year since I started attending this particular conference, out of ten years dealing with the disease that is its topic, among my other diagnoses. While neither of these stretches is long enough to have proper capital-H historical context, in the span of a single lifetime, especially for a relatively young person such as myself, I do believe that ten or even eight years is long enough to reflect upon in earnest.

Since I started attending this conference, but especially within the past three years, I have witnessed, and been the subject of, a shift in tone and demeanor. When I first arrived, the tone at this conference seemed to be, as one might expect, one primarily of commiseration. Yes, there was solidarity, and all the positive emotion that comes from being with people like oneself, but this was, at best, a bittersweet feeling. People were glad to have met each other, but still resentful to have been put in the unenviable circumstances that dictated their meeting.

More recently, however, I have seen and felt more and more an optimism accompanying these meetings. Perhaps it is the consistently record-breaking attendance, which demonstrates, if nothing else, that we stand united against the common threat to our lives, and against the political and corporate forces that would hold back our progress towards being normal, fully functioning humans. Perhaps it is merely the promise of free trade show goodies and meals catered to a medically restricted diet. But I think it is something different.

While a full cure, of the sort that would allow me and my comrades to leave the life support at home, serve in the military, and the like, is still far off, today more than ever before, the future looks, if not bright, then at least survivable.

In other areas, one of the main genetic research efforts, which has maintained a presence at the conference, is now closing in on the genetic and environmental triggers behind the elusive autoimmune reaction that is known to cause the disease, and on various methods to prevent and reverse it. Serious talk of future gene therapies, the kind of science fiction that has traditionally been the stuff of comic books and film, is already ongoing. It is a strange and exciting thing to finish an episode of a science-fiction drama television series focused on near-future medical technology (and how evil minds exploit it) in my hotel room, only to walk into the conference room to see posters advertising clinical trial sign-ups and planned product releases.

It is difficult to be so optimistic in the face of incurable illness. It is even more difficult to remain optimistic after many years of only incremental progress. But pessimism too has its price. It is not the same emotional toll as the disappointment which naive expectations of an imminent cure are apt to bring; rather, it is an opportunity cost. It is the cost of missing out on adventures, of missing major life milestones, of being conservative rather than opportunistic.

Much of this pessimism, especially in the past, has been inspired and cultivated by doctors themselves. In a way, this makes sense. No doctor in their right mind is going to say “Yes, you should definitely take your savings and go on that cliff diving excursion in New Zealand.” Medicine is, by its very nature, conservative and risk averse. Much like the scientist, a doctor will avoid saying anything until after it has been tested and proven beyond a shadow of a doubt. As noted previously, this is extremely effective in achieving specific, consistent, and above all, safe, treatment results. But what about when the situation being treated is so all-encompassing in a patient’s life so as to render specificity and consistency impossible?

Historically, the answer has been to impose restrictions on patients’ lifestyles. If laboratory conditions don’t align with real life for patients, then we’ll simply change the patients. This approach can work, at least for a while. But patients are people, and people are messy. Moreover, when patients include children and adolescents, who, for better or worse, are generally inclined to pursue short term comfort over vague notions of future health, patients will rebel. Eventually, trading ten years at the end of one’s life for the ability to live the remainder more comfortably comes to seem like the more balanced proposition.

The concept of such a tradeoff is inevitably controversial. I personally take no particular position on it, other than that it is a true tragedy of the highest proportions that anyone should be forced into such a situation. With that firmly stated, many of the recent breakthroughs, particularly in new delivery mechanisms and patient comfort, and especially in the rapidly growing DIY movement, have focused on this tradeoff. The thinking has shifted from a “top-down” approach of finding a full cure, to a more grassroots approach of making life more livable now, and making inroads into future scientific progress at a later date. It is no surprise that many of the groups dominating this new push have either been grassroots nonprofits or, where they have been commercial, primarily Silicon Valley-style, engineer-founded startups.

This in itself is already a fairly appreciable and innovative thesis on modern progress, yet one I think has been tossed around enough to be reasonably defensible. But I will go a step further. I submit that much of the optimism and positivity; the empowerment and liberation which has been the consistent takeaway of myself and other authors from this and similar conferences, and which I believe has become more intensely palpable in recent years than when I began attending, has been the result of this same shift in thinking.

Instead of competing against each other and shaming each other over inevitable bad blood test results, as was my primary complaint during conferences past, the new spirit is one of camaraderie and solidarity. It is now increasingly understood at such gatherings, and among medical professionals in general, that fear and shame tactics are not effective in the long run, and do nothing to mitigate the damage of patients deciding that survival at the cost of living simply isn’t worth it [1]. Thus the focus has shifted from commiseration over common setbacks, to collaboration and celebration over common victories.

Thus it will be seen that the feeling of progress, and hence of hope for the future, seems to lie not so much in renewed pushes for a cure as in more targeted treatments and better quality of life. Long term patients such as myself have largely given up hope in the vague, messianic cure, to be discovered all at once at some undetermined future date. Instead, our hope for a better future – indeed, for a future at all – lies in the incremental, but critically, consistent improvement upon the technologies which we are already using, and which have already been proven. Our hope lies in understanding that bad days and failures will inevitably come, and in supporting, not shaming, each other when they do.

While this may not qualify for being strictly optimistic, as it does entail a certain degree of pragmatic fatalism in accepting the realities of disabled life, it is the closest I have yet come to optimism. It is a determination that even if things will not be good, they will at least be better. This mindset, unlike rooting for a cure, does not require constant fanatical dedication to fundraising, nor does it breed innovation fatigue from watching the scientific media like a hawk, because it prioritizes the imminent, material, incremental progress of today over the faraway promises of tomorrow.


[1] Footnote: I credit the proximal cause of this cognitive shift in the conference to the progressive aging of the attendee population, and more broadly, to the aging and expanding afflicted population. As more people find themselves in the situation of a “tradeoff” as described above, the focus of care inevitably shifts from disciplinarian deterrence and prevention to one of harm reduction. This is especially true of those coming into the 13-25 demographic, who seem most likely to undertake such acts of “rebellion”. This is, perhaps unsurprisingly, one of the fastest growing demographics for attendance at this particular conference over the last several years, as patients who began attending in childhood come of age.

Incremental Progress Part 3 – For Science!

Previously, I have talked about some of the ways that patients with chronic health issues and medical disabilities feel impacted by the research cycle. Part one of this ongoing series detailed a discussion I participated in at an ad-hoc support group of 18-21 year olds at a major health conference. Part two detailed some of the things I wish I had been able to add, based on my own experiences and the words of those around me, but could not due to time constraints.

After talking at length about the patient side of things, I’d like to pivot slightly to the clinical side. If we go by what most patients know about the clinical research process, here is a rough picture of how things work:

First, a conclave of elite doctors and professors gathers in secret, presumably in a poorly lit conference room deep beneath the surface of the earth, and holds a brainstorming session of possible questions to study. Illicit substances may or may not be involved in this process, as the creativity required to come up with such obscure and esoteric concerns as “why do certain subspecies of rats have funny looking brains?” and “why do stressful things make people act stressed out?” is immense. At the end of the session, all of the ideas are written down on pieces of parchment, thrown inside a hat, and drawn randomly to decide who will study what.

Second, money is extracted from the public at large by showing people on the street pictures of cute, sad looking children being held at needle-point by an ominously dressed person in a lab coat, with the threat that unless that person hands over all of their disposable income, the child will be forced to receive several injections per day. This process is repeated until a large enough pile of cash is acquired. The cash is then passed through a series of middlemen in dark suits smoking cigars, who all take a small cut for all their hard work of carrying the big pile of cash.

At this point, the cash is loaded onto a private jet and flown out to the remote laboratories hidden deep in the Brazilian rainforests, the barren Australian deserts, the lost islands of the Arctic and Antarctic regions, and inside the active volcanoes of the Pacific islands. These facilities are pristine, shining snow white and steel grey, outfitted with all the latest technology from a mid-century science fiction film. All of these facilities are funded either by national governments or by the rich elite of major multinational corporations, who see to all of the upkeep and grant work, leaving only the truly groundbreaking work to the trained scientists.

And who are the scientists? The scientist is a curious creature. First observed in 1543, the scientist was hypothesized by naturalists to be a former human transmogrified by the devil himself in a Faustian bargain, whereby the subject loses most interpersonal skills and material wealth in exchange for incredible intelligence and a steady, monotonous career playing with glassware and measuring equipment. No one has ever seen a scientist in real life, although much footage of the scientist exists online, usually flaunting its immense funding and wearing its trademark lab coat and glasses. Because of the abundance of such footage, yet lack of real-life interactions, it has been speculated that scientists may possess some manner of cloaking which renders them invisible and inaudible outside of their native habitat.

The scientists spend their time exchanging various colored fluid between Erlenmeyer flasks and test tubes, watching to see which produces the best colors. When the best colors are found, a large brazier is lit with all of the paper currency acquired earlier. The photons from the fire reaction may, if the stars are properly aligned, hit the colored fluid in such a way as to cause the fluid to begin to bubble and change into a different color. If this happens often enough, the experiment is called a success.

The scientists spend the rest of their time meticulously recording the precise color that was achieved, which will provide the necessary data for analyst teams to divine the answers to the questions asked. These records are kept not in English, or any other commonly spoken language, but in Scientific, which is written and understood by only a handful of non-scientists, mainly doctors, teachers, and engineers. The process of translation is arduous, and in order to be fully encrypted requires several teams working in tandem. This process is called peer review, and, at least theoretically, this method makes it far more difficult to publish false information, because the arduousness of the process provides an insurmountable barrier to those motivated by anything other than the purest truth.

Now, obviously all of this is complete fiction. But the fact that I can make all of this up with a straight face speaks volumes, both about the lack of public understanding of how modern clinical research works, and about the lack of transparency of the research itself. For as much as we cheer on the march of scientific advancement and technological development, for as much media attention as is spent on new results hot off the presses, and for as much as the stock image of the bespectacled expert adorned in a lab coat and armed with test tubes resounds in both popular culture and the popular consciousness, the actual details of what research is being done, and how it is being executed, are notably opaque.

Much of this is by design, or is a direct consequence of how research is structured. The scientific method by which we separate fact from fiction demands a level of rigor that is often antithetical to human nature, which requires extreme discipline and restraint. A properly organized double-blind controlled trial, the cornerstone of true scientific research, requires that the participants and even the scientists measuring results be kept in the dark as to what they are looking for, to prevent even the subtlest of unconscious biases from interfering. This approach, while great at testing hypotheses, means that the full story is only known to a handful of supervisors until the results are ready to be published.
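
To make the mechanics concrete, here is a toy sketch of how such an allocation can be kept blind. This is my own illustration, not any trial’s actual protocol: a coordinator randomizes participants between arms, everyone else works only with opaque codes, and the key is sealed away until the outcomes are all recorded.

```python
# Toy illustration of double-blind allocation (not a real trial protocol).
import random
import secrets

participants = ["P001", "P002", "P003", "P004", "P005", "P006"]

# Equal-sized arms, shuffled so no one can predict the sequence.
arms = ["treatment", "placebo"] * (len(participants) // 2)
random.shuffle(arms)

key = {}          # opaque code -> actual arm; sealed until unblinding
assignments = {}  # participant -> opaque code; all anyone else sees

for pid, arm in zip(participants, arms):
    code = secrets.token_hex(4)  # label on an identical-looking kit
    key[code] = arm
    assignments[pid] = code

# Participants, clinicians, and analysts see only the codes...
print(assignments)
# ...and the key is opened only after every outcome has been recorded.
```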

The standard of scientific writing is also incredibly rigorous. In professional writing, a scientist is not permitted to make any claims or assumptions unless they have either just proven them themselves, in which case they are expected to provide full details of their data and methodology, or can directly cite a study that did so. For example, a scientist cannot simply say that the sky is blue, no matter how obvious this may seem. Nor even can a scientist refer to some other publication in which the author agreed that the sky is blue, as a journalist might while providing citations for a story. A scientist must find the original data proving that the sky is blue, and that it is consistently blue, and so forth, and provide the documentation for others to cross-check the claims themselves.

These standards are not only obligatory for those who wish to receive recognition and funding, but they are enforced for accreditation and publication in the first place. This mindset has only become more entrenched as economic circumstances have caused funding to become more scarce, and as political and cultural pressure have cast doubts on “mainstream institutions” like academia and major research organizations. Scientists are trained to only give the most defensible claims, in the most impersonal of words, and only in the narrow context for which they are responsible for studying. Unfortunately, although this process is unquestionably effective at testing complex hypotheses, it is antithetical to the nature of everyday discourse.

It is not, as my colleague put it during our conference session, that “scientists suck at marketing”, but rather that marketing is fundamentally incongruous with the mindset required for scientific research. Scientific literature ideally attempts to lay out the evidence with as little human perspective as possible and let the facts speak for themselves, while marketing is in many respects the art of conjuring and manipulating human perspective, even where such perspectives diverge from reality.

Moreover, the consumerist mindset of our capitalist society amplifies this discrepancy. The constant arms race between advertisers, media, and political factions means that we are awash in information. This information is targeted to us, adjusted to our preferences, and continually served up on a silver platter. We are taught that our arbitrary personal views are fundamentally righteous, that we have no need to change our views unless it suits us, and that if there is really something that requires any sort of action or thought on our part, that it will be similarly presented in a pleasant, custom tailored way. In essence, we are taught to ignore things that require intellectual investment, or challenge our worldview.

There is also the nature of funding. Because it is so difficult to ensure that trials are actually controlled, and to write the results in such a counterintuitive way, the costs of good research can be staggering, and finding funding can be a real struggle. Scientists may be forced to work under restrictions, or to tailor their research to only the most profitable applications. Results may not be shared to prevent infringement, or to ensure that everyone citing the results is made to pay a fee first. I could spend pages on different stories of technologies that could have benefited humanity, but were kept under wraps for commercial or political reasons.

But of course, it’s easy to rag on antisocial scientists and pharmaceutical companies. And it doesn’t really get to the heart of the problem. The problem is that, for most patients, especially those who aren’t enrolled in clinical trials and don’t necessarily have access to the latest devices, the whole world of research is a black hole into which money is poured with no apparent benefit in return. Maybe, if they follow the news, or hear about it from excited friends and relations (see previous section), they might be aware of a few very specific discoveries, usually involving curing one or two rats out of a dozen tries.

Perhaps, if they are inclined towards optimism, they will be able to look at the trend over the last several decades towards better technology and better outcomes. But in most cases, the truly noticeable everyday changes seem to occur only long after they have become obvious to the users. The process from patient complaints about a medical device to a marketed fix is agonizingly slow, especially in a non-critical area like usability and quality of life, which does not carry the same profit incentive for insurers to apply pressure.

Many of these issues aren’t research problems so much as manufacturing and distribution problems. The bottleneck in making most usability tweaks, the ones that patients notice and appreciate, isn’t in research, or even usually in engineering, but in getting a whole new product approved by executives, shareholders, and of course, regulatory bodies. (Again, this is another topic that I could, and probably will at some future date, rant on about for several pages, but suffice it to say that when US companies complain about innovation being held up by the FDA, their complaints are not entirely without merit).

Even after such processes are eventually finished, there is the problem of insurance. Insurance companies are, naturally, incredibly averse to spending money on anything unless and until it has been proven beyond a shadow of a doubt that it is not only safe, but cost effective. Especially for basic, low income plans, change can come at a glacial pace, and for state-funded services, convincing legislators to adjust statutes to permit funding for new innovations can be a major political battle. This doesn’t even begin to take into account the various negotiated deals and alliances between certain providers and manufacturers that make it harder for new breakthroughs to gain traction (Another good topic for a different post).

But these are economic problems, not research problems. For that matter, most of the supposed research problems are really perception problems. So why am I talking about markets and marketing when I said I was going to talk about research?

Because for most people, the notions of “science” and “progress” are synonymous. We are constantly told, by our politicians, by our insurers, by our doctors, and by our professors that not only do we have the very best level of care that has ever been available in human history, but that we also have the most diligent, most efficient, most powerful organizations and institutions working tirelessly on our behalf to constantly push forward the frontier. If we take both of these statements at face value, then it follows that anything that we do not already have is a research problem.

For as much talk as there was during our conference sessions about how difficult life was, how so very badly we all wanted change, and how disappointed and discouraged we have felt over the lack of apparent progress, it might be easy to overlook the fact that far better technologies than are currently used by anyone in that room already exist. At this very moment, there are patients going about their lives using systems that amount to AI-controlled artificial organs. These systems react faster and more accurately than humans could ever hope to, and the clinical results are obvious.

The catch? None of these systems are commercially available. None of them have even been submitted to the FDA. A handful of these systems are open source DIY projects, and so can be cobbled together by interested patients, though in many cases this requires patients to go against medical advice, and take on more engineering and technical responsibility than is considered normal for a patient. Others are in clinical trials, or more often, have successfully completed their trials and are waiting for manufacturers to begin the FDA approval process.

This bottleneck, combined with the requisite rigor of clinical trials themselves, is what has given rise to the stereotype that modern research is primarily chasing after its own tail. This perception makes even realistic progress seem far off, and makes it all the more difficult to appreciate what incremental improvements are released.

Incremental Progress Part 2 – Innovation Fatigue

This is part two of a multi-part perspective on patient engagement in charity and research. Though not strictly required, it is strongly recommended that you read part one before continuing.


The vague pretense of order in the conversation, created by the presence of the few convention staff members, broke all at once, as several dozen eighteen-to-twenty-one-year-olds all rushed to get in their two cents on the topic of fundraising burnout (see previous section). Naturally, this was precisely the moment when I struck upon what I wanted to say. The jumbled thoughts and feelings that had hinted at something to add while other people were talking suddenly crystallized into a handful of points I wanted to make, all clustered around a phrase I had heard a few years earlier.

Not one to interrupt someone else, and also wanting undivided attention in making my point, I attempted to wait until the cacophony of discordant voices became more organized. And, taking my example from similar moments earlier in my life when I had something I wished to contribute before a group, I raised my hand and waited for silence.

Although the conversation was eventually brought back under control by some of the staff, I never got a chance to make my points. The block of time we had been allotted in the conference room ran out, and the hotel staff were anxious to get the room cleared and organized for the next group.

And yet, I still had my points to make. They still resonated within me, and I honestly believed that they might be both relevant and of interest to the other people who were in that room. I took out my phone and jotted down the two words which I had pulled from the depths of my memory: Innovation Fatigue.

That phrase has actually come to mean several different things to different groups, so I shall spend a moment on definitions before moving forward. In research groups and think tanks, the phrase is essentially a stand-in for generic mental and psychological fatigue. In the corporate world, it refers to a phenomenon of diminishing returns on creative, “innovative” projects, which often comes about as a result of attempts to force “innovation” on a regular schedule. More broadly, the phrase has come to mean an opposition to “innovation” when used as a buzzword in the vein of “synergy” and “ideate”.

I first came across this term in a webcomic, of all places, where it was used in a science fiction context to explain why the society depicted, which has advanced technology such as humanoid robots, neurally integrated prostheses, luxury commercial space travel, and artificial intelligence, is so similar to our own, at least culturally. That is to say, technology continues to advance at the exponential pace it has maintained across recorded history, but in a primarily incremental manner. Most people, whether out of learned complacency or a psychological defense mechanism against constant hysteria, therefore act as though all is as it always has been, and are neither impressed nor excited by the prospects of the future.

In addition to the feeling of fundraising burnout detailed in part one, I often find that I suffer from innovation fatigue as presented in the comic, particularly when it comes to medical research that ought to directly affect my quality of life, or promises to in the future. And what I heard from other patients during our young adults sessions has led me to believe that this is a fairly common feeling.

It is easy to be pessimistic about the long-term outlook with chronic health issues. Almost by definition, the outlook is worse than average, and the nature of human biology is such that the long-term outlook is often dictated by the tools we have today. After all, even if the messianic cure arrives perfectly on schedule in five to ten years (for the record, the cure has been ten years away for the last half-century), that may not matter if things take a sharp turn for the worse six months from now. Everyone already knows someone for whom the cure came too late. And if the best way to predict future results, as we are told, is past behavior, then it would be accurate to say that no serious progress is likely to be made before it is too late.

This is not to say that progress is not being made. On the contrary, scientific progress is continuous and universal across all fields. Over the past decade alone, there has been consistent, exponential progress in not only quality of care and quality of health outcomes, but quality of life. Disease, where it is not less frequent, is at least less impactful. Nor is this progress being made in secret. Indeed, amid all the headlines about radical new treatment options, it can be easy to forget that the diseases they are made to treat still have a massive impact. And this is precisely part of the problem.

To take an example that will be familiar to a wider audience: cancer. It seems that in a given week, there is at least one segment on the evening TV news about some new treatment, early detection method, or some substance or habit to avoid in order to minimize one’s risk. Sometimes these segments play every day, or even multiple times per day. When I checked my online news feed, one in every four stories concerned improvements in the state of cancer; to be precise, one was a list of habits to avoid, while another was about a “revolutionary treatment [that] offers new hope to patients”.

If you had just been diagnosed with cancer, you would be forgiven for thinking that, with all this seemingly daily progress, the path forward would be relatively simple and easy to understand. It would be easy for someone who knows nothing else to get the impression that cancer treatment is fundamentally easy nowadays. This is obviously untrue, or at least grossly misleading. Even as cancer treatments become more effective and better targeted, the impact on life and lifestyle remains massive.

It is all well and good to be optimistic about the future. For my part, I enjoy tales about the great big beautiful tomorrow shining at the end of the day as much as anyone. Inasmuch as I have a job, it is talking to people about new and exciting innovations in their medical field, and how they can best take advantage of them as soon as possible, for the least cost possible. I don’t get paid to do this; I volunteer because I am passionate about keeping progress moving forward, and because some people have found that my viewpoint and manner of expression are uniquely helpful.

However, this cycle of minor discoveries, followed by a great deal of public overstatement and media excitement which never (or at least so seldom as to appear never) quite lives up to the hype, is exhausting. Active hoping in the short term, as distinct from long-term hope for future change, is acutely exhausting. Moreover, the routine of having to answer every minor breakthrough with some statement to interested but not personally versed friends and relations, who see media hyperbole about (steps towards) a cure and immediately begin rejoicing, is quite tiring.

Furthermore, these almost weekly interactions, in addition to carrying all of the normal pitfalls of socio-familial transactions, have a unique capability to color the perceptions of those who are closest to oneself. The people who are excited about these announcements, because they know, or else believe, that they represent an end to, or at least a decrease in, one’s medical burden, are often among those whom one wishes least to alienate with casual pessimism.

For indeed, failing to respond with appropriate zeal to each and every announcement does lead to public branding as a pessimist, even a depressive. Or worse: it suggests that one is not taking all appropriate actions to combat one’s disease, and is therefore undeserving of sympathy and support. After all, if the person on the TV says that cancer is curable nowadays, and your cancer hasn’t been cured yet, it must be because you’re not trying hard enough. Clearly you don’t deserve my tax dollars and donations to fund your treatment and research. After all, you don’t really need them anymore. Possibly you are deliberately causing harm to yourself, and therefore are insane, and I needn’t listen to anything you say to the contrary. Hopefully it is easy to see how frustrating this dynamic can become, even when it is not exaggerated quite to the point of satire.

One of the phrases that I heard repeated a lot at the conference was “patient investment in research and treatment”. When patients aren’t willing to invest emotionally and mentally in their own treatment, in their own wellbeing, the problems are obvious. To me, the cause, or at least one of the causes, is equally obvious. Patients aren’t willing to invest because it is a risky investment. The up-front cost of pinning all of one’s hopes and dreams for the future on a research hypothesis is enormous. The risk is high, as anyone who has studied the economics of research and development knows. Payouts aren’t guaranteed, and when they do come, they will be incremental.

Patients who aren’t “investing” in state of the art care aren’t doing so because they don’t want to get better care. They aren’t investing because they either haven’t been convinced that it is a worthwhile investment, or are emotionally and psychologically spent. They have tried investing, and have lost out. They have developed innovation fatigue. Tired of incremental progress which does not offer enough payback to earnestly plan for a better future, they turn instead to what they know to be stable: the pessimism here and now. Pessimism isn’t nearly as shiny or enticing, and it doesn’t offer the slim chance of an enormous payout, but it is reliable and predictable.

This is the real tragedy of disability, and I am not surprised in the slightest that, now that sufficient treatments have been discovered to enable what amounts to endlessly repeatable stopgaps, but not a full cure, researchers, medical professionals, and patients themselves have begun to encounter this problem. The incremental nature of progress, the exaggeratory nature of popular media, and the basic nature of humans in society amplify this problem and cause it to concentrate and calcify into the form of innovation fatigue.

Why I Fight

Yes, I know I said that I would continue with the Incremental Progress series with my next post. It is coming, probably over or near the weekend, as that seems to be my approximate unwritten schedule. But I would be remiss if I failed to mark today of all days somehow on here.


The twentieth of July, two thousand and seven. A date which I shall be reminded of for as long as I live. The date that I define as the abrupt end of my childhood and the beginning of my current identity. The date which is a strong contender for the absolute worst day of my life, and would win hands down save for the fact that I slipped out of consciousness due to overwhelming pain, and remained in a coma through the next day.

It is the day that is marked in my calendar simply as “Victory Day”, because on that day, I did two things. First, I beat the odds on what was, according to my doctors, a coin toss over whether I would live or die. Second, it was the day that I became a survivor, and swore to myself that I would keep surviving.

I was in enough pain and misery that day that I know I could very easily have given up. My respiratory system was already failing, and it would have been easy enough to simply stop giving the effort to keep breathing. It might even have been the less painful option. But as close as I already felt to the abyss, I decided I would go no further. I kept fighting, as I have kept fighting ever since.

I call this date Victory Day in my calendar, partly because of the victory that I won then, but also because each year, each annual observance, is another victory in itself. Each year still alive is a noteworthy triumph. I am still breathing, and while that may not mean much for people who have never had to endure as I have endured, it is certainly not nothing.

I know it’s not nothing, partly because this year I got a medal for surviving ten years. The medals are produced by one of the many multinational pharmaceutical corporations on which I depend for my continued existence, and date back a few decades, to when ten years was about the upper bound for life expectancy with this disease.

Getting a medal for surviving provokes a lot of bizarre feelings. Or perhaps I should say, it amplifies them, since it acts as a physical token of my annual Victory Day observances. This has always been a bittersweet occasion. It reminds me of what my life used to be like before the twentieth of July, two thousand and seven, and of the pain that I endured the day I nearly died, the pain that I work so diligently to avoid. In short, it reminds me why I fight.

Incremental Progress Part 1 – Fundraising Burnout

Today we’re trying something a little bit different. The conference I recently attended has given me lots of ideas along similar lines for things to write about, mostly centered around the notion of medical progress, which incidentally seems to have become a recurring theme on this blog. Based on several conversations I had at the conference, I know that this topic is important to a lot of people, and I have been told that I would be a good person to write about it.

Rather than waiting several weeks to finish one super-long post, and probably forgetting half of what I intended to write, I am planning to divide this topic into several sections. I don’t know whether this approach will prove better or worse, but after receiving much positive feedback on my writing in general and this blog specifically, it is something I am willing to try. It is my intention that these will be posted sequentially, though I reserve the right to mix that up if something pertinent crops up, or if I get sick of writing about the same topic. So, here goes.


“I’m feeling fundraising burnout,” announced one of the boys in our group, leaning into the rough circle into which our chairs had been drawn in the center of the conference room. “I’m tired of raising money and advocating for a cure that just isn’t coming. It’s been just around the corner since I was diagnosed, and it isn’t any closer.”

The nominal topic of our session, reserved for those aged 18-21 at the conference, was “Adulting 101”, though this was as much a placeholder name as anything. We were told that we were free to talk about anything we felt needed to be said, and in practice this anarchy led mostly to a prolonged ritual of denouncing parents, teachers, doctors, insurance, employers, lawyers, law enforcement, bureaucrats, younger siblings, older siblings, friends both former and current, and anyone else who wasn’t represented in the room. The psychologist attached to the 18-21 group tried to steer the discussion towards the traditional topics: hopes, fears, and avoiding the ever-looming specter of burnout.

For those unfamiliar with chronic diseases, burnout is pretty much exactly what it sounds like. When someone experiences burnout, their morale is broken. They can no longer muster the will to fight, to keep to the strict routines and discipline required to stay alive despite medical issues. Without a strong support system to fall back on while recovering, this can have immediate and deadly consequences, although in most cases the effects are not seen until several years later, when organs and nervous tissue begin to fail prematurely.

Burnout isn’t the same thing as surrender. Surrender happens all at once, whereas burnout can build over months or even years. People with burnout don’t have to be suicidal, or even of a mind towards self-harm, even if they are cognizant of the consequences of their choices. Burnout is not the commander striking their colors, but the soldiers themselves gradually refusing to follow tough orders, and possibly refusing to obey at all. Like the gradual loss of morale and organization by units in combat, burnout is considered in many respects to be inevitable to some degree or another.

Because of the inherent stigma attached to medical complications, burnout is always a topic of discussion at large gatherings, though often not one that people are apt to openly admit to. Fundraising burnout, on the other hand, proved fertile ground for an interesting discussion.

The popular conception of disabled or medically afflicted people, especially young people, as being human bastions of charity and compassion, has come under a great deal of critique recently (see The Fault in Our Stars, Speechless, et al). Despite this, it remains a popular trope.

For my part, I am ambivalent. There are definitely worse stereotypes than being too humanitarian, and, for what it is worth, there does seem to be some correlation between medical affliction and medical fundraising. I am inclined to believe, however, that attributing this correlation to an inherent or acquired surplus of human spirit in afflicted persons is a case of reverse causality. That is to say, disabled people aren’t more inclined to focus on charity, but rather charity is more inclined to focus on them.

Indeed, for many people, myself included, ostensibly charitable acts are often undertaken with selfish aims. Yes, there are plenty of incidental benefits to curing a disease, any disease, that happens to affect millions in addition to oneself. But mainly it is about erasing the pains which one feels on a daily basis.

Moreover, the fact that such charitable organizations will continue to advance progress largely regardless of the individual contributions of one or two afflicted persons, combined with the popular stereotype that disabled people ought naturally to actively support the charities that claim to represent them, has created, at least according to the consensus of our group, a feeling of profound guilt among those who fail to make a meaningful contribution. Given the scale on which these charities and research organizations operate, a “meaningful” contribution generally translates to an annual donation of tens or even hundreds of thousands of dollars, plus several hours of public appearances, constant queries to political representatives, and steadfast mental and spiritual commitment. Thus, those who fail to contribute on this scale are left with immense guilt for benefiting from research which they failed to support in any meaningful way. Paradoxically, these feelings are more rather than less likely to appear when giving a small contribution rather than none at all, because, after all, out of sight, out of mind.

“At least from a research point of view, it does make a difference,” interjected a second boy, a student working as a lab technician in one of the research centers in question. “If we’re in the lab testing ten samples for a reaction, that extra two hundred dollars can mean an eleventh sample gets tested.”

“Then why don’t we get told that?” the first boy countered. “If I knew my money was going to buy an extra Petri dish in a lab, I might be more motivated than by just throwing my money towards a cure that never gets any closer.”

The student threw up his hands in resignation. “Because scientists suck at marketing.”

“It’s to try and appeal to the masses,” someone else added, the cynicism in his tone palpable. “Most people are dumb and won’t understand what that means. They get motivated by ‘finding the cure’, not paying for toilet paper in some lab.”

Everyone in that room admitted that they had felt some degree of guilt over not fundraising more, myself included. This seemed to remain true regardless of whether the person in question was themselves disabled or merely related to one who was, or how much they had done for ‘the cause’ in recent memory. The fact that charity marketing did so much to emphasize how even minor contributions were relevant to saving lives only increased these feelings. The terms “survivor’s guilt” and “post-traumatic stress disorder” got tossed around a lot.

The consensus was that rather than acting as a catalyst for further action, these feelings were more likely to lead to a sense of hopelessness about the future, which is amplified by the continuously disappointing news on the research front. Progress continues, certainly, and this important point of order was brought up repeatedly; but never a cure. Despite walking, cycling, fundraising, hoping, and praying for a cure, none has materialized, and none seems particularly closer than a decade ago.

This sense of hopelessness has led, naturally, to disengagement and resentment, which in turn lead to a disinclination to continue fundraising efforts. After all, if there’s not going to be visible progress either way, why waste the time and money? This is, of course, a self-fulfilling prophecy, since less money and engagement leads to less research, which means less progress, and so forth. Furthermore, if patients themselves, who are seen, rightly or wrongly, as the public face of, and therefore the most important advocates for, said organizations, seem to be disinterested, what motivation is there for those with no direct connection to the disease to care? Why should wealthy donors allocate large but still limited donations to a charity that no one seems interested in? Why should politicians bother keeping up research funding, or worse, funding for the medical care itself?

Despite having just discussed at length the dangers of fundraising burnout, I have yet to find a decent resolution for it. The psychologist on hand raised the possibility of non-financial contributions, such as volunteering and engaging in clinical trials, or bypassing charity research and its false advertising entirely, and contributing to more direct initiatives to improve quality of life, such as support groups, patient advocacy, and the like. Although decent ideas on paper, none of these really caught the imagination of the group. The benefit which is created from being present and offering solidarity during support sessions, while certainly real, isn’t quite as tangible as donating a certain number of thousands of dollars to charity, nor is it as publicly valued and socially rewarded.

It seems that fundraising, and the psychological complexities that come with it, are an inevitable part of how research, and hence progress, happens in our society. This is unfortunate, because it adds an additional stressor to patients, who may feel as though the future of the world, in addition to their own future, is resting on their ability to part others from their money. This obsession, even if it does produce short term results, cannot be healthy, and the consensus seems to be that it isn’t. However, this seems to be part of the price of progress nowadays.

This is the first part of a multi-part commentary on patient perspective (specifically, my perspective) on the fundraising and research cycle, and more specifically how the larger cause of trying to cure diseases fits in with a more individual perspective, which I have started writing as a result of a conference I attended recently. Additional segments will be posted at a later date.

Conference Pro-Tips

So every year, my family comes down to Disney for a major conference related to one of my many diagnoses. Over the years I have learned many tips and tricks that have proven invaluable for conferences. Here are a few highlights:

1) Invest in a good lanyard
Most conferences these days use name badges for identification purposes. Although most places provide basic cardholder-on-an-itchy-string accommodations that work in a pinch, for longer conferences especially, a proper lanyard with a decent holder is more than worth the upfront investment. I recommend one with plenty of space for decoration and customization, and lots of pockets to hold things like special event tickets, and all the business cards that inevitably accumulate.

As an added bonus, if you plan to spend most of your time at the conference site, you can quite easily slide some cash and a credit card into your holder, and do away with carrying a separate wallet altogether. This is especially nice for large conference centers that require a great deal of walking.

Sidenote: Many security-minded people will advise you to take off your conference lanyard when venturing offsite, to avoid looking like an easy mark to potential ne’er-do-wells, so using a lanyard as a neck-bound wallet may have some drawbacks if you plan to come and go.

2) Dress for walking
This is one that gets passed around a lot, so it isn’t exactly a pro-tip, but it still bears repeating. Modern conferences require a lot of walking. Depending on the size of the conference center, you can expect the distance to be measured in tens of kilometers per day. While this is spread out over a whole day, it is still a substantial amount of walking, especially for people who aren’t used to being on their feet all day. Dressing for the occasion with comfortable shoes and clothing will help reduce the strain, and advance planning can cut extra walking out of the schedule.

There are two main schools of thought on packing day bags for conferences. One is to pack as little as possible, so that the weight you have to carry is as small as possible. The other is to carry everything you think you might need, so as to avoid detours back to your place of lodging to pick up needed items. There are costs and benefits to each strategy, and the choice depends primarily on whether you are more comfortable walking long distances or carrying a heavier load.

Whichever strategy you choose, it is still a good idea to find a good, reliable, and comfortable bag which you can easily carry with you. This will ensure that you have plenty of space for all the trinkets which you will inevitably accumulate during the conference. I usually recommend a nice backpack with separate pockets and a water bottle pouch, which will also help you stay hydrated.

3) Be cognizant of nutrition
I’m not going to straight up prescribe a certain number of meals or carbohydrates which you need to fit into your conference day. The exact number will depend on your individual health, metabolism, how much you’re doing, and your normal diet. I will say that you should at least be cognizant of your nutritional needs, especially if you are being more active than usual.

4) Download all the apps
Most major conferences use some kind of mobile schedule platform, in addition to hard copy schedules. This can help you sort through sessions and panels, and often will let you set reminders and get directions. If the host location has an app, go ahead and download that as well. In fact, go ahead and download the app for the local tourism authority.

Go ahead and grant them full permission for notifications, and location data if you’re comfortable. This way, you will have the most up-to-date information not only about your conference, but about anything else happening in the area that might be of interest.

5) Have an Objective
For attendees, conferences exist in this strange space somewhere between leisure and business. There’s lots of fun to be had in traveling, staying in a hotel, meeting new people, and possibly exploring a new city. And conference activities themselves often have something of a celebratory air to them. Even for work-oriented conferences, sponsors want to encourage attendees to take away a hopeful, upbeat attitude about their product and the future in general.

At the same time, conferences with sessions and panels tend to home in on educating and edifying attendees. Modern conferences are, by their very nature, a hub for in-person networking, both professional and personal. And sponsors are often quite keen to ensure that they fit in their sales pitch. So conferences are often as much work as they are play.

Having an objective set beforehand does two things. First of all, it clarifies the overall goal of attending, reinforcing the mindset that you want to keep. Second, it helps mitigate the effect of decision fatigue, that is, the gradual degradation of decision-making capacity from having to make too many decisions during a short time. Knowing that you’re here for business rather than leisure will make it easier to make snap judgments about, say, where to eat, which sessions to attend, and how late to stay out.

Objectives don’t have to be quite as targeted as goals, which generally have to be both specific and measurable. Objectives can be more idealistic, like saying that you intend to have fun, or make friends, or hone your communication skills. Objectives aren’t for nitty gritty planning, but to orient your general mindset and streamline the dozens of minute decisions that you will inevitably encounter. Having an overarching objective means that you don’t have to spend nearly as much time debating the relative merits of whether to go with the generic chain burger restaurant, or the seedy but well-recommended local restaurant. If your objective is to make career progress, stick with the former. If your objective is to have an interesting travel experience, go with the latter.

Kindred Spirits

This weekend I spent my time at a local barbecue cooking competition, volunteering with a charity that represents people who suffer from one of the many chronic diseases and disabilities. This came about because one of the competitors’ daughters was recently diagnosed with the same disease as I have, and he wanted to invite someone to advocate and educate. What’s interesting is that his daughter is approximately the same age as I was when I was first diagnosed.

Being diagnosed at that particular age, while not unheard of, is nevertheless uncommon enough that it gave me momentary pause. In preparing to meet her, my mind this week has been on what I ought to tell her, and moreover, what I wish I could have told a younger version of myself when I was diagnosed. She was, as it turned out, not greatly interested in discussing health with me, which I suppose is fair enough. Even so, I have been thinking about this topic enough that it has more or less solidified into the following post:

I could tell you it gets easier, except I would be lying. It doesn’t get easier. People might tell you that it gets easier to manage, which is sort of true, inasmuch as practice and experience make the day-to-day stuff less immediately challenging, the same as with anything. And of course, technology makes things better and easier. Not to be the old man yelling at the whippersnappers about how good they have it nowadays, but it is true that in the ten years I’ve had to deal with it, things have gotten both better and easier.

The important thing here is that over the course of years, the actual difficulty level doesn’t really change. This is depressing and frustrating, but it’s also not that bad in the big scheme of things. There are a lot of chronic diseases where things only get worse with time, and that’s not really the case with our disease. We have the sword of Damocles hanging over our heads threatening us if we mess up, but if we stay vigilant, and get nothing wrong, we can postpone that confrontation basically forever.

It means that you can get to a point where you can still do most things that ordinary people can do. It’s more difficult, and you’re never not going to have to be paying attention to your health in the background. That’s never going to change. You’re going to be starting from an unfair disadvantage, and you’re going to have to work harder to catch up. Along the way you will inevitably fail (it’s nothing personal; just a matter of odds), and your failure will be all the more spectacular and set you further back than what’s considered normal. It’s not fair. But you can still do it, despite the setbacks. In fact, for most of the important things in life, it’s not really optional.

Whatever caused this, whatever you think of it, whatever happens next, as of now, you are different. You are special. That’s neither a compliment, nor an insult. That’s a biological, medically-verified, legally-recognized fact. People around you will inevitably try to deny this, telling you that your needs aren’t any different from those around you, or that you shouldn’t act or feel or be different. Some of these people will mean well but be misguided; others will be looking for a way to hurt or distract you.

If you’re like me, and most people, at some point you too will probably try to tell yourself this. It is, I have been told, an essential part of adolescence. Futile though it may be to say this now, and believe me when I say that I mean it in the nicest way possible, I must declare: whoever these sentiments come from, whatever their intentions, they are straight-up wrong. You are different and special. You can choose how to react to that, and you can choose how to portray it, but you cannot change the basic fact. That you are different is not any reflection on you or anything you have done, and accepting this is not any sort of concession or confession; on the contrary, it reflects maturity and understanding.

It follows that your experience and your path may not be the “normal” one. This is neither good nor bad, but simply reflects the special circumstances which exist as a matter of fact. The fact that everything is that much harder may mean that you have to pick and choose your battles, or get extra help on some things, even if those things seem normal and easy for other people. This is to be expected, and is nothing to hide or be ashamed of. People around you may not understand this, and may give you a hard time. Just remember, as I was told when I was in your shoes: The people who matter don’t mind, and the people who mind don’t matter.