The Moral Hazard of Hope


This post is part of the series: The Debriefing.


Suppose that five years from today, you will receive an extremely large windfall. The exact number isn't important; let's just say it's large enough that you'll never have to budget again. Not technically infinite, because that would break everything, but for the purposes of one person, effectively undepletable. Let's also assume that the money becomes yours in such a way that it can't be taxed away or swindled out of you. This is also an alternate universe where inheritance and estates don't exist, so there's no scheming among family, and no point in considering them in your plans. Just roll with it.

No one else knows about it, so you can’t borrow against it, nor is anyone going to treat you differently until you have the money. You still have to be alive in five years to collect and enjoy your fortune. Freak accidents can still happen, and you can still go bankrupt in the interim, or get thrown in prison, or whatever, but as long as you’re around to cash the check five years from today, you’re in the money.

How would this change your behavior in the interim? How would your priorities change from what they are?

Well, first of all, you're probably not going to invest in retirement, or in long-term savings in general. After all, you won't need to. In fact, further saving would be foolish. You're not going to need that extra drop in the bucket, which means saving it would be wasting it. You are legitimately better off, economically speaking, living the high life and enjoying yourself as much as possible, short of putting yourself in financial jeopardy severe enough to threaten your ability to collect your money.

If this seems insane, it's important to remember that your lifestyle and enjoyment are quantifiable economic factors (the keyword is "utility") that weigh against the (relative and ultimately arbitrary) value of your money. This is the whole reason why people buy things they don't strictly need to survive, and why rich people spend more money than poor people, despite not being physiologically different. Because any money you save is basically worthless to future-you, and your happiness still has value, buying happiness, expensive and temporary though it may be, is always the economically rational choice.
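To make the arithmetic behind that intuition concrete, here is a minimal sketch in Python; the income, the windfall, and the logarithmic utility function are all made-up illustrations, not a claim about any real economy:

```python
import math

WINDFALL = 1_000_000_000   # the hypothetical "basically undepletable" sum
ANNUAL_INCOME = 50_000     # hypothetical income during the five-year wait
YEARS = 5

def utility(consumption):
    # Diminishing marginal utility: each additional dollar is worth less.
    return math.log(consumption)

def total_utility(annual_spend):
    # Utility enjoyed while waiting, plus the utility of final wealth.
    savings = (ANNUAL_INCOME - annual_spend) * YEARS
    return YEARS * utility(annual_spend) + utility(WINDFALL + savings)

thrifty = total_utility(annual_spend=30_000)  # save 20k per year
lavish = total_utility(annual_spend=50_000)   # save nothing
print(f"thrifty: {thrifty:.3f}  lavish: {lavish:.3f}")
# The 100k of savings barely nudges utility(WINDFALL + savings), while the
# extra consumption raises every year's utility, so "lavish" comes out ahead.
```

Under these toy assumptions, the saved money changes the final term by roughly log(1.0001), which is a rounding error; that is the whole point.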

This is tied to an important economic concept known as moral hazard, a condition in which the normal risks and costs involved in a decision fail to apply to the decision-maker, encouraging riskier behavior. I'm stretching the idea a little bit here, since it usually refers to more direct situations. For example, if my parents give me a credit card "for emergencies" and pay the bill, and I know I'm never going to see that bill, because my parents care more about our family's credit score than about most anything I would think to buy, then that's a moral hazard. I have very little incentive to do the "right" thing, and a lot of incentive to do whatever I please.

There are examples in macroeconomics as well. For example, many say that large corporations in the United States are caught in a moral hazard problem: they know that they are "too big to fail" and will be bailed out by the government if they get into serious trouble. As a result, these companies may be encouraged to make riskier decisions, knowing that any profits will be massive, and any losses will be passed along.

In any case, the idea is there. When the consequences of a risky decision become decoupled from the reward, it can be no surprise when rational actors make riskier decisions. If you know that in five years you're going to be basically immune to any hardship, you're probably not going to prepare for the long term.

Now let’s take a different example. Suppose you’re rushed to the hospital after a heart attack, and diagnosed with a heart condition. The condition is minor for now, but could get worse without treatment, and will get worse as you age regardless.

The bad news is, in order to avoid having more heart attacks, and possible secondary circulatory and organ problems, you’re going to need to follow a very strict regimen, including a draconian diet, a daily exercise routine, and a series of regular injections and blood tests.

The good news, your doctor informs you, is that the scientists, who have been tucked away in their labs and getting millions in yearly funding, are closing in on a cure. In fact, there’s already a new drug that’s worked really well in mice. A researcher giving a talk at a major conference recently showed a slide of a timeline that estimated FDA approval in no more than five years. Once you’re cured, assuming everything works as advertised, you won’t have to go through the laborious process of treatment.

The cure drug won't help if you die of a heart attack before then, and it won't fix any problems with your other organs if your heart gets bad enough that it can't supply them with blood, but otherwise it will be a complete cure, as though you were never diagnosed in the first place. The nurse discharging you tells you that, since most organ failure doesn't appear until patients have had the condition for at least a decade, so long as you can avoid dying for half that long, you'll be fine.

So, how are you going to treat this new chronic and life threatening disease? Maybe you will be the diligent, model patient, always deferring to the most conservative and risk-averse advice in the medical literature, certainly hopeful for a cure, but not willing to bet your life on a grad student's hypothesis. Or maybe, knowing nothing else on the subject, you will trust what your doctor told you, and your first impression of the disease, getting by with as little invasive treatment as you can manage while still avoiding death and avoiding being called out by your medical team as "noncompliant" (referred to in chronic illness circles in hushed tones as "the n-word").

If the cure does come in five years, as happens only in stories and fantasies, then either way, you'll be set. The second version of you might be a bit happier from having more fully sucked the marrow out of life. It's also possible that the second version would have had to endure another (probably non-fatal) heart attack or two, and dealt with more day to day symptoms like fatigue, pains, and poor circulation. But you never would have really lost anything for being the n-word.

On the other hand, if by the time five years have elapsed the drug hasn't gotten approval, or quite possibly hasn't even gotten close after the researchers discovered that curing a disease in mice didn't also cure it in humans, then the differences between the two versions of you are going to start to compound. It may not even be noticeable after five years. But after ten, twenty, thirty years, the second version of you is going to be worse for wear. You might not be dead. But there's a much higher chance you're going to have had several more heart attacks, and possibly other problems as well.

This is a case of moral hazard, plain and simple, and it does appear in the attitudes of patients with chronic conditions that require constant treatment. The fact that, in this case, the perception of a lack of risk and consequences is a complete fantasy is not relevant. All risk analyses depend on the information that is given and available, not on whatever the actual facts may be. We know that the patient’s decision is ultimately misguided because we know the information they are being given is false, or at least, misleading, and because our detached perspective allows us to take a dispassionate view of the situation.

The patient does not have this information or perspective. In all probability, they are starting out scared and confused, and want nothing more than to return to their previous normal life with as few interruptions as possible. The information and advice they were given, from a medical team that they trust, and possibly have no practical way of fact checking, has led them to believe that they do not particularly need to be strict about their new regimen, because there will not be time for long term consequences to catch up.

The medical team may earnestly believe this. It is the same problem one level up; the only difference is, their information comes from pharmaceutical manufacturers, who have a marketing interest in keeping patients and doctors optimistic about upcoming products, and researchers, who may be unfamiliar with the hurdles in getting a breakthrough from the early lab discoveries to a consumer-available product, and whose funding is dependent on drumming up public support through hype.

The patient is also complicit in this system that lies to them. Nobody wants to be told that their condition is incurable, and that they will be chronically sick until they die. No one wants to hear that their new diagnosis will either cause them to die early, or let them live long enough for their organs to fail, because even the most rigid medical plan, with the tools available, simply cannot completely mimic the human body's natural functions. Indeed, it can be argued that telling a patient they will still suffer long term complications, whether in ten, twenty, or thirty years, almost regardless of their actions today, will have much the same effect as telling them that they will be healthy regardless.

Given the choice between two extremes, optimism is obviously the better policy. But this policy does have a tradeoff. It creates a moral hazard of hope. Ideally, we would be able to convey an optimistic perspective that also maintains an accurate view of the medical prognosis, and balances the need for bedside manner with incentivizing patients to take the best possible care of themselves. Obviously this is not an easy balance to strike, and the balance will vary from patient to patient. The happy-go-lucky might need to be brought down a peg or two with a reality check, while the nihilistic might need a spoonful of sugar to help the medicine go down. Finding this middle ground is not a task to be accomplished by a practitioner at a single visit, but a process to be achieved over the entire course of treatment, ideally with a diverse and well experienced team including mental health specialists.

In an effort to finish on a positive note, I will point out that this is already happening, or at least, is already starting to happen. As interdisciplinary medicine gains traction, patient mental health becomes more of a focus, and as patients with chronic conditions begin to live longer, more hospitals and practices are working harder to ensure that a positive and constructive mindset for self-care is a priority, alongside educating patients on the actual logistics of self-care. Support is easier to find than ever, especially with organized patient conferences and events. This problem, much like the conditions that cause it, is chronic, but it is manageable with effort.

 

The Professional Sick Person

This last week, I spent a bit of time keeping up my title as a professional sick person. I achieved this, luckily, without being in any serious danger, because the cause of my temporary indisposition was the series of vaccines I received. I tend to be especially prone to the minor side effects of vaccination (the symptoms that make one feel vaguely under the weather without making one feel seriously at risk of death), which isn't surprising given my immune pathology.

Enduring a couple of days at most of foggy-headedness, low grade fevers and chills, and frustrating but bearable aches is, while still unpleasant, at least better than most any other illness I have dealt with in the last decade.

What struck me was being told, contrary to my own experience and subsequent expectations, that a couple of days is in itself an average amount of time for a "normal" person to recover fully from an ordinary illness. That, for someone with a healthy, fully functioning immune system, it is entirely possible to get through the whole cycle of infection for a garden variety cold in a weekend.

This was rather shocking news to me. I had always assumed that when the protagonist of some television show called in sick for a single day, and returned to work or school the next, this was just one of those idiosyncrasies of the TV universe, the same way characters always wear designer brands and are perfectly made up.

I had always assumed that in reality, of course people who caught a cold would take at least a week to recover, since it usually takes me closer to two, assuming it doesn’t develop into some more severe infection. Of course people who have the flu spend between three and five weeks at home (still optimistic, if you’re asking me), that is, if they can get by without having to be hospitalized.

This probably shouldn't surprise me. I know, consciously, that I spend more time confined to quarantine by illness than almost anyone I know, and certainly more than anyone I'm friends with for reasons other than sharing a medical diagnosis or hospital ward. Still, it's easy to forget this. It's extremely easy to assume, as I find myself often doing even without thinking, that barring obvious differences, other people are fundamentally not unlike myself, and share most of my perspectives, values, and challenges. Even when I am able to avoid doing this consciously, I find that my unconscious mind often does it for me.

It's jarring to be suddenly reminded, then, of exactly how much my health truly does, and I don't use this phrase lightly, screw me over; apparently it does so, so often and so thoroughly, that I have largely ceased to notice, except when it creates a stark contrast with my peers.

Feeling slightly terrible as a side effect of getting vaccines has, on an intellectual and intuitive level, ceased to be an annoyance in itself. It is only problematic insofar as it prevents me from going about my business otherwise: my mental fog makes writing difficult, my fevers and chills compel me to swaddle my shivering body to offset its failure to maintain temperature, and my omnipresent myalgia gives me a constant nagging reminder of the frailty of my mortal coil, but these are mere physical inconveniences. Of course, this does not negate the direct physical impact of my current indisposition; it merely contextualizes it.

Having long ago grown used to the mental feeling of illness, and without feeling poor enough physically to garner any genuine concern for serious danger to my long term health and survival, the fact that I am sick rather than well is reduced to a mere footnote: a status. In the day to day story that I narrate to myself and others, the symptoms I have described are mere observations of the setting, without any lasting impact on the plot, nor on the essence of the story itself.

I often call myself a professional sick person, a phrase which I learnt from John Green via Hazel Grace Lancaster. The more time I spend thinking about my health, the more I find this metaphor apt. After all, in the past decade of being enrolled in and nominally attending public school, I have spent more time in hospitals than in a classroom. My health occupies a majority of my time, and the consequences for ignoring it are both immediate and dire. I regard my health as a fundamental part of my routine and identity, the way most people do their jobs. Perhaps most compelling: my focus on it, like that of a professional on their trade, has warped my perspective.

We all know the story of the IT expert incapable of explaining things in human terms, or of the engineer so preoccupied with interesting solutions as to be blind to the obvious ones, or of the artist unable to accept a design that is less than perfect. In my case, I have spent so much time dealing with my own medical situation that it is exceedingly difficult to understand the relative simplicity of others'.

What is a Home?

I know that I'm getting close to where I want to be when the GPS stops naming roads. That's fine. These roads don't have names, or even a planned logic to them; they merely exist relative to other things. Out here, the roads are defined by where they go, rather than places being defined by addresses.

After a while I begin to recognize familiar landmarks. Like the roads, these landmarks don't have names, but rather refer to some event in the past. First we drive through the small hamlet where I was strong-armed into my first driving lesson. We pass the spot where my grandmother stopped the golf cart by the side of the road to point out the lavender honeysuckle to far younger versions of myself and my younger brother, and we spent a half hour sampling the taste of the flowers. Next we pass under the tree that my cousin was looking up at nervously when my father grabbed him by the shoulders and screamed that he was under attack by Drop Bears, causing my cousin to quite nearly soil himself.

I have never lived in a single house continuously for more than about eight years. I grew up traveling, an outsider wherever I went, and to me the notion of a single home country, let alone a single house for a home, is as foreign as it is incomprehensible. So is the concept of living within driving distance of most of one’s relatives, for that matter.

To me, home has always been a utilitarian rather than moral designation. Home is where I sleep for free, where my things that don't fit in my suitcase go, and where the bills get forwarded. Home is the place where I can take as long as I want in the bathroom, rearrange the furniture to my arbitrary personal preferences, and invite people over without asking, but that is all. Anywhere these criteria are met can be home to me, with other factors such as ownership, geographic location, proximity to relatives, or points of personal history being irrelevant. I can appreciate the logistical value of all of these things, but attaching much more importance to them seems strange.

Yet even as I write this I find myself challenging my points. Walking around my grandfather’s farmhouse, which is the closest thing I have to a consistent home, I am reminded of images of myself from a different time, especially of myself from a time before I was consciously able to make choices about who I am. It’s difficult to think of myself that long ago in terms of me, and my story, and much easier to think of myself in terms of the other objects that were also present.

My grandparents used to run a preschool from their house, and the front room is still stocked with toys and books from that era. Many of the decorations have remained unchanged from when my grandmother ran the place. The doors and cabinets are all painted in bright pastel colors. In my mind, these toys were as much my own as any that stayed at home while we traveled. Each of these toys has wrapped up in it the plot lines from several hundred different games between myself and whoever else I could rope into playing with me.

Against the wall is a height chart listing my, my brother’s, and my cousins’ heights since as early as we could stand. For most of my childhood this was the official scale for determining who was tallest in the ever raging battle for height supremacy, and I remember feeling ready to burst with pride the first time I was verified as tallest. I am tall enough now that I have outgrown the tallest measuring point. I am indisputably the tallest in the family. And yet I still feel some strange compulsion to measure myself there, beyond the mere curiosity that is aroused every time I see a height scale in a doctor’s office.

This place isn’t my home, not by a long shot. In many respects, it meets fewer of my utilitarian criteria than a given hotel. It is the closest I have ever felt to understanding the cultural phenomenon of Home, and yet it is still as foreign as anywhere else. If one’s home is tied to one’s childhood, as both my own observations and those of others I have read seem to indicate, then I will probably never have a home. This might be a sad realization, if I knew any different.

I have often been accused of holding a worldview that does not include room for certain “human” elements. This accusation, as far as I can tell, is probably on point, though somewhat misleading. It is not out of malice nor antipathy towards these elements that I do not place value on concepts such as “home”, “patriotism”, or, for that matter “family”. It is because they are foreign, and because from my viewpoint as an outsider, I genuinely cannot see their value.

I can understand and recognize the utilitarian value; I recognize the importance of having a place where mail can be delivered and oversized objects can be stored; I can understand the preference for ensuring that one's country of residence is secure and prosperous; and I can see the value of a close support network, and how one's close relatives might easily become among one's closest friends. But inasmuch as these things are supposed to have inherent value beyond their utilitarian worth, I cannot see it.

It is probably, I am told, a result of my relatively unusual life trajectory, which has served to isolate me from most cultural touchstones. I never had a home or homeland because we lived abroad and moved around when I was young. I fail to grasp the value of family because I have never lived in close proximity to extended relatives to the point of them becoming friends, and my illness and disability have further limited me from experiencing many of the cultural touchstones I might otherwise share with family.

It might sound like I am lamenting this fact. Perhaps I would be, if I knew what it was that I am allegedly missing. In reality, I only lament the fact that I cannot understand these things which seem to come naturally to others. That I lack a capital-H Home, or some deeper connection to extended family or country, is neither sad nor happy, but merely a fact of my existence.

Incremental Progress Part 4 – Towards the Shining Future

I have spent the last three parts of this series bemoaning various aspects of the cycle of medical progress for patients enduring chronic health issues. At this point, I feel it is only fair that I highlight some of the brighter spots.

I have long come to accept that human progress is, with the exception of the occasional major breakthrough, incremental in nature; a reorganization here paves the way for a streamlining there, which unlocks the capacity for a minor tweak here and there, and so on and so forth. However, while this does help adjust one’s day to day expectations from what is shown in popular media to something more realistic, it also risks minimizing the progress that is made over time.

To return to an example from part 2 that everyone should be familiar with, consider the progress being made on cancer. Here is a chart detailing, over a ten year period, the rate of FDA approvals for new treatments, which is a decent, if oversimplified, metric for how a given patient's options have increased, and hence how specific and targeted their treatment can be (which helps minimize disruption to quality of life), alongside the overall average 5-year survival rate.

Does this progress mean that cancer is cured? No, not even close. Is it close to being cured? Not particularly.

It's important to note that even as these numbers tick up, we're not intrinsically closer to a "cure". Coronaviruses, which cause a good share of common colds, have a mortality rate pretty darn close to zero, at least in the developed world, and that number gets a lot closer to zero if we ignore "novel" coronaviruses like SARS and MERS, and focus only on the rare person who has died as a direct result of the common cold. Yet I don't think anyone would call the common cold cured. Coronaviruses, like cancer, aren't cured, and there's a reasonable suspicion on the part of many that they aren't really curable in the sense that we'd like.

"Wait," I hear you thinking, "I thought you were going to talk about bright spots." Well, yes: while it's true that progress on a full cure is inconclusive at best, material progress is still being made every day, for both colds and cancer. While neither is at present curable, both are increasingly treatable, and this is where the real progress is happening. Better treatment, not cures, is where the media buzz comes from, and why I can attend a conference about my disease year after year, hearing all the horror stories of my comrades, and still walk away feeling optimistic about the future.

So, what am I optimistic about this time around, even when I know that progress is so slow coming? Well, for starters, there’s life expectancy. I’ve mentioned a few different times here that my projected lifespan is significantly shorter than the statistical average for someone of my lifestyle, medical issues excluded. While this is still true, this is becoming less true. The technology which is used for my life support is finally reaching a level of precision, in both measurement and dosing, where it can be said to genuinely mimic natural bodily functions instead of merely being an indefinite stopgap.

To take a specific example, new infusion mechanisms now allow dosing precision down to the ten-thousandth of a milliliter. For reference, a typical raindrop, a millimeter or two across, holds only a few thousandths of a milliliter. Given that a single thousandth of a milliliter in either direction at the wrong time can be the difference between being a productive member of society and being dead, this is a welcome improvement.
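For a sense of what that level of precision buys, here is a toy calculation (hypothetical doses and step sizes, not any real pump's specification) of how far off a delivered dose can be when a pump can only move in fixed increments:

```python
def delivery_error(dose_ml: float, step_ml: float) -> float:
    # Gap between the requested dose and the nearest dose a pump limited
    # to multiples of step_ml can actually deliver.
    delivered = round(dose_ml / step_ml) * step_ml
    return abs(dose_ml - delivered)

dose = 0.0125  # a hypothetical very small dose, in milliliters

for step in (0.001, 0.0001):
    # Worst case over all possible doses is half a step.
    print(f"step {step} mL: up to {step / 2:.5f} mL off "
          f"(this dose: {delivery_error(dose, step):.5f} mL off)")
```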

Such improvements in delivery mechanisms have also enabled innovation on the drugs themselves, by making more targeted treatments with a smaller window for error practical for a wider audience, which in turn makes them more commercially viable. Better drugs and dosing have likewise raised the bar for infusion cannulas, and at the conference, a new round of cannulas was already being hyped as the next big breakthrough to hit the market imminently.

In the last part I mentioned, though did not elaborate at length on, the appearance of AI-controlled artificial organs being built using DIY processes. These systems now exist, not only in laboratories, but in homes, offices, and schools, quietly taking in more data than the human mind can process, and making decisions with a level of precision and speed that humans cannot dream of achieving. We are equipping humans as cyborgs with fully autonomous robotic parts to take over functions they have lost to disease. If this does not excite you as a sure sign of the brave new future that awaits all of us, then frankly I am not sure what I can say to impress you.

Like other improvements explored here, this development isn’t so much a breakthrough as it is a culmination. After all, all of the included hardware in these systems has existed for decades. The computer algorithms are not particularly different from the calculations made daily by humans, except that they contain slightly more data and slightly fewer heuristic guesses, and can execute commands faster and more precisely than humans. The algorithms are simple enough that they can be run on a cell phone, and have an effectiveness on par with any other system in existence.
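As a rough illustration of how simple the core decision can be, here is a minimal sketch of the kind of closed-loop dosing logic such a system might run every few minutes; the variable names, constants, and the proportional-correction rule are hypothetical stand-ins, not the algorithm of any actual project or device:

```python
from dataclasses import dataclass

@dataclass
class LoopSettings:
    target: float             # desired sensor reading, in hypothetical units
    correction_factor: float  # how far one dose unit moves the reading
    max_dose: float           # hard safety cap on any single dose

def recommend_dose(reading: float, pending_dose: float, settings: LoopSettings) -> float:
    # Proportional correction: dose enough to close the gap to target,
    # minus whatever has been delivered but has not yet taken effect,
    # never exceeding the safety cap and never dosing at or below target.
    excess = reading - settings.target
    if excess <= 0:
        return 0.0
    needed = excess / settings.correction_factor - pending_dose
    return max(0.0, min(needed, settings.max_dose))

# The same arithmetic a patient might do in their head, just run
# automatically and tirelessly on fresh sensor data.
settings = LoopSettings(target=100.0, correction_factor=50.0, max_dose=2.0)
print(recommend_dose(reading=180.0, pending_dose=0.5, settings=settings))  # 1.1
```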

These DIY initiatives have already caused shockwaves throughout the medical device industry, for both the companies themselves, and the regulators that were previously taking their sweet time in approving new technologies, acting as a catalyst for a renewed push for commercial innovation. But deeper than this, a far greater change is also taking root: a revolution not so much in technology or application, but in thought.

If my memory and math are on point, this has been my eighth year attending this particular conference, out of ten years dealing with the disease that is its topic, among other diagnoses. While neither of these stretches is long enough to have proper capital-H historical context, in the span of a single lifetime, especially for a relatively young person such as myself, I do believe that ten or even eight years is long enough to reflect upon in earnest.

Since I started attending this conference, but especially within the past three years, I have witnessed, and been the subject of, a shift in tone and demeanor. When I first arrived, the tone at this conference seemed to be, as one might expect, primarily one of commiseration. Yes, there was solidarity, and all the positive emotion that comes from being with people like oneself, but this was, at best, a bittersweet feeling. People were glad to have met each other, but nevertheless resentful to have been put in the unenviable circumstances that dictated their meeting.

More recently, however, I have seen and felt more and more an optimism accompanying these meetings. Perhaps it is the consistently record-breaking attendance that demonstrates, if nothing else, that we stand united against the common threat to our lives, and against the political and corporate forces that would seek to hold back our progress towards being normal, fully functioning humans. Perhaps it is merely the promise of free trade show goodies and meals catered to a medically restricted diet. But I think it is something different.

While a full cure, of the sort that would allow me and my comrades to leave the life support at home, serve in the military, and the like, is still far off, today more than ever before, the future looks, if not bright, then at least survivable.

In other areas of research, one of the main genetic research efforts, which has maintained a presence at the conference, is now closing in on the genetic and environmental triggers behind the elusive autoimmune reaction known to cause the disease, and on various methods to prevent and reverse it. Serious talk of future gene therapies, the kind of science fiction that has traditionally been the stuff of comic books and film, is already ongoing. It is a strange and exciting thing to finish an episode of a science-fiction drama television series focused on near-future medical technology (and how evil minds exploit it) in my hotel room, only to walk into the conference room to see posters advertising clinical trial sign-ups and planned product releases.

It is difficult to be so optimistic in the face of incurable illness. It is even more difficult to remain optimistic after many years of only incremental progress. But pessimism too has its price. It is not the same emotional toll as the disappointment which naive expectations of an imminent cure are apt to bring; rather it is an opportunity cost. It is the cost of missing out on adventures, on missing major life milestones, on being conservative rather than opportunistic.

Much of this pessimism, especially in the past, has been inspired and cultivated by doctors themselves. In a way, this makes sense. No doctor in their right mind is going to say, "Yes, you should definitely take your savings and go on that cliff diving excursion in New Zealand." Medicine is, by its very nature, conservative and risk averse. Much like the scientist, a doctor will avoid saying anything until it has been tested and proven beyond a shadow of a doubt. As noted previously, this is extremely effective in achieving specific, consistent, and above all, safe treatment results. But what about when the situation being treated is so all-encompassing in a patient's life as to render specificity and consistency impossible?

Historically, the answer has been to impose restrictions on patients' lifestyles. If laboratory conditions don't align with real life for patients, then we'll simply change the patients. This approach can work, at least for a while. But patients are people, and people are messy. Moreover, when patients include children and adolescents, who, for better or worse, are generally inclined to pursue short term comfort over vague notions of future health, they will rebel. Eventually, trading ten years at the end of one's life for the ability to live the remainder more comfortably starts to seem like the more balanced proposition.

The concept of such a tradeoff is inevitably controversial. I personally take no particular position on it, other than that it is a true tragedy of the highest proportion that anyone should be forced into such a situation. With that firmly stated, many of the recent breakthroughs, particularly in new delivery mechanisms and patient comfort, and especially in the rapidly growing DIY movement, have focused on this tradeoff. The thinking has shifted from a "top-down" approach of finding a full cure, to a more grassroots approach of making life more livable now, and making inroads into future scientific progress at a later date. It is no surprise that many of the groups dominating this new push have either been grassroots nonprofits or, where they have been commercial, Silicon Valley-style, engineer-founded startups.

This in itself is already a fairly appreciable and innovative thesis on modern progress, yet one I think has been tossed around enough to be reasonably defensible. But I will go a step further. I submit that much of the optimism and positivity, the empowerment and liberation, which has been the consistent takeaway of myself and other authors from this and similar conferences, and which I believe has become more intensely palpable in recent years than when I began attending, has been the result of this same shift in thinking.

Instead of competing against each other and shaming each other over inevitable bad blood test results, as was my primary complaint during conferences past, the new spirit is one of camaraderie and solidarity. It is now increasingly understood at such gatherings, and among medical professionals in general, that fear and shame tactics are not effective in the long run, and do nothing to mitigate the damage of patients deciding that survival at the cost of living simply isn’t worth it [1]. Thus the focus has shifted from commiseration over common setbacks, to collaboration and celebration over common victories.

Thus it will be seen that the feeling of progress, and hence of hope for the future, seems to lie not so much in renewed pushes for a cure as in more targeted treatments and better quality of life. Long term patients such as myself have largely given up hope in the vague, messianic cure, to be discovered all at once at some undetermined future date. Instead, our hope for a better future, indeed for a future at all, lies in the incremental but, critically, consistent improvement upon the technologies which we are already using, and which have already been proven. Our hope lies in understanding that bad days and failures will inevitably come, and in supporting, not shaming, each other when they do.

While this may not qualify as strictly optimistic, as it does entail a certain degree of pragmatic fatalism in accepting the realities of disabled life, it is the closest I have yet come to optimism. It is a determination that even if things will not be good, they will at least be better. This mindset, unlike rooting for a cure, does not require constant fanatical dedication to fundraising, nor does it breed innovation fatigue from watching the scientific media like a hawk, because it prioritizes the imminent, material, incremental progress of today over the faraway promises of tomorrow.


[1] Footnote: I credit the proximal cause of this cognitive shift in the conference to the progressive aging of the attendee population, and more broadly, to the aging and expanding afflicted population. As more people find themselves in the situation of a “tradeoff” as described above, the focus of care inevitably shifts from disciplinarian deterrence and prevention to one of harm reduction. This is especially true of those coming into the 13-25 demographic, who seem most likely to undertake such acts of “rebellion”. This is, perhaps unsurprisingly, one of the fastest growing demographics for attendance at this particular conference over the last several years, as patients who began attending in childhood come of age.

Incremental Progress Part 3 – For Science!

Previously, I have talked about some of the ways that patients with chronic health issues and medical disabilities feel impacted by the research cycle. Part one of this ongoing series detailed a discussion I participated in at an ad-hoc support group of 18-21 year olds at a major health conference. Part two detailed some of the things I wished I had gotten a chance to add, based on my own experiences and the words of those around me, but never did due to time constraints.

After talking at length about the patient side of things, I’d like to pivot slightly to the clinical side. If we go by what most patients know about the clinical research process, here is a rough picture of how things work:

First, a conclave of elite doctors and professors gathers in secret, presumably in a poorly lit conference room deep beneath the surface of the earth, and holds a brainstorming session of possible questions to study. Illicit substances may or may not be involved in this process, as the creativity required to come up with such obscure and esoteric concerns as "why do certain subspecies of rats have funny looking brains?" and "why do stressful things make people act stressed out?" is immense. At the end of the session, all of the ideas are written down on pieces of parchment, thrown inside a hat, and drawn randomly to decide who will study what.

Second, money is extracted from the public at large by showing people on the street pictures of cute, sad looking children being held at needle-point by an ominously dressed person in a lab coat, with the threat that unless the passerby hands over all of their disposable income, the child will be forced to receive several injections per day. This process is repeated until a large enough pile of cash is acquired. The cash is then passed through a series of middlemen in dark suits smoking cigars, who all take a small cut for all their hard work of carrying the big pile of cash.

At this point, the cash is loaded onto a private jet and flown out to the remote laboratories hidden deep in the Brazilian rainforests, the barren Australian deserts, the lost islands of the arctic and Antarctic regions, and inside the active volcanoes of the pacific islands. These facilities are pristine, shining snow white and steel grey, outfitted with all the latest technology from a mid-century science fiction film. All of these facilities are maintained either by national governments, or by the rich elite of major multinational corporations, who see to all of the upkeep and grant work, leaving only the truly groundbreaking work to the trained scientists.

And who are the scientists? The scientist is a curious creature. First observed in 1543, the scientist was hypothesized by naturalists to be a former human transmogrified by the devil himself in a Faustian bargain, whereby the subject loses most interpersonal skills and material wealth in exchange for incredible intelligence and a steady, monotonous career playing with glassware and measuring equipment. No one has ever seen a scientist in real life, although much footage of the scientist exists online, usually flaunting its immense funding and wearing its trademark lab coat and glasses. Because of the abundance of such footage, yet lack of real-life interactions, it has been speculated that scientists may possess some manner of cloaking which renders them invisible and inaudible outside of their native habitat.

The scientists spend their time exchanging various colored fluid between Erlenmeyer flasks and test tubes, watching to see which produces the best colors. When the best colors are found, a large brazier is lit with all of the paper currency acquired earlier. The photons from the fire reaction may, if the stars are properly aligned, hit the colored fluid in such a way as to cause the fluid to begin to bubble and change into a different color. If this happens often enough, the experiment is called a success.

The scientists spend the rest of their time meticulously recording the precise color that was achieved, which will provide the necessary data for analyst teams to divine the answers to the questions asked. These records are kept not in English, or any other commonly spoken language, but in Scientific, which is written and understood by only a handful of non-scientists, mainly doctors, teachers, and engineers. The process of translation is arduous, and in order to be fully encrypted requires several teams working in tandem. This process is called peer review, and, at least theoretically, this method makes it far more difficult to publish false information, because the arduousness of the process provides an insurmountable barrier to those motivated by anything other than the purest truth.

Now, obviously all of this is complete fiction. But the fact that I can make all of this up with a straight face speaks volumes, both about the lack of public understanding of how modern clinical research works, and about the lack of transparency of the research itself. For as much as we cheer on the march of scientific advancement and technological development, for as much media attention as is spent on new results hot off the presses, and for as much as the stock images and characters of the bespectacled expert adorned in a lab coat and armed with test tubes resound in both popular culture and the popular consciousness, the actual details of what research is being done, and how it is being executed, are notably opaque.

Much of this is by design, or is a direct consequence of how research is structured. The scientific method by which we separate fact from fiction demands a level of rigor, discipline, and restraint that is often antithetical to human nature. A properly organized double-blind controlled trial, the cornerstone of true scientific research, requires that the participants and even the scientists measuring results be kept in the dark as to what they are looking for, to prevent even the subtlest of unconscious biases from interfering. This approach, while great at testing hypotheses, means that the full story is only known to a handful of supervisors until the results are ready to be published.
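As a sketch of what "kept in the dark" can mean in practice, here is one simplified way blinded assignment might be organized; the function and the coding scheme are illustrative assumptions, not a description of any particular trial's protocol:

```python
import random
import secrets

def blinded_assignment(participant_ids, arms=("treatment", "placebo")):
    # Returns two tables: a public one mapping each participant to an opaque
    # code, and a key mapping codes to arms that is sealed away from both
    # participants and the staff who measure outcomes.
    public, key = {}, {}
    for pid in participant_ids:
        code = secrets.token_hex(4)      # opaque label, reveals nothing about the arm
        public[pid] = code
        key[code] = random.choice(arms)  # simplistic randomization, for illustration only
    return public, key

public, key = blinded_assignment(["P001", "P002", "P003", "P004"])
print(public)  # everyone works from this; `key` stays sealed until the data are locked
```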

The standard of scientific writing is also incredibly rigorous. In professional writing, a scientist is not permitted to make any claims or assumptions unless either they have just proven it themselves, in which case they are expected to provide full details of their data and methodology, or can directly cite a study that did so. For example, a scientist cannot simply say that the sky is blue, no matter how obvious this may seem. Nor even can a scientist refer to some other publication in which the author agreed that the sky is blue, like a journalist might while providing citations for a story. A scientist must find the original data proving that the sky is blue, that it is consistently blue, and so forth, and provide the documentation for others to cross check the claims themselves.

These standards are not only obligatory for those who wish to receive recognition and funding; they are enforced as a condition of accreditation and publication in the first place. This mindset has only become more entrenched as economic circumstances have caused funding to become more scarce, and as political and cultural pressure have cast doubts on "mainstream institutions" like academia and major research organizations. Scientists are trained to give only the most defensible claims, in the most impersonal of words, and only in the narrow context they are responsible for studying. Unfortunately, although this process is unquestionably effective at testing complex hypotheses, it is antithetical to the nature of everyday discourse.

It is not, as my colleague said during our conference session, that "scientists suck at marketing", but rather that marketing is fundamentally incongruous with the mindset required for scientific research. Scientific literature ideally attempts to lay out the evidence with as little human perspective as possible, and let the facts speak for themselves, while marketing is in many respects the art of conjuring and manipulating human perspective, even where such perspectives may diverge from reality.

Moreover, the consumerist mindset of our capitalist society amplifies this discrepancy. The constant arms race between advertisers, media, and political factions means that we are awash in information. This information is targeted to us, adjusted to our preferences, and continually served up on a silver platter. We are taught that our arbitrary personal views are fundamentally righteous, that we have no need to change our views unless it suits us, and that if there is really something that requires any sort of action or thought on our part, it will be similarly presented in a pleasant, custom tailored way. In essence, we are taught to ignore things that require intellectual investment or challenge our worldview.

There is also the nature of funding. Because it is so difficult to ensure that trials are actually controlled, and to write the results in such a counterintuitive way, the costs of good research can be staggering, and finding funding can be a real struggle. Scientists may be forced to work under restrictions, or to tailor their research to only the most profitable applications. Results may not be shared to prevent infringement, or to ensure that everyone citing the results is made to pay a fee first. I could spend pages on different stories of technologies that could have benefited humanity, but were kept under wraps for commercial or political reasons.

But of course, it's easy to rag on antisocial scientists and pharmaceutical companies. And it doesn't really get to the heart of the problem. The problem is that, for most patients, especially those who aren't enrolled in clinical trials, and don't necessarily have access to the latest devices, the whole world of research is a black hole into which money is poured with no apparent benefit in return. Maybe if they follow the news, or hear about it from excited friends and relations (see previous section), they might be aware of a few very specific discoveries, usually involving curing one or two rats out of a dozen.

Perhaps, if they are inclined towards optimism, they will be able to look at the trend over the last several decades towards better technology and better outcomes. But in most cases, the truly noticeable everyday changes seem to arrive only long after they have become obvious to the users. The process of getting from patient complaints about a medical device to a marketed fix is agonizingly slow, especially in non-critical areas like usability and quality of life, which do not carry the same profit incentive for insurers to apply pressure.

Many of these issues aren't research problems so much as manufacturing and distribution problems. The bottleneck in making most usability tweaks, the ones that patients notice and appreciate, isn't in research, or even usually in engineering, but in getting a whole new product approved by executives, shareholders, and of course, regulatory bodies. (Again, this is another topic that I could, and probably will at some future date, rant about for several pages, but suffice it to say that when US companies complain about innovation being held up by the FDA, their complaints are not entirely without merit.)

Even after such processes are eventually finished, there is the problem of insurance. Insurance companies are, naturally, incredibly averse to spending money on anything unless and until it has been proven beyond a shadow of a doubt that it is not only safe, but cost effective. Especially for basic, low income plans, change can come at a glacial pace, and for state-funded services, convincing legislators to adjust statutes to permit funding for new innovations can be a major political battle. This doesn’t even begin to take into account the various negotiated deals and alliances between certain providers and manufacturers that make it harder for new breakthroughs to gain traction (Another good topic for a different post).

But these are economic problems, not research problems. For that matter, most of the supposed research problems are really perception problems. So why am I talking about markets and marketing when I said I was going to talk about research?

Because for most people, the notions of “science” and “progress” are synonymous. We are constantly told, by our politicians, by our insurers, by our doctors, and by our professors that not only do we have the very best level of care that has ever been available in human history, but that we also have the most diligent, most efficient, most powerful organizations and institutions working tirelessly on our behalf to constantly push forward the frontier. If we take both of these statements at face value, then it follows that anything that we do not already have is a research problem.

For as much talk as there was during our conference sessions about how difficult life was, how so very badly we all wanted change, and how disappointed and discouraged we have felt over the lack of apparent progress, it might be easy to overlook the fact that far better technologies than are currently used by anyone in that room already exist. At this very moment, there are patients going about their lives using systems that amount to AI-controlled artificial organs. These systems react faster and more accurately than humans could ever hope to, and the clinical results are obvious.

The catch? None of these systems are commercially available. None of them have even been submitted to the FDA. A handful of these systems are open source DIY projects, and so can be cobbled together by interested patients, though in many cases this requires patients to go against medical advice, and take on more engineering and technical responsibility than is considered normal for a patient. Others are in clinical trials, or more often, have successfully completed their trials and are waiting for manufacturers to begin the FDA approval process.

This bottleneck, combined with the requisite rigor of clinical trials themselves, is what has given rise to the stereotype that modern research is primarily chasing after its own tail. This perception makes even realistic progress seem far off, and makes it all the more difficult to appreciate what incremental improvements are released.

Incremental Progress Part 2 – Innovation Fatigue

This is part two of a multi-part perspective on patient engagement in charity and research. Though not strictly required, it is strongly recommended that you read part one before continuing.


The vague pretense of order in the conversation, created by the presence of the few convention staff members, broke all at once, as several dozen eighteen to twenty-one year olds all rushed to get in their two cents on the topic of fundraising burnout (see previous section). Naturally, this was precisely the moment when I struck upon what I wanted to say. The jumbled thoughts and feelings that had hinted at something to add while other people were talking suddenly crystallized into a handful of points I wanted to make, all clustered around a phrase I had heard a few years earlier.

Not one to interrupt someone else, and also wanting undivided attention in making my point, I attempted to wait until the cacophony of discordant voices became more organized. And, taking my cue from similar times earlier in my life when I had something I wished to contribute before a group, I raised my hand and waited for silence.

Although the conversation was eventually brought back under control by some of the staff, I never got a chance to make my points. The block of time we had been allotted in the conference room ran out, and the hotel staff were anxious to get the room cleared and organized for the next group.

And yet, I still had my points to make. They still resonated within me, and I honestly believed that they might be both relevant and of interest to the other people who were in that room. I took out my phone and jotted down the two words which I had pulled from the depths of my memory: Innovation Fatigue.

That phrase has actually come to mean several different things to different groups, and so I shall spend a moment on etymology before moving forward. In research groups and think tanks, the phrase is essentially a stand in for generic mental and psychological fatigue. In the corporate world, it means a phenomenon of diminishing returns on creative, “innovative” projects, that often comes about as a result of attempts to force “innovation” on a regular schedule. More broadly in this context, the phrase has come to mean an opposition to “innovation” when used as a buzzword similar to “synergy” and “ideate”.

I first came across this term in a webcomic of all places, where it was used in a science fiction context to explain why the society depicted, which has advanced technology such as humanoid robots, neurally integrated prostheses, luxury commercial space travel, and artificial intelligence, is so similar to our own, at least culturally. That is to say, technology continues to advance at the exponential pace that it has across recorded history, but in a primarily incremental manner, and therefore most people, either out of learned complacency or a psychological defense mechanism to avoid constant hysteria, act as though all is as it always has been, and are not impressed or excited by the prospects of the future.

In addition to the feeling of fundraising burnout detailed in part one, I often find that I suffer from innovation fatigue as presented in the comic, particularly when it comes to medical research that ought to directly affect my quality of life, or promises to in the future. And what I heard from other patients during our young adult sessions has led me to believe that this is a fairly common feeling.

It is easy to be pessimistic about the long term outlook with chronic health issues. Almost definitionally, the outlook is worse than average, and the nature of human biology is such that the long term outlook is often dictated by the tools we have today. After all, even if the messianic cure arrives perfectly on schedule in five to ten years (for the record, the cure has been ten years away for the last half-century), that may not matter if things take a sharp turn for the worse six months from now. Everyone already knows someone for whom the cure came too late. And since the best way to predict future results, we are told, is past behavior, it would seem accurate to conclude that no serious progress is likely to be made before it is too late.

This is not to say that progress is not being made. On the contrary, scientific progress is continuous and universal across all fields. Over the past decade alone, there has been consistent, exponential progress in not only quality of care and quality of health outcomes, but quality of life. Disease, where it is not less frequent, is at least less impactful. Nor is this progress being made in secret. Indeed, amid all the headlines about radical new treatment options, it can be easy to forget that the diseases they are made to treat still have a massive impact. And this is precisely part of the problem.

To take an example that will be familiar to a wider audience: cancer. It seems that in a given week, there is at least one segment on the evening TV news about some new treatment, early detection method, or some substance or habit to avoid in order to minimize one's risk. Sometimes these segments play every day, or even multiple times per day. When I checked my online news feed, one in every four stories regarded improvements in the state of cancer; to be precise, one was a list of habits to avoid, while another was about a "revolutionary treatment [that] offers new hope to patients".

If you had just been diagnosed with cancer, you would be forgiven for thinking that, with all this seemingly daily progress, the path forward would be relatively simple and easy to understand. And it would be easy for one who knows nothing else to get the impression that cancer treatment is fundamentally easy nowadays. This is obviously untrue, or at least, grossly misleading. Even as cancer treatments become more effective and better targeted, the impact to life and lifestyle remains massive.

It is all well and good to be optimistic about the future. For my part, I enjoy tales about the great big beautiful tomorrow shining at the end of the day as much as anyone. Inasmuch as I have a job, it is talking to people about new and exciting innovations in their medical field, and how they can best take advantage of them as soon as possible for the least cost possible. I don't get paid to do this; I volunteer because I am passionate about keeping progress moving forward, and because some people have found that my viewpoint and manner of expression are uniquely helpful.

However, this cycle of minor discoveries, followed by a great deal of public overstatement and media excitement which never (or at least so seldom as to appear never) quite lives up to the hype, is exhausting. Active hoping in the short term, as distinct from long-term hope for future change, is acutely exhausting. Moreover, the routine of having to answer every minor breakthrough with some statement to interested but not personally versed friends and relations, who see media hyperbole about (steps towards) a cure and immediately begin rejoicing, is quite tiring.

Furthermore, these almost weekly interactions, in addition to carrying all of the normal pitfalls of socio-familial transactions, have a unique capacity to color the perceptions of those who are closest to oneself. The people who are excited about these announcements because they know, or at least believe, that they represent an end to, or at least a decrease in, one’s medical burden are often among those whom one least wishes to alienate with casual pessimism.

For indeed, failing to respond with appropriate zeal to each and every announcement does lead to being publicly branded a pessimist, or even depressed. Or worse: it suggests that one is not taking all appropriate actions to combat one’s disease, and is therefore undeserving of sympathy and support. After all, if the person on the TV says that cancer is curable nowadays, and your cancer hasn’t been cured yet, it must be because you’re not trying hard enough. Clearly you don’t deserve my tax dollars and donations to fund your treatment and research. After all, you don’t really need them anymore. Possibly you are deliberately causing harm to yourself, and therefore are insane, and I needn’t listen to anything you say to the contrary. It is, I hope, easy to see how frustrating this dynamic can become, even when it is not exaggerated to the point of satire.

One of the phrases that I heard repeated a lot at the conference was “patient investment in research and treatment”. When patients aren’t willing to invest emotionally and mentally in their own treatment, in their own wellbeing, the problems are obvious. To me, the cause, or at least one of the causes, is equally obvious. Patients aren’t willing to invest because it is a risky investment. The up-front cost of pinning all of the hopes and dreams for one’s future on a research hypothesis is enormous. The risk is high, as anyone who has studied the economics of research and development knows. Payouts aren’t guaranteed, and when they do come, they will be incremental.

Patients who aren’t “investing” in state of the art care aren’t doing so because they don’t want to get better care. They aren’t investing because they either haven’t been convinced that it is a worthwhile investment, or are emotionally and psychologically spent. They have tried investing, and have lost out. They have developed innovation fatigue. Tired of incremental progress which does not offer enough payback to earnestly plan for a better future, they turn instead to what they know to be stable: the pessimism here and now. Pessimism isn’t nearly as shiny or enticing, and it doesn’t offer the slim chance of an enormous payout, but it is reliable and predictable.

This is the real tragedy of disability, and I am not surprised in the slightest that, now that treatments exist which amount to endlessly repeatable stopgaps rather than a full cure, researchers, medical professionals, and patients themselves have begun to encounter this problem. The incremental nature of progress, the exaggeratory nature of popular media, and the basic nature of humans in society amplify this problem and cause it to concentrate and calcify into the form of innovation fatigue.

Why I Fight

Yes, I know I said that I would continue with the Incremental Progress series with my next post. It is coming, probably over or near the weekend, as that seems to be my approximate unwritten schedule. But I would be remiss if I failed to mark today of all days somehow on here.


The twentieth of July, two thousand and seven. A date which I shall be reminded of for as long as I live. The date that I define as the abrupt end of my childhood and the beginning of my current identity. The date which is a strong contender for the absolute worst day of my life, and would win hands down save for the fact that I slipped out of consciousness due to overwhelming pain, and remained in a coma through the next day.

It is the day that is marked in my calendar simply as “Victory Day”, because on that day, I did two things. First, I beat the odds on what was, according to my doctors, a coin toss over whether I would live or die. Second, it was the day that I became a survivor, and swore to myself that I would keep surviving.

I was in enough pain and misery that day that I know I could very easily have given up. My respiratory system was already failing, and it would have been easy enough to simply stop making the effort to keep breathing. It might have even been the less painful option. But as close as I already felt to the abyss, I decided I would go no further. I kept fighting, as I have kept fighting ever since.

I call this date Victory Day in my calendar, partly because of the victory that I won then, but also because each year, each annual observance, is another victory in itself. Each year still alive is a noteworthy triumph. I am still breathing, and while that may not mean much for people who have never had to endure as I have endured, it is certainly not nothing.

I know it’s not nothing, partly because this year I got a medal for surviving ten years. The medals are produced by one of the many multinational pharmaceutical corporations on which I depend for my continued existence, and date back to a few decades ago, when ten years was about the upper bound of life expectancy with this disease.

Getting a medal for surviving provokes a lot of bizarre feelings. Or perhaps I should say it amplifies them, since it acts as a physical token of my annual Victory Day observances. This has always been a bittersweet occasion. It reminds me of what my life used to be like before the twentieth of July, two thousand and seven, and of the pain I endured the day I nearly died, the pain that I work so diligently to avoid. In short, it reminds me why I fight.

Angry in May

I am angry today. I don’t like feeling generally angry, because it’s usually quite draining without being actually fulfilling. Yet I feel rather compelled to be angry. I know several people who feel near or on the brink of desperation because of recent events regarding healthcare in particular and politics in general. I want to help, but there seems to be increasingly little I can do. I myself am somewhat worried about the future. In the wake of all of this I feel that I have the choice between being paralyzed by fear or being motivated by anger. The latter seems like an obvious choice.

The beginning of May is a time of a number of small holidays. April 30th marks the beginning of the end of World War II in Europe, with the suicides of Hitler and company in Berlin and the transfer of governmental power to Reichspräsident (formerly Grand Admiral) Karl Dönitz, who would authorize the unconditional surrender of Nazi Germany on May 7th. That surrender is commemorated as VE Day in the West on May 8th, and as Victory Day in the now-former Soviet bloc on May 9th, owing to the time difference between Berlin and Moscow (and a few mishaps regarding paperwork and general distrust of the Soviets). Depending on where you live, this is either interesting trivia or a very big deal.

Victory Day in Russia is one of the really big political occasions, and is celebrated with an accordingly large show of military force. These parades are a chance for Russia to show off all the fancy toys it would use to annihilate any future invaders, for ordinary people to honor those they lost during the war, for old people and leftists to pine nostalgically for the halcyon days when the Soviet Union was strong and whippersnappers knew their place, and for western intelligence organizations to update their assessments of Russian military hardware. This last one has caused problems in the past, as miscounts of the numbers of bombers and missile launchers (the Soviets were cycling them through to inflate their numbers) led to the impression that a bomber gap, and later a missile gap, existed between the Soviets and the US for much of the Cold War.

Speaking of bombastic parades, the First of May is known either as an occasion for maypole dancing or as one for massive demonstrations under masses of red flags. Prior to the 1800s, May Day was something of a spring festival, likely originally associated with the Roman festival for Flora, the goddess of flowers, and falling on what was traditionally the first day of summer. As Roman paganism fell out of fashion, the festival became a more secular celebration of springtime.

In 1904, the Sixth Congress of the Second International declared that the first of May would be a day of protest and demonstration for labor organizations, in memory of the Haymarket Affair of May 4th, 1886, in Chicago. Subsequently, May Day became something of a major event for labor and workers’ rights groups. This was solidified after the formation of the Soviet Union (they seem to be a recurring element here), which, as a self-styled “workers’ state”, made May Day celebrations a big deal within its borders, and used the occasion to further sympathetic causes abroad.

This caused something of a feedback loop, as governments caught up in anti-communist hysteria sought either to suppress (and thus, in many ways, legitimize) May Day demonstrations, or to control them by making them official. Thus, in many countries, the 1st of May is celebrated as Labour Day (generally with the ‘u’). In 1955, Pope Pius XII declared May Day a feast day for Saint Joseph the Worker, as a counterpoint to the labor celebrations.

May the Fourth is, of course, celebrated as Star Wars Day, for obvious reasons. Historically, it has been the day that I dress up in full character costume for school. Unfortunately, this year I was too sick to attend school at all, in costume or otherwise. I was also recently informed that in Ohio in particular, May 4th is recognized primarily as the anniversary of the Kent State Massacre during the Vietnam War. To quote the friend who explained it to me:

So today is May 4th, affectionately known by most as Star Wars Day. That is what it used to be for me until I went to Kent State. Now May 4th is a day of remembrance. Because today in 1970, the National Guard opened fire on a group of students peacefully protesting the Vietnam War and killed 4. It has become a day for the entire campus to go silent, to walk the memorial, to reflect on how important it is to speak up about what you believe is wrong. Politics is not always elections. Sometimes it is holding a candle at a memorial of people killed by the government. Sometimes it is remembering and refusing to forget. Either way, it is action. That is one of the most important lessons I have learned at Kent State.

The opening days of May have for some time now been a time of year when I typically pause and reflect. Having several small holidays (that is, holidays well known enough that I am reminded of their passing, without necessarily needing to go out of my way to prepare for them in advance) has helped add to this. Early May is typically long enough after cold and flu season that even if I’m not back in the thick of things, I’m usually on my feet. It’s also after midterms and standardized testing, while not yet being close enough to final exams that I can feel the weight of all my unfinished work bearing down on me in full force. Early May is a lull when I can get my bearings before hunkering down for the last act of the school year and hitting the ground running for summer.

So, where am I? How am I doing? How am I going to come back into school roaring?

I don’t know the answer to any of these questions. There are too many things up in the air in my life, both at the micro and macro level. I feel uncertain and a little scared. And I feel angry.

Inasmuch as I have any real self-confidence and self-worth, I pride myself on my intelligence. I like that I can recall off the top of my head several different holiday occasions in the space of a fortnight, and succinctly explain their historical and cultural context. I enjoy being a know-it-all. I loathe the unknown, and I detest the substitution of squishy feelings for hard facts. I consider these principles integral to my identity and personal value, and find it difficult and troubling to envision any future where I do not possess these traits, or where these merits are not accepted.

The Antibiotic Apocalypse and You

Following up on the theme established inadvertently last week: I’m still sick, though on the whole I’m probably not feeling worse, and arguably feeling marginally better. In an effort to avoid the creativity-shattering spiral that happens when I stop writing altogether, this week I will endeavor to present some thoughts on a subject I have been compelled to think about anyway: antibiotics.

A lot of concerns have been raised, rightfully, over the appearance of drug-resistant pathogens, with some going so far as to dub the growing ranks of resistant bacteria “the antibiotic apocalypse”. While antibiotic resistance isn’t a new problem per se, the newfound resistance to our more powerful “tiebreaker” drugs is certainly cause for concern.

In press releases from groups such as the World Health Organization and the Centers for Disease Control and Prevention, much of the advice, while sound, has been concentrated on government organizations and healthcare providers. And while these people certainly have more responsibility and ability to react, this does not mean that ordinary concerned citizens cannot make a difference. Seeing as I am a person who relies on antibiotics a great deal, I figured I’d share some of the top recommendations for individuals to help in the global effort to ward off antibiotic resistance.

Before going further, I am compelled to restate what should be common sense: I don’t have actual medical qualifications, and thus what follows is pretty much a rehash of the general, nonspecific information that actual experts have published. With this in mind, my ramblings are no substitute for actual, tailored medical advice, and shouldn’t be treated as such.

Before you’re put on antibiotics

1) Stay home when you’re sick

This one is going to be repeated, because it bears repeating. Antibiotic-resistant strains spread like any other illness, and the single best way to avoid spreading illness is to minimize contact with other people. Whether or not you are currently infected with an antibiotic-resistant strain (in fact, whether or not you even have an illness that is treatable by antibiotics), staying at home when you’re sick will help you get better sooner, and is the single most important thing you can do for public health in general.

2) Wash hands, take your vitamins, etcetera.

So obviously the best way to deal with illness is to avoid catching or spreading it in the first place. This means washing your hands frequently (and properly! Sprinkling on some room-temperature water like a baptism for your hands isn’t going to kill any germs), preparing food to proper standards, avoiding contact with sick people and the things they have touched, eating all of your vegetables, getting your vaccinations; you get the picture. Even if this doesn’t prevent you from getting sick, it will ensure that your immune system is in fighting shape if you do.

3) Know how antibiotics work, and how resistance spreads

Remember high school biology? This is where all that arcana comes together. Antibiotics aren’t a magical cure-all. They use specific biological and chemical mechanisms to target specific kinds of organisms inside you. Antibiotics don’t work on viruses, because viruses aren’t living organisms in the way bacteria are, and different kinds of antibiotics work against different diseases because of these biological and chemical distinctions.

Understanding these differences when making treatment decisions can be the difference between getting effective treatment and walking away unharmed, and spending time in the hospital fighting a resistant strain. Antibiotic resistance is literally a textbook example of evolution, so understanding how evolution works will help you combat it (a toy simulation of this selection process appears at the end of this section).

Public understanding of antibiotics and antibiotic resistance is such a critical part of combating resistance that it has been named by the World Health Organization as one of the key challenges in preventing a resistant superbug epidemic.
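
To make the selection mechanism concrete, here is a minimal toy sketch in Python. It is purely illustrative, not a medical or epidemiological model: the population size, mutation rate, and survival rule are all invented for demonstration. Each dosed day kills most of the susceptible bacteria, the survivors reproduce with small random mutations, and the surviving population drifts toward higher resistance; abandoning the course early simply hands the already-selected, more-resistant survivors a head start.

    import random

    def simulate_course(days=14, stop_after=None, seed=1):
        """Toy model of selection for resistance during an antibiotic course.

        Each bacterium is represented only by a 'resistance' value in [0, 1],
        which is its chance of surviving a dosed day. Survivors reproduce with
        small random mutations, so every dosed day selects for higher resistance.
        Setting stop_after=N abandons the course after day N, letting the
        already-selected survivors regrow unchecked. (All numbers are invented.)
        """
        rng = random.Random(seed)
        population = [rng.uniform(0.0, 0.3) for _ in range(500)]  # mostly susceptible
        for day in range(1, days + 1):
            if stop_after is None or day <= stop_after:
                population = [r for r in population if rng.random() < r]  # dosed day
            if not population:
                return "cleared on day %d" % day
            # Survivors double; offspring inherit resistance with a slight mutation.
            offspring = [min(1.0, max(0.0, r + rng.gauss(0, 0.05))) for r in population]
            population = (population + offspring)[:2000]
        mean = sum(population) / len(population)
        return "not cleared; %d bacteria left, mean resistance %.2f" % (len(population), mean)

    print("full 14-day course: ", simulate_course())
    print("stopped after day 5:", simulate_course(stop_after=5))

Depending on the seed, the full course may or may not clear this toy infection, but in either case the bacteria that survive any partial exposure tend to be more resistant than the starting population. That is the whole point: every exposure that does not finish the job acts as a selection event.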

4) Treat anyone who is on antibiotics as if they were sick

If someone is on antibiotics and still doesn’t feel or seem well (and isn’t at home, for some reason), you’re going to want to take that at face value and keep your distance. You can also kindly suggest that they consider going home and resting. If you become sick after contact with such persons, be sure to mention it to your doctor.

If they’re feeling otherwise fine, you want to treat them as if they were immunocompromised. In other words, think of how you would conduct yourself health-wise around a newborn, or an elderly person. Extra hand-washing, making sure to wipe down surfaces, you get the picture. If they’re on antibiotics preventatively for a chronic immunodeficiency, they will appreciate the gesture. If they’re recovering from an acute illness, taking these extra precautions will help ensure that they don’t transmit pathogens and that their immune system has time to finish the job and recover.

5) Never demand antibiotics

I’ll admit, I’m slightly guilty of this one myself. I deal with a lot of doctors, and sometimes when I call in for a sick-day consult, I get paired with a GP who isn’t quite as experienced with my specific medical history, who may not have had time to go through my whole file, and who hasn’t been in close contact with my other dozen specialist doctors. Maybe they don’t recognize which of my symptoms are telltale signs for one diagnosis or another, or how my immunology team has a policy of escalating straight to a fourteen day course, or whatever.

I sympathize with the feeling of just wanting the doctor to write the stupid prescription like last time so one can get back to the important business of wasting away in bed. However, this is a problem. Not every patient is familiar with how antibiotics work or with the intricacies of prescribing them, and so too often, when patients ask for antibiotics, it ends up being the wrong call. This problem is amplified in countries such as the United States, where economics and healthcare policies make it more difficult for doctors to refuse. This is also a major issue with prescription painkillers in the United States. So, listen to your doctor, and if they tell you that you don’t need antibiotics, don’t pressure them.

Bear in mind that if a doctor says you don’t need antibiotics, it probably means that taking them won’t help you or make you feel any better, and could cause serious harm. For reference, approximately one in five hospital visits for drug side effects and overdoses is related to antibiotics.

It should go without saying that you should only get antibiotics (or any medication, really) via a prescription from your doctor, but apparently this is a serious enough problem that both the World Health Organization and the Centers for Disease Control and Prevention feel the need to mention this on their patient websites. So, yeah. Only take the drugs your doctor tells you to. Never take antibiotics left over from previous treatment, or from friends. If you have antibiotics left over from previous treatment, find your local government’s instructions for proper disposal.

If you are prescribed antibiotics

1) Take your medication on schedule, preferably with meals

Obviously, specific dosing instructions overrule this, but generally speaking, antibiotics are given a certain number of times per day, spaced a certain number of hours apart, and on a full stomach. Aside from helping to ensure that you remember to take all of your medication, keeping to a schedule that coincides with mealtimes helps space doses out evenly and keeps the antibiotics working at maximum efficiency (see the sketch at the end of this item).

Skipping doses, or taking them improperly, vastly increases both the likelihood of developing resistant pathogens and the risk of side effects.
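
As a concrete illustration of the mealtime-anchored schedule described above, here is a small Python sketch that generates reminder times for a course. The start date, meal times, and course length are hypothetical examples, and any real schedule should come from the prescription label or a pharmacist.

    from datetime import date, datetime, time, timedelta

    def dose_schedule(start_day, meal_times, days):
        """Build a list of dose reminders aligned to mealtimes.

        Illustrative only: the prescription label and the pharmacist's
        instructions always take precedence over a generic schedule.
        """
        start = date.fromisoformat(start_day)
        return [datetime.combine(start + timedelta(days=d), time.fromisoformat(m))
                for d in range(days)      # one block of doses per day of the course
                for m in meal_times]      # one dose per listed mealtime

    # Hypothetical example: three doses a day with meals, for a fourteen-day course.
    for reminder in dose_schedule("2017-04-28", ["08:00", "13:00", "19:00"], days=14)[:6]:
        print(reminder.strftime("%a %d %b, %H:%M"))

Printing only the first six reminders keeps the example short; the full list would cover all forty-two doses of a three-a-day, fourteen-day course.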

2) Take probiotics between dosages

Antibiotics are fairly indiscriminate in what they kill. Although this makes them more effective against pathogens, it can also be devastating to the “helpful bacteria” that line your digestive tract. To this end, most gastroenterologists recommend taking a probiotic in between doses of antibiotics. Aside from helping your body keep up its regular processes and repair collateral damage faster, this also occupies space and resources that would otherwise be ripe for the taking by the bacteria making you sick.

3) Keep taking your antibiotics, even if you feel well again

You can feel perfectly fine even while millions of hostile cells linger in your body. Every hostile cell that survives treatment is that much more resistant, and can go on to start the infection all over again; only this time, the same antibiotic may be powerless to halt it. Only by taking all of your antibiotics on the schedule prescribed can you ensure that the infection is crushed the first time.

Furthermore, even though you may feel fine, your immune system has been dealt a damaging blow, and needs time to rebuild its forces. Continuing to take your antibiotics will help ensure that your weakened immune system does not let potentially deadly secondary infections slip through and wreak havoc.

4) Stay Home and Rest

Is this message getting through yet?

If you are on antibiotics, it means your body is engaged in a struggle, and it needs all of your resources focused on supporting that fight. Even the most effective antibiotics cannot eliminate every hostile cell. Your immune system plays a vital role in hunting down and eliminating the remaining pathogens and preventing resistant strains from multiplying and taking hold. In the later stages of this fight, you may not even feel sick, as there are too few remaining cells to cause serious damage. However, unless all of them are exterminated, the fight will continue and escalate.

Ideally, you should stay at home and rest for as long as you are taking antibiotics. However, since antibiotics are often given in courses of fourteen or twenty-one days, this is impossible for most adults. At the barest minimum, you should stay home until you feel completely better, or until you are halfway through your course of antibiotics, whichever is longer.

If you do return to your normal routine while taking antibiotics, keep in mind that you are still effectively sick. You should therefore take all of the normal precautions: extra hand washing, wiping down surfaces, extra nutrition and rest, and the like.

5) If you don’t feel better, contact your doctor immediately

Remember: antibiotics are fairly all-or-nothing, and once an illness has developed resistance to a specific treatment, continuing that line of treatment is unlikely to yield positive results and extremely likely to cause increased resistance to future treatment. Obviously, antibiotics, like any course of treatment, take some time to work, and won’t suddenly make you feel completely better overnight. However, if you are more than halfway through your treatment course and see no improvement, or feel markedly worse, this could be a sign that you require stronger medication.

This does not mean that you should stop taking your current medication, nor should you take this opportunity to demand stronger medication (both of these are really, colossally bad ideas). However, you should contact your doctor and let them know what’s going on. Your doctor may prescribe stronger antibiotics to replace your current treatment, or they may suggest additional adjunctive therapy to support your current treatment.

Works Consulted

“Antibiotic Resistance.” World Health Organization, n.d. Web. 28 Apr. 2017. <http://www.who.int/mediacentre/factsheets/antibiotic-resistance/en/>.

Freuman, Tamara Duker. “How (and Why) to Take Probiotics When Using Antibiotics.” U.S. News & World Report, 29 July 2014. Web. 28 Apr. 2017.

“About Antibiotic Use and Resistance.” Centers for Disease Control and Prevention, 16 Nov. 2016. Web. 28 Apr. 2017. <https://www.cdc.gov/getsmart/community/about/index.html>.

Office of the Commissioner. “Consumer Updates – How to Dispose of Unused Medicines.” U.S. Food and Drug Administration, n.d. Web. 28 Apr. 2017. <https://www.fda.gov/forconsumers/consumerupdates/ucm101653.htm>.

“Antimicrobial (Drug) Resistance.” National Institute of Allergy and Infectious Diseases, National Institutes of Health, n.d. Web. 28 Apr. 2017. <https://www.niaid.nih.gov/research/antimicrobial-resistance>.

Me vs. Ghost Me

My recent attempts to be a bit more proactive in planning my life have yielded an interesting and unexpected result. It appears that trying to use my own My Disney Experience account in planning my part of our family vacation has unleashed a ghost version of myself that is now threatening to undo all of my carefully laid plans, steal my reservations, and wreck my family relationships.

Context: Last summer, I was at Disney World for a conference, which included a day at the park. Rather than go through the huff and puff of getting a disability pass to avoid getting trapped in lines and the medical havoc that could wreak, I opted instead to simply navigate the park with fastpasses. Doing this effectively required that I have a My Disney Experience account in order to link my conference-provided ticket and book fastpasses from my phone. So I created one. For the record, the system worked well over the course of that trip.

Fast forward to the planning for this trip. Given my historical track record with long-term planning, and the notable chaos of my family’s collective schedule, it is generally my mother who takes point on the strategic end (I like to believe that I pick up the slack in tactical initiative, but that’s neither here nor there). Booking our room and acquiring our Magic Bands naturally required putting names down for each of our family members, which, evidently, spawned “ghost” accounts in the My Disney Experience system.

This is not a particularly large concern for my brother or father, both of whom are broadly unfazed by such provincial concerns as being in the right place at the right time, at least while on vacation. For me, however, as one who has to carefully judge medication doses based on expected activity levels over the next several hours, and who is, more generally, a perpetual worrier, being able to access and, if necessary, change my plans on the fly is rather crucial. In the case of Disney, this means having my own account, rather than my “ghost”, be listed for all pertinent reservations and such.

The solution is clear: I must hunt down my ghostly doppelgänger and eliminate him. The problem is that doing so would cancel all of the current reservations. So before killing my ghost, I first have to steal his reservations. As a side note: It occurs to me belatedly that this dilemma would make an interesting and worthwhile premise for a sci-fi thriller set in a dystopia where the government uses digital wearable technology to track and control its population.

All of this has served as an amusing distraction from the latest sources of distress in my life, namely: having to sequester myself in my home and attend meetings with the school administrators by telephone because of a whooping cough outbreak, the escalating raids against immigrant groups in my community, neo-fascist graffiti at my school, and having to watch people I despise succeed in ways that I never could. Obviously, not all of these are equal. But they all contribute to a general feeling that I have been under siege of late.

While reasonable people can disagree over whether the current problems I face are truly new, they certainly seem to have taken on a new urgency. Certainly this is the first time since I arrived back in the United States that immigrant communities in my area have been subject to ICE raids. Although this is not the first time that my school has experienced fascist graffiti, it is the largest such incident. The political situation, which was previously an abstract thing occasionally remarked upon in conversation, has become far more tangible. I can see the results in the streets and in my communications with my friends as clearly as I can see the weather.

I might have been able to move past these incidents and focus on other areas of my life, except that other areas of my life have also come under pressure, albeit for different reasons. The school nurse’s office recently disclosed that there has been at least one confirmed case of whooping cough. As I have written previously, this kind of outbreak is a major concern for me, and means in practice that I cannot put myself at risk by going into school until it is resolved. Inconveniently, the announcement came only days before I was due to have an important meeting with school administrators (something which is nerve-wracking at the best of times, and day-ruining at others). The nature of the meeting meant that it could not be postponed, and so it had to be conducted by telephone.

At the same time, events in my personal life have conspired to force me to confront an uncomfortable truth: People I despise on a personal level are currently more successful and happier than me. I have a strong sense of justice, and so seeing people whom I know have put me and others down in the past be rewarded, while I myself yet struggle to achieve my goals, is quite painful. I recognize that this is petty, but it feels like a very personal example of what seems, from where I stand, to be an acutely distressing trend: The people I consider my adversaries are ahead and in control. Policies I abhor and regard as destructive to the ideals and people I hold dear are advancing. Fear and anger are beating out hope and friendship, and allowing evil and darkness to rise.

Ghost me is winning. He has wreaked havoc in all areas of my life, so that I feel surrounded and horrifically outmatched. He has led me to believe that I am hated and unwanted by all. He has caused fissures in my self-image, making me question whether I can really claim to stand for the weak if I’m not willing to throw myself into every skirmish. He has made me wonder whether, if these people whom I consider misguided and immoral are so successful and happy, it is perhaps I who am the immoral one.

These are, of course, traps. Ghost me, like real me, is familiar with the Art of War, and knows that the best way to win a fight is to do so without actual physical combat. And because he knows me; because he is me, and because I am my own worst enemy, he knows how best to set up a trap that I can hardly resist walking into. He tries to convince me to squander my resources and my endurance fighting battles that are already lost. He tries to poke me everywhere at once to disorient me and make me doubt my own senses. Worst of all, he tries to set me up to question myself, making me doubt myself and why I fight, and making me want to simply capitulate.

Not likely.

What ghost me seems to forget is that I am among the most relentlessly stubborn people either of us knows. I have fought continuously for the majority of my life to survive against the odds, and against the wishes of certain aspects of my biology. And I will continue fighting: if necessary, for years; if necessary, alone. I am, however, not alone. And if I feel surrounded, then ghost me is not only surrounded, but outnumbered.