There is Power in a Wristband


This post is part of the series: The Debriefing. Click to read all posts in this series.


Quick note: this post contains stuff that deals with issues of law and medical advice. While I always try to get things right, I am neither a doctor nor a lawyer, and my blog posts are not to be taken as such advice.

Among people I know for whom it is a going concern, medical identification is a controversial subject. For those not in the know, medical identification is a simple concept. The idea is to have some sort of preestablished method to convey to first responders and medical personnel the presence of a condition which may either require immediate, specific, treatment (say, a neurological issue that requires the immediate application of a specific rescue medication), or impact normal treatment (say, an allergy to a common drug) in the event that the patient is incapacitated.

The utilitarian benefits are obvious. In an emergency situation, where seconds count, making sure that this information is discovered and conveyed can, and often does, make the difference between life and death, and prevent delays and diversions that are costly in time, money, and future health outcomes. The importance of this element cannot be overstated. There are also purported legal benefits to having pertinent medical information easily visible for law enforcement and security to see. On the other hand, some will tell you that this is a very bad idea, since it gives legal adversaries free evidence about your medical conditions, which is something they’d otherwise have to prove.

The arguments against are equally apparent. There are obvious ethical quandaries in compelling a group of people to identify themselves in public, especially as in this case it pertains to normally confidential information about medical and disability status. And even where the macro-scale political considerations do not enter it, there are the personal considerations. Being forced to make a certain statement in the way one dresses is never pleasant, and retaining that mode of personal choice and self expression can make the risk of exacerbated medical problems down the line seem like a fair trade off.

I can see both sides of the debate here. Personally, I do wear some medical identification at all times – a small bracelet around my left wrist – and have more or less continuously for the last decade. It is not so flamboyantly visible as some people would advise. I have no medical alert tattoos, nor embroidered jacket patches. My disability is not a point of pride. But it is easily discoverable should circumstances require it.

Obviously, I think that what I have done and continue to do is fundamentally correct and right, or at least, is right for me. To do less seems to me foolhardy, and to do more seems not worth the pains required. The pains it would cause me are not particularly logistical. Rather they refer to the social cost of my disability always being the first impression and first topic of conversation.

It bears repeating that, though I am an introvert in general, I am not particularly bashful about my medical situation. Provided I feel sociable, I am perfectly content to speak at length about all the nitty gritty details of the latest chapter in my medical saga. Yet even I have a point at which I am uncomfortable advertising that I have a disability. While I am not averse to inviting empathy, I do not desire others to see me as a burden, nor for my disability to define every aspect of our interactions any more than the fact that I am left handed, or brown eyed, or a writer. I am perfectly content to mention my medical situation when it comes up in conversation. I do not think it appropriate to announce it every time I enter a room.

Since I feel this way, and I am also literally a spokesman and disability advocate, it is easy to understand that there are many who do not feel that it is even appropriate for them to say as much as I do. Some dislike the spotlight in general. Others are simply uncomfortable talking about a very personal struggle. Still others fear the stigma and backlash associated with any kind of imperfection and vulnerability, let alone one as significant as a bona fide disability. These fears are not unreasonable. The decision to wear medical identification, though undoubtedly beneficial to health and safety, is not without a tradeoff. Some perceive that tradeoff, rightly or wrongly, as not worth the cost.

Even though this position is certainly against standard medical advice, and I would never advocate people go against medical advice, I cannot bring myself to condemn those who go against this kind of advice with the same definitiveness with which I condemn, say, refusing to vaccinate for non-medical reasons, or insurance companies compelling patients to certain medical decisions for economic reasons. The personal reasons, even though they are personal and not medical, are too close to home. I have trouble finding fault with a child who doesn’t want to wear an itchy wristband, or a teenager who just wants to fit in and make their own decisions about appearance. I cannot fault them for wanting what by all rights should be theirs.

Yet the problem remains. Without proper identification it is impossible for first responders to identify those who have specific, urgent needs. Without having these identifiers be sufficiently obvious and present at all times, the need for security and law enforcement to react appropriately to those with special needs relies solely on their training beforehand, and on them trusting the people they have just detained.

In a perfect world, this problem would be completely moot. Even in a slightly less than perfect world, where all these diseases and conditions still existed, but police and first responder training was perfectly robust and effective, medical identification would not be needed. Likewise, in such a world, the stigma of medical identification would not exist; patients would feel perfectly safe announcing their condition to the world, and there would be no controversy in adhering to the standard medical advice.

In our world, it is a chicken-egg problem, brought on by understandable, if frustrating, human failings at every level. Trying to determine fault and blame ultimately comes down to questioning the nitty gritty of morality, ethics, and human nature, and as such, is more suited to an exercise in navel gazing than an earnest attempt to find solutions to the problems presently faced by modern patients. We can complain, justifiably and with merit, that the system is biased against us. However such complaints, cathartic though they may be, will not accomplish much.

This vicious cycle, however, can be broken. Indeed, it has been broken before, and recently. Historical examples abound of oppressed groups coming to break the stigma of an identifying symbol, and claiming it as a mark of pride. The example that comes most immediately to mind is the recent progress that has been made for LGBT+ groups in eroding the stigma of terms which quite recently were used as slurs, and in appropriating symbols such as the pink triangle as a symbol of pride. In a related vein, the Star of David, once known as a symbol of oppression and exclusion, has come to be used by the Jewish community in general, and Israel in particular, as a symbol of unity and commonality.

In contrast to such groups, the road for those requiring medical identification is comparatively straightforward. The disabled and sick are already widely regarded as sympathetic, if pitiable. Our symbols, though they may be stigmatized, are not generally reviled. When we face insensitivity, it is usually not because those we face are actively conspiring to deny us our needs, but simply because we may well be the first people they have encountered with these specific needs. As noted above, this is a chicken-egg problem, as the less sensitive the average person is, the more likely a given person with a disability that is easily hidden is to try and fly under the radar.

Imagine, then, if you can, such a world, where a medical identification necklace is as commonplace and unremarkable as a necklace with a religious symbol. Imagine seeing a parking lot with stickers announcing the medical condition of a driver or passenger with the same regularity as you see an advertisement for a political cause or a vacation destination. Try to picture a world where people are as unconcerned about seeing durable medical equipment as American flag apparel. It is not difficult to imagine. We are still a ways away from it, but it is within reach.

I know that this world is within reach, partially because I myself have seen the first inklings of it. I have spent time in this world, at conferences and meetings. At several of these conferences, wearing a colored wristband corresponding to one’s medical conditions is a requirement for entry, and here it is not seen as a symbol of stigma, but one of empowerment. Wristbands are worn in proud declaration, amid short sleeved shirts for walkathon teams, showing bare medical devices for all the world to see.

Indeed, in this world, the medical ID bracelet is a symbol of pride. It is shown off amid pictures of fists clenched high in triumph and empowerment. It is shown off in images of gentle hands held in friendship and solidarity.

It is worth mentioning, with regard to this last point, that the system of wristbands is truly universal. That is to say, even those who have no medical afflictions whatsoever are issued wristbands, albeit in a different color. To those who are not directly afflicted, they are a symbol of solidarity with those who are. But it remains a positive symbol regardless.

The difference between these wristbands, which are positive symbols, and ordinary medical identification, which is at best inconvenient and at worst oppressive, has nothing to do with the physical discrepancies between them, and everything to do with the attitudes that are attached by both internal and external pressure. The wristbands, it will be seen, are a mere symbol, albeit a powerful one, onto which we project society’s collective feelings towards chronic disease and disability.

Medical identification is in itself amoral, but in its capacity as a symbol, it acts as a conduit to amplify our existing feelings and anxieties about our condition. In a world where disabled people are discriminated against, left to go bankrupt from buying medication for their survival, and even targeted by extremist groups, it is not hard to find legitimate anxieties to amplify in this manner. By contrast, in an environment in which the collective attitude towards these issues is one of acceptance and empowerment, these projected feelings can be equally positive.

The Moral Hazard of Hope




Suppose that five years from today, you will receive an extremely large windfall. The exact number isn’t important, but let’s just say it’s large enough that you’ll never have to budget things again. Not technically infinite, because that would break everything, but for the purposes of one person, basically undepletable. Let’s also assume that this money becomes yours in such a way that it can’t be taxed or swindled away in getting it. This is also an alternate universe where inheritance and estates don’t exist, so there’s no scheming among family, and no point in considering them in your plans. Just roll with it.

No one else knows about it, so you can’t borrow against it, nor is anyone going to treat you differently until you have the money. You still have to be alive in five years to collect and enjoy your fortune. Freak accidents can still happen, and you can still go bankrupt in the interim, or get thrown in prison, or whatever, but as long as you’re around to cash the check five years from today, you’re in the money.

How would this change your behavior in the interim? How would your priorities change from what they are?

Well, first of all, you’re probably not going to invest in retirement, or long term savings in general. After all, you won’t need to. In fact, further saving would be foolish. You’re not going to need that extra drop in the bucket, which means saving it would be wasting it. You’re legitimately economically better off living the high life and enjoying yourself as much as possible without putting yourself in such severe financial jeopardy that you would be increasing your chances of being unable to collect your money.

If this seems insane, it’s important to remember here, that your lifestyle and enjoyment are quantifiable economic factors (the keyword is “utility”) that weigh against the (relative and ultimately arbitrary) value of your money. This is the whole reason why people buy stuff they don’t strictly need to survive, and why rich people spend more money than poor people, despite not being physiologically different. Because any money you save is basically worthless, and your happiness still has value, buying happiness, expensive and temporary though it may be, is always the economically rational choice.

This is tied to an important economic concept known as Moral Hazard, a condition where the normal risks and costs involved in a decision fail to apply, encouraging riskier behavior. I’m stretching the idea a little bit here, since it usually refers to more direct situations. For example, if my parents pay for a credit card that I carry “for emergencies”, and I know I’m never going to see the bill, because my parents care more about our family’s credit score than most anything I would think to buy, then that’s a moral hazard. I have very little incentive to do the “right” thing, and a lot of incentive to do whatever I please.

There are examples in macroeconomics as well. For example, many say that large corporations in the United States are caught in a moral hazard problem, because they know that they are “too big to fail”, and will be bailed out by the government if they get into serious trouble. As a result, these companies may be encouraged to make riskier decisions, knowing that any profits will be massive, and any losses will be passed along.

In any case, the idea is there. When the consequences of a risky decision become uncoupled from the reward, it can be no surprise when rational actors make riskier decisions. If you know that in five years you’re going to be basically immune to any hardship, you’re probably not going to prepare for the long term.
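The decoupling described above can be sketched numerically. Below is a toy expected-utility model, with every payoff and probability invented purely for illustration: once a backstop (a bailout, or a guaranteed windfall) absorbs the downside of a gamble, the risky option’s expected utility overtakes the safe one’s, and the "rational" choice flips.

```python
# A toy sketch of moral hazard via expected utility.
# All numbers below are made up for illustration; nothing here is a
# model of any real economic decision.

def expected_utility(p_bad: float, payoff_good: float, payoff_bad: float) -> float:
    """Expected utility of a gamble with probability p_bad of the bad outcome."""
    return (1 - p_bad) * payoff_good + p_bad * payoff_bad

# Safe choice: a modest, certain payoff.
safe = expected_utility(p_bad=0.0, payoff_good=10, payoff_bad=0)

# Risky choice with consequences intact: big upside, equally big downside.
risky_exposed = expected_utility(p_bad=0.5, payoff_good=30, payoff_bad=-30)

# The same gamble, but a backstop caps the downside at zero.
risky_backstopped = expected_utility(p_bad=0.5, payoff_good=30, payoff_bad=0)

# With the downside intact, the safe choice wins; with the backstop, it loses.
assert risky_exposed < safe < risky_backstopped
```

The numbers are arbitrary, but the shape of the result is the point: nothing about the gamble itself changed, only who bears the loss.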

Now let’s take a different example. Suppose you’re rushed to the hospital after a heart attack, and diagnosed with a heart condition. The condition is minor for now, but could get worse without treatment, and will get worse as you age regardless.

The bad news is, in order to avoid having more heart attacks, and possible secondary circulatory and organ problems, you’re going to need to follow a very strict regimen, including a draconian diet, a daily exercise routine, and a series of regular injections and blood tests.

The good news, your doctor informs you, is that the scientists, who have been tucked away in their labs and getting millions in yearly funding, are closing in on a cure. In fact, there’s already a new drug that’s worked really well in mice. A researcher giving a talk at a major conference recently showed a slide of a timeline that estimated FDA approval in no more than five years. Once you’re cured, assuming everything works as advertised, you won’t have to go through the laborious process of treatment.

The cure drug won’t help if you die of a heart attack before then, and it won’t fix any problems with your other organs if your heart gets bad enough that it can’t supply them with blood, but otherwise it will be a complete cure, as though you were never diagnosed in the first place. The nurse discharging you tells you that most organ failure doesn’t appear until patients have had the disease for at least a decade, so as long as you can avoid dying for half that long, you’ll be fine.

So, how are you going to treat this new chronic and life threatening disease? Maybe you will be the diligent, model patient, always deferring to the most conservative and risk averse in the medical literature, certainly hopeful for a cure, but not willing to bet your life on a grad student’s hypothesis. Or maybe, knowing nothing else on the subject, you will trust what your doctor told you, and your first impression of the disease, getting by with as little invasive treatment as you can get away with, doing just enough to avoid dying and being called out by your medical team for being “noncompliant” (referred to in chronic illness circles in hushed tones as “the n-word”).

If the cure does come in five years, as happens only in stories and fantasies, then either way, you’ll be set. The second version of you might be a bit happier from having more fully sucked the marrow out of life. It’s also possible that the second version would have also had to endure another (probably non-fatal) heart attack or two, and dealt with more day to day symptoms like fatigue, pains, and poor circulation. But you never would have really lost anything for being the n-word.

On the other hand, if by the time five years have elapsed, the drug hasn’t gotten approval, or quite possibly, hasn’t gotten close after the researchers discovered that curing a disease in mice didn’t also solve it in humans, then the difference between the two versions of you is going to start to compound. It may not even be noticeable after five years. But after ten, twenty, thirty years, the second version of you is going to be worse for wear. You might not be dead. But there’s a much higher chance you’re going to have had several more heart attacks, and possibly other problems as well.
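To see how quickly small differences compound, here is a back-of-the-envelope sketch. Both annual risk figures are invented for illustration (say, a 2% chance of a heart attack per year for the diligent patient and 6% for the relaxed one); the chance of getting through N years without an event is then simply (1 − rate) raised to the Nth power.

```python
# Back-of-the-envelope compounding of a small annual risk difference.
# The 2% and 6% annual rates are invented for illustration only; they
# are not real clinical figures.

def event_free_probability(annual_risk: float, years: int) -> float:
    """Probability of going the given number of years without an event,
    assuming an independent, constant risk each year."""
    return (1 - annual_risk) ** years

for years in (5, 10, 20, 30):
    diligent = event_free_probability(0.02, years)
    relaxed = event_free_probability(0.06, years)
    print(f"{years:2d} years: diligent {diligent:.0%} event-free vs relaxed {relaxed:.0%}")
```

Under these made-up numbers, the gap after five years is modest, but by thirty years the diligent patient is still more likely than not to be event-free while the relaxed patient almost certainly is not, which is exactly the "noticeable only later" pattern described above.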

This is a case of moral hazard, plain and simple, and it does appear in the attitudes of patients with chronic conditions that require constant treatment. The fact that, in this case, the perception of a lack of risk and consequences is a complete fantasy is not relevant. All risk analyses depend on the information that is given and available, not on whatever the actual facts may be. We know that the patient’s decision is ultimately misguided because we know the information they are being given is false, or at least, misleading, and because our detached perspective allows us to take a dispassionate view of the situation.

The patient does not have this information or perspective. In all probability, they are starting out scared and confused, and want nothing more than to return to their previous normal life with as few interruptions as possible. The information and advice they were given, from a medical team that they trust, and possibly have no practical way of fact checking, has led them to believe that they do not particularly need to be strict about their new regimen, because there will not be time for long term consequences to catch up.

The medical team may earnestly believe this. It is the same problem one level up; the only difference is, their information comes from pharmaceutical manufacturers, who have a marketing interest in keeping patients and doctors optimistic about upcoming products, and researchers, who may be unfamiliar with the hurdles in getting a breakthrough from the early lab discoveries to a consumer-available product, and whose funding is dependent on drumming up public support through hype.

The patient is also complicit in this system that lies to them. Nobody wants to be told that their condition is incurable, and that they will be chronically sick until they die. No one wants to hear that their new diagnosis will either cause them to die early, or live long enough for their organs to fail, because even by adhering to the most rigid medical plan, the tools available simply cannot completely mimic the human body’s natural functions. Indeed, it can be argued that telling a patient they will still suffer long term complications, whether in ten, twenty, or thirty years, almost regardless of their actions today, will have much the same effect as telling them that they will be healthy regardless.

Given the choice between two extremes, optimism is obviously the better policy. But this policy does have a tradeoff. It creates a moral hazard of hope. Ideally, we would be able to convey an optimistic perspective that also maintains an accurate view of the medical prognosis, and balances the need for bedside manner with incentivizing patients to take the best possible care of themselves. Obviously this is not an easy balance to strike, and the balance will vary from patient to patient. The happy-go-lucky might need to be brought down a peg or two with a reality check, while the nihilistic might need a spoonful of sugar to help the medicine go down. Finding this middle ground is not a task to be accomplished by a practitioner at a single visit, but a process to be achieved over the entire course of treatment, ideally with a diverse and well experienced team including mental health specialists.

In an effort to finish on a positive note, I will point out that this is already happening, or at least, is already starting to happen. As interdisciplinary medicine gains traction, patient mental health becomes more of a focus, and as patients with chronic conditions begin to live longer, more hospitals and practices are working harder to ensure that a positive and constructive mindset for self care is a priority, alongside educating patients on the actual logistics of self-care. Support is easier to find than ever, especially with organized patient conferences and events. This problem, much like the conditions that cause it, is chronic, but manageable with effort.

 

Reflections on Contentedness

Contentedness is an underrated emotion. True, it doesn’t have the same electricity as joy, or the righteousness of anger. But it has the capability to be every bit as sublime. As an added bonus, contentedness seems to lean towards a more measured, reflective action as a result, rather than the rash impulsiveness of the ecstatic excitement of unadulterated joy, or the burning rage of properly kindled anger.

One of the most valuable lessons I have learned in the past decade has been how to appreciate being merely content instead of requiring utter and complete bliss. It is enough to sit in the park on a nice and sunny day, without having to frolic and chase the specter of absolute happiness. Because in truth, happiness is seldom something that can be chased.

Of course, contentedness also has its more vicious form if left unmoderated. Just as anger can beget wrath, and joy beget gluttony, greed, and lust, too much contentedness can bring about a state of sloth, or perhaps better put, complacency. Avoiding complacency has been a topic on my mind a great deal of late, as I have suddenly found myself with free time and energy, and wish to avoid squandering it as much as possible.

This last week saw a few different events of note in my life, which I will quickly recount here:

I received the notification of the death of an acquaintance and comrade of mine. While not out of the blue, or even particularly surprising, it did nevertheless catch me off guard. This news shook me, and indeed, if this latest post seems to contain an excess of navel-gazing ponderance, without much actual insight to match, that is why. I do have more thoughts and words on the subject, but am waiting for permission from the family before posting anything further on the subject.

The annual (insofar as having something two years in a row makes an annual tradition) company barbecue hosted at our house by my father took place. Such events are inevitably stressful for me, as they require me to exert myself physically in preparation for houseguests, and then to be present and sociable. Nevertheless, the event went on without major incident, which I suppose is a victory.

After much consternation, I finally picked up my diploma and finalized transcript from the high school, marking an anticlimactic end to the more than half-decade long struggle with my local public school to get me what is mine by legal right. In the end, it wasn’t that the school ever shaped up, decided to start following the law, and started helping me. Instead, I learned how to learn and work around them.

I made a quip or two about how, now that I can no longer be blackmailed with grades, I could publish my tell-all book. In truth, such a book will probably have to wait until after I am accepted into higher education, given that I will still have to work with the school administration through the application process.

In that respect, very little is changed by the receipt of my diploma. There was no great ceremony, nor parade, nor party in my honor. I am assured that I could yet have all such things if I were so motivated, but it seems duplicitous to compel others to celebrate me and my specific struggle, outside of the normal milestones and ceremonies which I have failed to qualify for, under the pretense that it is part of that same framework. Moreover, I hesitate to celebrate at all. This is a bittersweet occasion, and a large part of me wants nothing more than for this period of my life to be forgotten as quickly as possible.

Of course, that is impossible, for a variety of reasons. And even if it were possible, I’m not totally convinced it would be the right choice. It is not that I feel strongly that my unnecessary adversity has made me more resilient, or has become an integral part of my identity. It has, but this is a silver lining at best. Rather, it is because as much as I wish to forget the pains of the past, I wish even more strongly to avoid such pains in future. It is therefore necessary that I remember what happened, and bear it constantly in mind.

The events of this week, and the scattershot mix of implications they have for me, make it impossible for me to be unreservedly happy. Even so, being able to sit on my favorite park bench, loosen my metaphorical belt, and enjoy the nice, if unmemorable, weather, secure in the knowledge that the largest concerns of recent memory and foreseeable future are firmly behind me, does lend itself to a sort of contentedness. Given the turmoil and anguish of the last few weeks of scrambling to get schoolwork done, this is certainly a step up.

In other news, my gallery page is now operational, albeit incomplete, as I have yet to go through the full album of photographs that were taken but not posted, nor have I had the time to properly copy the relevant pages from my sketchbook. The fictional story which I continue to write is close to being available. In fact, it is technically online while I continue to preemptively hunt down bugs; it just doesn’t have anything linking to it. This coming weekend is slated to be quite busy, with me going to a conference in Virginia, followed by the Turtles All the Way Down book release party in New York City.

Bretton Woods

So I realized earlier this week, while staring at the return address stamped on the sign outside the small post office on the lower level of the resort my grandfather selected for our family trip, that we were in fact staying in the same hotel which hosted the famous Bretton Woods Conference. That conference produced the Bretton Woods System, which governed post-WWII economic rebuilding around the world, laid the groundwork for our modern economic system, and helped to cement the idea of currency as we consider it today.

Needless to say, I find this intensely fascinating; both the conference itself as a gathering of some of the most powerful people at one of the major turning points in history, and the system that resulted from it. Since I can’t recall having spent any time on this subject in my high school economics course, I thought I would go over some of the highlights, along with pictures of the resort that I was able to snap.

Pictured: The Room Where It Happened

First, some background on the conference. The Bretton Woods conference took place in July of 1944, while the Second World War was still in full swing. The allied landings in Normandy, less than a month earlier, had been successful in establishing isolated beachheads, but Operation Overlord as a whole could still fail if British, Canadian, American, and Free French forces were prevented from linking up and liberating Paris.

On the Eastern European front, the Red Army had just begun Operation Bagration, the long planned grand offensive to push Nazi forces out of the Soviet Union entirely, and begin pushing offensively through occupied Eastern Europe and into Germany. Soviet victories would continue to rack up as the conference went on, as the Red Army executed the largest and most successful offensive in its history, escalating political concerns among the western allies about the role the Soviet Union and its newly “liberated” territory could play in a postwar world.

In the Pacific, the Battle of Saipan was winding down towards an American victory, radically changing the strategic situation by putting the Japanese homeland in range of American strategic bombing. Even as the battles raged on, more and more leaders on both sides looked increasingly to the possibility of an imminent allied victory.

As the specter of rebuilding a world ravaged by the most expensive and most devastating conflict in human history (and hopefully ever) began to seem closer, representatives of all nations in the allied powers met in a resort in Bretton Woods, New Hampshire, at the foot of Mount Washington, to discuss the economic future of a postwar world in the United Nations Monetary and Financial Conference, more commonly referred to as the Bretton Woods Conference. The site was chosen because, in addition to being vacant (since the war had effectively killed tourism), the isolation of the surrounding mountains made the site suitably defensible against any sort of attack. It was hoped that this show of hospitality and safety would assuage delegates coming from war torn and occupied parts of the world.

After being told that the hotel had only 200-odd rooms for a conference of 700-odd delegates, most delegates, naturally, decided to bring their families, in many cases bringing as many extended relatives as could be admitted on diplomatic credentials. Of course, this was probably as much about escaping the ongoing horrors in Europe and Asia as it was about getting a free resort vacation.

These were just the delegates. Now imagine adding families, attachés, and technical staff.

As such, every bed within a 22-mile radius was occupied. Staff were forced out of their quarters and relocated to the stable barns to make room for delegates. Even then, guests were sleeping in chairs, bathtubs, even on the floors of the conference rooms themselves.

The conference was attended by such illustrious figures as John Maynard Keynes (yes, that Keynes) and Harry Dexter White (who, in addition to being the lead American delegate, was also almost certainly a spy for the Soviet NKVD, the forerunner to the KGB), who clashed on what, fundamentally, should be the aim of the allies to establish in a postwar economic order.

Spoiler: That guy on the right is going to keep coming up.

Everyone agreed that protectionist, mercantilist, and “economic nationalist” policies of the interwar period had contributed both to the utter collapse of the Great Depression, and the collapse of European markets, which created the socioeconomic conditions for the rise of fascism. Everyone agreed that punitive reparations placed on Germany after WWI had set up European governments for a cascade of defaults and collapses when Germany inevitably failed to pay up, and turned to playing fast and loose with its currency and trade policies to adhere to the letter of the Treaty of Versailles.

It was also agreed that even if reparations were entirely done away with (which would leave Allied nations such as France and the British Commonwealth bankrupt for their noble efforts), the sheer upfront cost of rebuilding would be nigh impossible to meet by normal economic means, and that leaving the task of rebuilding entire continents to individual nations would inevitably lead to the same kind of zero-sum competition and unsound monetary policy that had led to the prewar economic collapse in the first place. It was decided, then, that the only way to ensure economic stability through the period of rebuilding was to enforce universal trade policies, and to institute a number of centralized financial organizations, under the purview of the United Nations, to oversee postwar rebuilding and monetary policy.

It was also, evidently, the beginning of the age of miniaturized flags.

The devil was in the details, however. The United States, having spent the war safe from serious damage to its economic infrastructure, serving as the “arsenal of democracy”, and generally being the only country with reserves of capital to spare, wanted to use its position of relative economic supremacy to gain permanent leverage. As the host of the conference and the de facto lead for the western Allies, the US held a great deal of negotiating power, and its delegates fully intended to use it to see that the new world order would be one friendly to American interests.

Moreover, the US, and to a lesser degree the United Kingdom, wanted to do as much as possible to prevent the Soviet Union from coming to dominate the world after it rebuilt itself. As World War II wound down, the Cold War was winding up. To this end, the news of daily Soviet advances, first pushing the Nazis out of Soviet borders and then steamrolling into Poland, Finland, and the Baltics, was troubling. Even more troubling were rumors of ruthless NKVD suppression of non-communist partisan groups that had resisted Nazi occupation in Eastern Europe, indicating that the Soviets might be looking to establish their own postwar hegemony.

Although something tells me this friendship isn't going to last
Pictured: The beginning of a remarkable friendship between US and USSR delegates

The first major set piece of the conference agreement was relatively uncontroversial: the International Bank for Reconstruction and Development, drafted by Keynes and his committee, was established to offer grants and loans to countries recovering from the war. As an independent institution, it was hoped that the IBRD would offer flexibility to rebuilding nations that loans from other governments with their own financial and political obligations and interests could not. This was also a precursor to, and later a backbone of, the Marshall Plan, in which the US would spend exorbitant amounts on foreign aid to rebuild capitalism in Europe and Asia in order to prevent the rise of communist movements fueled by lack of opportunity.

The second major set piece is where things get really complicated; I’m massively oversimplifying here, but global macroeconomic policy is inevitably complicated in places. This proposal, an “International Clearing Union” devised by Keynes back in 1941, was far more controversial.

The plan, as best I am able to understand it, called for all international trade to be handled through a single centralized institution, which would measure the value of all other goods and currencies relative to a standard unit, tentatively called a “bancor”. The ICU would then offer incentives to maintain trade balances relative to the size of a nation’s economy, by charging interest on countries with a major trade surplus, and using the excess to devalue the exchange rates of countries with trade deficits, making their imports more expensive and their products more desirable to overseas consumers.
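The incentive mechanism can be made concrete with a toy model. Everything below, from the country names to the interest and devaluation rates, is invented purely for illustration; it sketches only the general mechanism described above, not any figures from the actual ICU proposal:

```python
# Toy model of Keynes's proposed International Clearing Union (ICU).
# All names and parameters are illustrative inventions, not historical figures.
# Each country holds a trade balance denominated in "bancor"; the ICU charges
# interest on large surpluses and spends the proceeds devaluing the currencies
# of deficit countries, nudging both sides back towards balanced trade.

SURPLUS_INTEREST = 0.05   # interest charged on balances above the threshold
THRESHOLD = 100.0         # surplus/deficit (in bancor) the ICU tolerates
DEVALUATION_STEP = 0.01   # exchange-rate cut per unit of redistributed interest

def clearing_round(balances, rates):
    """One ICU settlement round over {country: bancor balance} and
    {country: exchange rate vs. bancor}. Returns updated copies."""
    balances, rates = dict(balances), dict(rates)
    # 1. Charge interest on excessive surpluses.
    pot = 0.0
    for country, balance in balances.items():
        if balance > THRESHOLD:
            levy = (balance - THRESHOLD) * SURPLUS_INTEREST
            balances[country] = balance - levy
            pot += levy
    # 2. Spend the pot devaluing deficit countries' currencies, making their
    #    exports cheaper abroad and their imports dearer at home.
    debtors = [c for c, b in balances.items() if b < -THRESHOLD]
    for country in debtors:
        rates[country] *= 1 - DEVALUATION_STEP * (pot / len(debtors))
    return balances, rates

balances = {"A": 300.0, "B": -250.0, "C": 20.0}
rates = {"A": 1.0, "B": 1.0, "C": 1.0}
balances, rates = clearing_round(balances, rates)
```

In this sketch, surplus country A is levied 10 bancor of interest, and that levy funds a 10% devaluation of deficit country B's currency; country C, trading roughly in balance, is left alone. That is the whole of Keynes's carrot and stick: persistent surpluses become expensive, and persistent deficits self-correct through the exchange rate.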

The Grand Ballroom was thrown into fierce debate, and the local Boy Scouts that had been conscripted to run microphones between delegates (most of the normal staff either having been drafted, or completely overloaded) struggled to keep up with these giants of economics and diplomacy.

Photo of the Grand Ballroom, slightly digitally adjusted to compensate for bad lighting during our tour

Unsurprisingly, the US delegate, White, was absolutely against Keynes’s harebrained scheme. Instead, he proposed a far less ambitious “International Monetary Fund”, which would judge trade balances and prescribe limits for nations seeking aid from the IMF or IBRD, but would otherwise generally avoid intervening. The IMF did keep Keynes’s idea of judging trade against a pre-set exchange rate (also obligatory for members), but avoided handing the IMF the power to unilaterally affect the value of individual currencies, instead leaving it in the hands of national governments and merely insisting on certain requirements for aid and membership. It also did away with notions of a supranational currency.

Of course, this raised the question of how to judge currency values other than against each other alone (which was still seen as a bridge too far in the eyes of many). The solution, proposed by White, was simple: judge other currencies against the US dollar. After all, the United States was already the largest and most developed economy. And since other countries had spent the duration of the war buying materiel from the US, it also held the world’s largest reserves of almost every currency, as well as of gold, silver, and sovereign debt. The US was the only country to come out of WWII with enough gold in reserve to stay on the gold standard and still finance postwar rebuilding, which made its dollar the perfect candidate for a default currency.

US, Canadian, and Soviet delegates discuss the merits of Free Trade

Now, you can see this move either as a sensible compromise for a world of countries that couldn’t have gone back to their old ways if they tried, or as a master stroke attempt by the US government to cement its supremacy at the beginning of the Cold War. Either way, it worked as a solution, both in the short term, and in the long term, creating a perfect balance of stability and flexibility in monetary policy for a postwar economic boom, not just in the US, but throughout the capitalist world.

The third set piece was a proposed “International Trade Organization”, which was to oversee implementation and enforcement of the sort of universal free trade policies that almost everyone agreed would be most conducive not only to prosperity, but to peace as a whole. Perhaps surprisingly, this wasn’t terribly divisive at the conference.

The final agreement for the ITO, however, was eventually shot down when the US Senate refused to ratify its charter, partly because the final negotiating conference, held in Havana, had incorporated many of Keynes’s earlier ideas on an International Clearing Union. Many of the ITO’s basic policies, however, influenced the successful General Agreement on Tariffs and Trade, which would later be replaced by the World Trade Organization.

Pictured: The main hallway as seen from the Grand Ballroom. Notice the moose on the right, above the fireplace.

The Bretton Woods agreement was signed by the allied delegates in the resort’s Gold Room. Not all countries that signed immediately ratified. The Soviet Union, perhaps unsurprisingly, reversed its position on the agreement, calling the new international organizations “a branch of Wall Street”, going on to found the Council for Mutual Economic Assistance, a forerunner to the Warsaw Pact, within five years. The British Empire, particularly its overseas possessions, also took time in ratifying, owing to the longstanding colonial trade policies that had to be dismantled in order for free trade requirements to be met.

The consensus among most economists is that Bretton Woods was a success. The system more or less ceased to exist when Nixon, prompted by Cold War drains on US resources and French schemes to exchange all of France’s reserve US dollars for gold, suspended the gold standard for the US dollar, effectively ushering in the age of free-floating fiat currencies; that is, money that has value because we all collectively accept that it does, an assumption that underlies most of our modern economic thinking.

There’s a plaque on the door to the room in which the agreement was signed. I’m sure there’s something metaphorical in there.

While it certainly didn’t last forever, the Bretton Woods system did accomplish its primary goal of setting the groundwork for a stable world economy, capable of rebuilding and maintaining the peace. This is a pretty lofty achievement when one considers the background against which the conference took place, the vast differences between the players, and the general uncertainty about the future.

The vision set forth in the Bretton Woods Conference was an incredibly optimistic, even idealistic, one. It’s easy to scoff at the idea of hammering out an entire global economic system, in less than a month, at a backwoods hotel in the White Mountains, but I think it speaks to the intense optimism and hope for the future that is often left out of the narrative of those dark moments. The belief that we can, out of chaos and despair, forge a brighter future not just for ourselves, but for all, is not in itself crazy, and the relative success of the Bretton Woods System, flawed though it certainly was, speaks to that.

A beautiful picture of Mt. Washington at sunset from the hotel’s lounge

Works Consulted

IMF. “60th Anniversary of Bretton Woods.” 60th Anniversary – Background Information, what is the Bretton Woods Conference. International Monetary Fund, n.d. Web. 10 Aug. 2017. <http://external.worldbankimflib.org/Bwf/whatisbw.htm>.

“Cooperation and Reconstruction (1944-71).” About the IMF: History. International Monetary Fund, n.d. Web. 10 Aug. 2017. <http://www.imf.org/external/about/histcoop.htm>

Extra Credits. YouTube playlist, n.d. Web. 10 Aug. 2017. <http://www.youtube.com/playlist?list=PLhyKYa0YJ_5CL-krstYn532QY1Ayo27s1>.

Burant, Stephen R. East Germany, a country study. Washington, D.C.: The Division, 1988. Library of Congress. Web. 10 Aug. 2017. <https://archive.org/details/eastgermanycount00bura_0>.

US Department of State. “Proceedings and Documents of the United Nations Monetary and Financial Conference, Bretton Woods, New Hampshire, July 1-22, 1944.” Proceedings and Documents of the United Nations Monetary and Financial Conference, Bretton Woods, New Hampshire, July 1-22, 1944 – FRASER – St. Louis Fed. N.p., n.d. Web. 10 Aug. 2017. <https://fraser.stlouisfed.org/title/430>.

Additional information provided by resort staff and exhibitions visited in person.

Incremental Progress Part 4 – Towards the Shining Future

I have spent the last three parts of this series bemoaning various aspects of the cycle of medical progress for patients enduring chronic health issues. At this point, I feel it is only fair that I highlight some of the brighter spots.

I have long since come to accept that human progress is, with the exception of the occasional major breakthrough, incremental in nature: a reorganization here paves the way for a streamlining there, which unlocks the capacity for a minor tweak somewhere else, and so on and so forth. While this does help adjust one’s day-to-day expectations from what is shown in popular media to something more realistic, it also risks minimizing the progress that is made over time.

To return to an example from part 2 that everyone should be familiar with, consider the progress being made on cancer. Here is a chart detailing, over a ten-year period, the rate of FDA approvals for new treatments alongside the overall average 5-year survival rate. The approval rate is a decent, if oversimplified, metric for understanding how a given patient’s options have increased, and hence how specific and targeted their treatment can be (which has the capacity to minimize disruption to quality of life).

Does this progress mean that cancer is cured? No, not even close. Is it close to being cured? Not particularly.

It’s important to note that even as these numbers tick up, we’re not intrinsically closer to a “cure”. Coronaviruses, which are among the viruses that cause the common cold, have a mortality rate pretty darn close to zero, at least in the developed world, and that number gets even closer to zero if we ignore “novel” coronaviruses like SARS and MERS, and focus only on the rare person who has died as a direct result of the common cold. Yet I don’t think anyone would call the common cold cured. Coronaviruses, like cancer, aren’t cured, and there’s a reasonable suspicion on the part of many that they aren’t really curable in the sense that we’d like.

“Wait,” I hear you thinking, “I thought you were going to talk about bright spots.” Well, yes: while it’s true that progress on a full cure is inconclusive at best, material progress is still being made every day, for both colds and cancer. While neither is at present curable, both are increasingly treatable, and this is where the real progress is happening. Better treatment, not cures, is where all the media buzz comes from, and why I can attend a conference about my disease year after year, hear all the horror stories of my comrades, and still walk away feeling optimistic about the future.

So, what am I optimistic about this time around, even when I know that progress is so slow coming? Well, for starters, there’s life expectancy. I’ve mentioned a few different times here that my projected lifespan is significantly shorter than the statistical average for someone of my lifestyle, medical issues excluded. While this is still true, this is becoming less true. The technology which is used for my life support is finally reaching a level of precision, in both measurement and dosing, where it can be said to genuinely mimic natural bodily functions instead of merely being an indefinite stopgap.

To take a specific example, new infusion mechanisms now allow dosing precision down to the ten-thousandth of a milliliter. For reference, the average raindrop is between 0.5 and 4 millimeters in diameter, which works out to volumes on the order of thousandths of a milliliter. Given that a single thousandth of a milliliter in either direction at the wrong time can be the difference between being a productive member of society and being dead, this is a welcome improvement.
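To put those two figures side by side, here is a quick back-of-the-envelope calculation. The 0.0001 mL dosing step is the one quoted above; the 2 mm raindrop diameter is an assumed mid-range value, since raindrops span very roughly 0.5 to 4 mm:

```python
import math

# How many minimum-dose increments fit inside a single raindrop?
# The 0.0001 mL dosing step is from the text; the 2 mm raindrop diameter
# is an assumed mid-range figure, used here purely for illustration.

DOSE_STEP_ML = 0.0001                       # ten-thousandth of a milliliter
raindrop_diameter_mm = 2.0                  # assumed mid-size raindrop
radius_cm = raindrop_diameter_mm / 2 / 10   # mm -> cm
raindrop_ml = (4 / 3) * math.pi * radius_cm ** 3  # sphere volume; 1 cm^3 == 1 mL

steps_per_drop = raindrop_ml / DOSE_STEP_ML
print(round(raindrop_ml, 5), round(steps_per_drop))  # ~0.00419 mL, ~42 steps
```

In other words, even a single mid-size raindrop holds roughly forty of the smallest deliverable doses, which is the kind of margin that makes "mimicking natural bodily function" a defensible phrase.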

Such improvements in delivery mechanisms have also enabled innovation on the drugs themselves, by making more targeted treatments with a smaller window for error accessible to a wider audience, which in turn makes them more commercially viable. Better drugs and dosing have likewise raised the bar for infusion cannulas, and at the conference, a new round of cannulas was already being hyped as the next big breakthrough to hit the market imminently.

In the last part I mentioned, though did not elaborate at length on, the appearance of AI-controlled artificial organs being built using DIY processes. These systems now exist, not only in laboratories, but in homes, offices, and schools, quietly taking in more data than the human mind can process, and making decisions with a level of precision and speed that humans cannot dream of achieving. We are equipping humans as cyborgs with fully autonomous robotic parts to take over functions they have lost to disease. If this does not excite you as a sure sign of the brave new future that awaits all of us, then frankly I am not sure what I can say to impress you.

Like other improvements explored here, this development isn’t so much a breakthrough as it is a culmination. After all, all of the included hardware in these systems has existed for decades. The computer algorithms are not particularly different from the calculations made daily by humans, except that they contain slightly more data and slightly fewer heuristic guesses, and can execute commands faster and more precisely than humans. The algorithms are simple enough that they can be run on a cell phone, and have an effectiveness on par with any other system in existence.

These DIY initiatives have already caused shockwaves throughout the medical device industry, for both the companies themselves, and the regulators that were previously taking their sweet time in approving new technologies, acting as a catalyst for a renewed push for commercial innovation. But deeper than this, a far greater change is also taking root: a revolution not so much in technology or application, but in thought.

If my memory and math are on point, this has been the eighth year since I started attending this particular conference, out of ten years dealing with the particular disease that is its topic, among other diagnoses. While neither of these stretches is long enough to have proper capital-H Historical context, in the span of a single lifetime, especially for a relatively young person such as myself, I do believe that ten or even eight years is long enough to reflect upon in earnest.

Since I started attending this conference, but especially within the past three years, I have witnessed, and been the subject of, a shift in tone and demeanor. When I first arrived, the tone at this conference seemed to be, as one might expect, primarily one of commiseration. Yes, there was solidarity, and all the positive emotion that comes from being with people like oneself, but this was, at best, a bittersweet feeling. People were glad to have met each other, but nevertheless resentful to have been put in the unenviable circumstances that dictated their meeting.

More recently, however, I have seen and felt more and more an optimism accompanying these meetings. Perhaps it is the consistently record-breaking attendance that demonstrates, if nothing else, that we stand united against the common threat to our lives, and against the political and corporate forces that would seek to hold back our progress towards being normal, fully functioning humans. Perhaps it is merely the promise of free trade show goodies and meals catered to a medically restricted diet. But I think it is something different.

While a full cure, of the sort that would allow me and my comrades to leave the life support at home, serve in the military, and the like, is still far off, today more than ever before, the future looks, if not bright, then at least survivable.

In other areas of research, one of the main genetic research efforts, which has maintained a presence at the conference, is now closing in on the genetic and environmental triggers that cause the elusive autoimmune reaction which has been known to cause the disease, and on various methods to prevent and reverse it. Serious talk of future gene therapies, the kind of science fiction that has traditionally been the stuff of comic books and film, is already ongoing. It is a strange and exciting thing to finish an episode of a science-fiction drama television series focused on near-future medical technology (and how evil minds exploit it) in my hotel room, only to walk into the conference room to see posters advertising clinical trial sign-ups and planned product releases.

It is difficult to be so optimistic in the face of incurable illness. It is even more difficult to remain optimistic after many years of only incremental progress. But pessimism too has its price. It is not the same emotional toll as the disappointment which naive expectations of an imminent cure are apt to bring; rather it is an opportunity cost. It is the cost of missing out on adventures, on missing major life milestones, on being conservative rather than opportunistic.

Much of this pessimism, especially in the past, has been inspired and cultivated by doctors themselves. In a way, this makes sense. No doctor in their right mind is going to say “Yes, you should definitely take your savings and go on that cliff diving excursion in New Zealand.” Medicine is, by its very nature, conservative and risk averse. Much like the scientist, a doctor will avoid saying anything until after it has been tested and proven beyond a shadow of a doubt. As noted previously, this is extremely effective in achieving specific, consistent, and above all, safe, treatment results. But what about when the situation being treated is so all-encompassing in a patient’s life so as to render specificity and consistency impossible?

Historically, the answer has been to impose restrictions on patients’ lifestyles. If laboratory conditions don’t align with real life for patients, then we’ll simply change the patients. This approach can work, at least for a while. But patients are people, and people are messy. Moreover, when patients include children and adolescents, who, for better or worse, are generally inclined to pursue short term comfort over vague notions of future health, patients will rebel. Thus, eventually, trading ten years at the end of one’s life for the ability to live the remainder more comfortably seems like a more balanced proposition.

This concept of such a tradeoff is inevitably controversial. I personally take no particular position on it, other than that it is a true tragedy of the highest proportion that anyone should be forced into such a situation. With that firmly stated, many of the recent breakthroughs, particularly in new delivery mechanisms and patient comfort, and especially in the rapidly growing DIY movement, have focused on this tradeoff. The thinking has shifted from a “top-down” approach of finding a full cure, to a more grassroots approach of making life more livable now, and making inroads into future scientific progress at a later date. It is no surprise that many of the groups dominating this new push have either been grassroots nonprofits or, where they have been commercial, primarily Silicon Valley-style, engineer-founded startups.

This in itself is already a fairly appreciable and innovative thesis on modern progress, yet one I think has been tossed around enough to be reasonably defensible. But I will go a step further. I submit that much of the optimism and positivity; the empowerment and liberation which has been the consistent takeaway of myself and other authors from this and similar conferences, and which I believe has become more intensely palpable in recent years than when I began attending, has been the result of this same shift in thinking.

Instead of competing against each other and shaming each other over inevitable bad blood test results, as was my primary complaint during conferences past, the new spirit is one of camaraderie and solidarity. It is now increasingly understood at such gatherings, and among medical professionals in general, that fear and shame tactics are not effective in the long run, and do nothing to mitigate the damage of patients deciding that survival at the cost of living simply isn’t worth it [1]. Thus the focus has shifted from commiseration over common setbacks, to collaboration and celebration over common victories.

Thus it will be seen that the feeling of progress, and hence, of hope for the future, seems to lie not so much in renewed pushes, but in more targeted treatments, and better quality of life. Long term patients such as myself have largely given up hope in the vague, messianic cure, to be discovered all at once at some undetermined future date. Instead, our hope for a better future; indeed, for a future at all; exists in the incremental, but critically, consistent, improvement upon the technologies which we are already using, and which have already been proven. Our hope lies in understanding that bad days and failures will inevitably come, and in supporting, not shaming, each other when they do.

While this may not qualify for being strictly optimistic, as it does entail a certain degree of pragmatic fatalism in accepting the realities of disabled life, it is the closest I have yet come to optimism. It is a determination that even if things will not be good, they will at least be better. This mindset, unlike rooting for a cure, does not require constant fanatical dedication to fundraising, nor does it breed innovation fatigue from watching the scientific media like a hawk, because it prioritizes the imminent, material, incremental progress of today over the faraway promises of tomorrow.


[1] Footnote: I credit the proximal cause of this cognitive shift in the conference to the progressive aging of the attendee population, and more broadly, to the aging and expanding afflicted population. As more people find themselves in the situation of a “tradeoff” as described above, the focus of care inevitably shifts from disciplinarian deterrence and prevention to one of harm reduction. This is especially true of those coming into the 13-25 demographic, who seem most likely to undertake such acts of “rebellion”. This is, perhaps unsurprisingly, one of the fastest growing demographics for attendance at this particular conference over the last several years, as patients who began attending in childhood come of age.

A Hodgepodge Post

This post is a bit of a hodgepodge hot mess, because after three days of intense writers’ block, I realized at 10:00pm that there were a number of things I really did need to address today, and that being timely in this case was more important than being perfectly organized in presentation.

First, Happy Esther Day. For those not well versed on internet age holidays, Esther Day, August 3rd, so chosen by the late Esther Earl (who one may know as the dedicatee of and partial inspiration for the book The Fault In Our Stars), is a day on which to recognize all the people one loves in a non-romantic way. This includes family, but also friends, teachers, mentors, doctors, and the like; basically it is a day to recognize all important relationships not covered by Valentine’s Day.

I certainly have my work cut out for me, given that I have received a great deal of love and compassion throughout my life, and especially during my darker hours. In fact, it would not be an exaggeration to say that on several occasions, I would not have survived but for the love of those around me.

Of course, it’s been oft noted that, particularly in our Western culture, this holiday creates all manner of awkward moments, especially where it involves gender. A man is expected not to talk at great length about his feelings in general, and telling someone of the opposite gender that one loves them either creates all sorts of unhelpful ambiguity from a romantic perspective or, if clarified, opens up a whole can of worms involving relationship stereotypes that no one, least of all a socially awkward writer like myself, wants to touch with a thirty-nine-and-a-half-foot pole. So I won’t.

I do still want to participate in Esther Day, as uncomfortable as the execution makes me, because I believe in its message, and I believe in the legacy that Esther Earl left us. So, to the people who read this blog and participate in it by enjoying it, especially those who have gotten in touch specifically to say so; to those of you whom I have had the pleasure of meeting in person, and to those I have never met except by proxy: I love you. You are an important part of my life, and the value you (hopefully) get from being here adds value to my life.

In tangentially related news…

Earlier this week this blog passed an important milestone: We witnessed the first crisis that required me to summon technical support. I had known that this day would eventually come, though I did not expect it so soon, nor to happen the way it did.

The proximal cause of this minor disaster was apparently a fault in an outdated third-party plugin I had foolishly installed and activated some six weeks ago, because it promised to enable certain features which would have made the rollout of a few of my ongoing projects for this place easier and cleaner. In my defense, the reviews prior to 2012, when the code author apparently abandoned the plugin, were all positive, and the ones after were scarce enough that I reckoned the chances of such a problem occurring to me were acceptably low.

Also, for the record, when I cautiously activated the plugin some six weeks ago, during a time of day when visitors are relatively few and far between, it did seem to work fine. Indeed, it worked perfectly fine right up until Monday, when it suddenly didn’t. Exactly what caused the crash to happen precisely then and not earlier (or never) wasn’t explained to me, presumably because the answer involves a far more in-depth understanding of the inner workings of the internet than I am able to parse at this time.

The distal cause of this whole affair is that, with computers as with many aspects of my life, I am just savvy enough to get myself into trouble, without the education or training to get myself out of it. This is a recurring theme in my life, to the point where it has become a default comment by teachers on my report cards. Unfortunately, being aware of this phenomenon does little to help me avoid it. Which is to say, I expect that similar server problems are probably also in my future, at least until such time as I actually get around to taking courses in coding, or find a way to hire someone to write code for me.

On the subject of milestones and absurdly optimistic plans: after much waffling back and forth, culminating in an outright dare from my close friends, I have launched an official Patreon page for this blog. Patreon, for those not well acquainted with the evolving economics of online content creation, is a service which allows creators (such as myself) to accept monthly contributions from supporters. I have added a new page to the sidebar explaining this in more detail.

I do not expect that I shall make a living off this. In point of fact, I will be pleasantly surprised if the site hosting pays for itself. I am mostly setting this up now so that it exists in the future on the off chance that some future post of mine is mentioned somewhere prominent, attracting overnight popularity. Also, I like having a claim, however tenuous, to being a professional writer like Shakespeare or Machiavelli.

Neither of these announcements changes anything substantial on this website. Everything will continue to be published on the same (non-)schedule, and will continue to be publicly accessible as before. Think of the Patreon page like a tip jar; if you like my stuff and want to indulge me, you can, but you’re under no obligation.

There is one thing that will be changing soon. I intend to begin publishing some of my fictional works in addition to my regular nonfiction commentary. Similar to the mindset behind my writing blog posts in the first place, this is partially at the behest of those close to me, and partially out of a Pascal’s Wager type of logic: even if only one person enjoys what I publish, and there is no real downside to publishing, that in itself makes the utilitarian calculation worth it.

Though I don’t have a planned release date or schedule for this venture, I want to put it out as something I’m planning to move forward with, both in order to nail my colors to the mast to motivate myself, and also to help contextualize the Patreon launch.

The first fictional venture will be a serial story, which is the kind of venture that having a Patreon page already set up is useful for, since serial stories can be discovered partway through and gain mass support overnight more so than blogs usually do. Again, I don’t expect fame and fortune to follow my first venture into serial fiction. But I am willing to leave the door open for them going forward.

Incremental Progress Part 1 – Fundraising Burnout

Today we’re trying something a little bit different. The conference I recently attended has given me lots of ideas along similar lines for things to write about, mostly centered around the notion of medical progress, which incidentally seems to have become a recurring theme on this blog. Based on several conversations I had at the conference, I know that this topic is important to a lot of people, and I have been told that I would be a good person to write about it.

Rather than waiting several weeks in order to finish one super-long post, and probably forgetting half of what I intended to write, I am planning to divide this topic into several sections. I don’t know whether this approach will prove better or worse, but after receiving much positive feedback on my writing in general and this blog specifically, it is something I am willing to try. It is my intention that these will be posted sequentially, though I reserve the right to mix that up if something pertinent crops up, or if I get sick of writing about the same topic. So, here goes.


“I’m feeling fundraising burnout,” announced one of the boys in our group, leaning into the rough circle that our chairs had been drawn into in the center of the conference room. “I’m tired of raising money and advocating for a cure that just isn’t coming. It’s been just around the corner since I was diagnosed, and it isn’t any closer.”

The nominal topic of our session, reserved for those aged 18-21 at the conference, was “Adulting 101”, though this was as much a placeholder name as anything. We were told that we were free to talk about anything that we felt needed to be said, and in practice this anarchy led mostly to a prolonged ritual of denouncing parents, teachers, doctors, insurance, employers, lawyers, law enforcement, bureaucrats, younger siblings, older siblings, friends both former and current, and anyone else who wasn’t represented in the room. The psychologist attached to the 18-21 group tried to steer the discussion towards the traditional topics: hopes, fears, and avoiding the ever-looming specter of burnout.

For those unfamiliar with chronic diseases, burnout is pretty much exactly what it sounds like. When someone experiences burnout, their morale is broken. They can no longer muster the will to fight; to keep to the strict routines and discipline that are required to stay alive despite medical issues. Without a strong support system to fall back on while recovering, this can have immediate and deadly consequences, although in most cases the effects are not seen until several years later, when organs and nervous tissue begin to fail prematurely.

Burnout isn’t the same thing as surrendering. Surrender happens all at once, whereas burnout can occur over months or even years. People with burnout don’t necessarily have to be suicidal or even of a mind towards self harm, even if they are cognizant of the consequences of their choices. Burnout is not the commander striking their colors, but the soldiers themselves gradually refusing to follow tough orders, possibly refusing to obey at all. Like the gradual loss of morale and organization by units in combat, burnout is considered in many respects to be inevitable to some degree or another.

Because of the stigma attached to medical complications, burnout is always a topic of discussion at large gatherings, though often not one that people are apt to openly admit to. Fundraising burnout, on the other hand, proved fertile ground for an interesting discussion.

The popular conception of disabled or medically afflicted people, especially young people, as being human bastions of charity and compassion, has come under a great deal of critique recently (see The Fault in Our Stars, Speechless, et al). Despite this, it remains a popular trope.

For my part, I am ambivalent. There are definitely worse stereotypes than being too humanitarian, and, for what it is worth, there does seem to be some correlation between medical affliction and medical fundraising. I am inclined to believe, though, that attributing this correlation to an inherent or acquired surplus of human spirit in afflicted persons is a case of reverse causality. That is to say, disabled people aren’t more inclined to focus on charity; rather, charity is more inclined to focus on them.

Indeed, for many people, myself included, ostensibly charitable acts are often taken with selfish aims. Yes, there are plenty of incidental benefits to curing a disease, any disease, that happens to affect millions in addition to oneself. But mainly it is about erasing the pains which one feels on a daily basis.

Moreover, the fact that such charitable organizations will continue to advance progress largely regardless of the individual contributions of one or two afflicted persons, combined with the popular stereotype that disabled people ought naturally to support the charities that claim to represent them, has created, according to the consensus of our group, at least, a feeling of profound guilt among those who fail to make a meaningful contribution. Given the scale on which these charities and research organizations operate, a “meaningful” contribution generally translates to an annual donation of tens or even hundreds of thousands of dollars, plus several hours of public appearances, constant queries to political representatives, and steadfast mental and spiritual commitment. Those who fail to contribute on this scale are left with immense guilt for benefiting from research they did not meaningfully support. Paradoxically, these feelings are more likely to appear after a small contribution than after no contribution at all, because, after all, out of sight, out of mind.

“At least from a research point of view, it does make a difference,” interjected a second boy, a student working as a lab technician in one of the research centers in question. “If we’re in the lab, and testing ten samples for a reaction, that extra two hundred dollars can mean an extra eleventh sample gets tested.”

“Then why don’t we get told that?” the first boy countered. “If I knew my money was going to buy an extra Petri dish in a lab, I might be more motivated than just throwing my money towards a cure that never gets any closer.”

The student threw up his hands in resignation. “Because scientists suck at marketing.”

“It’s to try and appeal to the masses,” someone else added, the cynicism in his tone palpable. “Most people are dumb and won’t understand what that means. They get motivated by ‘finding the cure’, not paying for toilet paper in some lab.”

Everyone in that room admitted that they had felt some degree of guilt over not fundraising more, myself included. This seemed to remain true regardless of whether the person in question was themselves disabled or merely related to one who was, or how much they had done for ‘the cause’ in recent memory. The fact that charity marketing did so much to emphasize how even minor contributions were relevant to saving lives only increased these feelings. The terms “survivor’s guilt” and “post-traumatic stress disorder” got tossed around a lot.

The consensus was that rather than act as a catalyst for further action, these feelings were more likely to lead to a sense of hopelessness in the future, which is amplified by the continuously disappointing news on the research front. Progress continues, certainly, and this important point of order was brought up repeatedly; but never a cure. Despite walking, cycling, fundraising, hoping, and praying for a cure, none has materialized, and none seem particularly closer than a decade ago.

This sense of hopelessness has led, naturally, to disengagement and resentment, which in turn leads to a disinclination to continue fundraising efforts. After all, if there’s not going to be visible progress either way, why waste the time and money? This is, of course, a self-fulfilling prophecy, since less money and engagement leads to less research, which means less progress, and so forth. Furthermore, if patients themselves, who are seen, rightly or wrongly, as the public face of, and therefore most important advocate of, said organizations, seem to be disinterested, what motivation is there for those with no direct connection to the disease to care? Why should wealthy donors allocate large but still limited donations to a charity that no one seems interested in? Why should politicians bother keeping up research funding, or worse, funding for the medical care itself?

Despite our having just discussed the dangers of fundraising burnout at length, I have yet to find a decent resolution for it. The psychologist on hand raised the possibility of non-financial contributions, such as volunteering and engaging in clinical trials, or bypassing charity research and its false advertising entirely, and contributing to more direct initiatives to improve quality of life, such as support groups, patient advocacy, and the like. Although these were decent ideas on paper, none really caught the imagination of the group. The benefit which is created from being present and offering solidarity during support sessions, while certainly real, isn’t quite as tangible as donating a certain number of thousands of dollars to charity, nor is it as publicly valued and socially rewarded.

It seems that fundraising, and the psychological complexities that come with it, are an inevitable part of how research, and hence progress, happens in our society. This is unfortunate, because it adds an additional stressor to patients, who may feel as though the future of the world, in addition to their own future, is resting on their ability to part others from their money. This obsession, even if it does produce short term results, cannot be healthy, and the consensus seems to be that it isn’t. However, this seems to be part of the price of progress nowadays.

This is the first part of a multi-part commentary on the patient perspective (specifically, my perspective) on the fundraising and research cycle, and on how the larger cause of trying to cure diseases fits in with a more individual outlook. I started writing it as a result of a conference I attended recently; additional segments will be posted at a later date.

Prophets and Fortune-Tellers

I have long thought about how my life would be pitched if it were some manner of story. The most important thing which I have learned from these meditations is that I am probably not the protagonist, or at least, not the main protagonist. This is an important distinction, and a realization which is mainly the product of my reflections on the general depravity of late middle and early high school.

A true protagonist, by virtue of being the focus of the story, is both immune to most consequences of the plot and, with few deliberate exceptions, unquestionably sympathetic. A protagonist can cross a lot of lines and get off scot-free, because they’re the protagonist. This has never been my case. I get called out on most everything, and I can count on one hand the number of people who have been continually sympathetic through my entire plight.

But I digress from my primary point. There are moments when I am quite sure, or at least seriously suspect, that I am in the midst of an important plot arc. One such moment happened earlier this week, one day before I was to depart on my summer travels. My departure had already been pushed back by a couple of days due to a family medical emergency (for once, it wasn’t me), and so I was already on edge.

Since New Year’s, but especially since spring, I have been making a conscious effort to take walks, ideally every day, with a loose goal of twenty thousand steps a week. This program serves three purposes. First, it provides much needed exercise. Second, it has helped build up stamina for walking while I am traveling, which is something I have struggled with in the recent past. Third, it ensures that I get out of the house instead of rotting at home, which adds to the cycle of illness, fatigue, and existential strife.

I took my walk that day earlier than usual, with the intention of coming home, helping with my share of the packing, and having enough time to shower before retiring early. As it was, my normal route was more crowded than I had come to expect, with plenty of fellow pedestrians.

As I was walking through the park, I was stopped by a young man, probably about my age. He was dressed smartly in a short sleeve polo and khaki cargo shorts, and had one of those faces that seems to fit too many names to be properly remembered in any case.

“Sir, could I have just a moment of your time?” he stammered, seemingly unsure of himself even as he spoke.

I was in a decent enough mood that I looked upon this encounter as a curiosity rather than a nuisance. I slid off my noise-cancelling headphones and my hat, and murmured assent. He seemed to take a moment to try and gather his thoughts, gesturing and reaching his arms behind his neck as he tried to come up with the words. I waited patiently, being quite used to the bottleneck of language myself.

“Okay, just,” he gestured as a professor might while instructing students in a difficult concept, “light switch.”

I blinked, not sure I had heard correctly.

“Just, light switch,” he repeated.

“Oh…kay?”

“I know it’s a lot to take in right now,” he continued, as though he had just revealed some crucial revelation about life, the universe, and everything, and I would require time for the full implications of this earth-shattering idea to take hold. And, in a way, he wasn’t wrong. I stood there, confused, suspicious, and a little bit curious.

“Look, just,” he faltered, returning to his gesturing, which, combined with his tone, seemed designed to impress upon me a gravity that his words lacked. “Be yourself this summer. Use it to mould yourself into your true self.”

I think I nodded. This was the kind of advice that was almost axiomatic, at least as far as vacations were concerned. Though it did make me wonder whether this person was somehow aware that I was departing on the first of several summer trips the following day, for which I had already resolved to attempt to do precisely that. It was certainly possible that he was affiliated with someone whom I or my family had informed of our travel plans. He looked just familiar enough that I might even have met him before, and mentioned such plans in passing.

I stared at him blankly for several seconds, anticipating more. Instead, he smiled at me, as though he expected me to recognize something in what he was saying and to thank him.

“I’m literally hiding in plain sight I can’t control what I do,” he added, in a single run-on sentence, grinning and gesturing wildly in a way that made me suddenly question his sanity and my safety. He backed away, in a manner that led me to believe that our conversation was over.

My life support sensors informed me that I needed to sit down and eat within the next five minutes, or I would face the possibility of passing into an altered state of consciousness. I decided to take my leave, heading towards a park bench. I heard the command “Remember!” shouted in my general direction, which gave me an eerie tingling in the back of my neck and spine, more so than the rest of that conversation.

By the time I sat down and handled the life support situation, the strange young man had seemingly vanished. I looked for him, even briefly walking back to where we had stood, but he was gone. I tried to write down what I could of the exchange, thinking that there was a possibility that this could be part of some guerrilla advertising campaign, or social experiment. Or maybe something else entirely.

Discussing the whole encounter later, my brother and I came up with three main fields of possibilities. The first is simply that going up to strangers and giving cryptic messages is someone’s idea of a prank, performance art piece, or marketing campaign. This seems like the most likely scenario, although I have to admit that it would be just a little disappointing.

The second is that this one particular person is simply a nutter, and that I merely happened to be in the right time and place to be on the receiving end of their latest ramblings. Perhaps to them, the phrase “light switch” is enough of a revelation to win friends and influence people. This has a bit more of a poetic resonance to it, though it is still disappointing in its own way.

The third possibility, which is undoubtedly the least likely, but which the author and storyteller in me nevertheless gravitates towards, is that this is only the beginning of some much grander plot; that the timing is not mere coincidence, but that this new journey will set in motion the chain of events in which everything he mentioned will be revealed as critical to overcoming the obstacles in my path.

The mythos of the oracle offering prophecy before the hero’s journey is well documented and well entrenched in both classic and modern stories. As often as not, the prophecy turns out to be self-fulfilling to one degree or another. In more contemporary stories, this is often explained by time travel, faster-than-light communication, future viewing, or some other advanced technological phenomenon. In older stories, it is usually accommodated by oracles, prophets, and magicians, often working on behalf of the fates, or even the gods themselves, who, just like humans, love a good hero’s story. It certainly seems like the kind of thing that would fit into my life’s overall plot arc.

In any case, we arrived at our first destination, Disney World, without incident, even discovering a lovely diner, the Highway Diner, in Rocky Mount, NC, along the way. I won’t delve into too many details about it on the grounds that I am considering writing a future post on a related subject, but suffice it to say, the food and service were top notch for an excellent price. We also discovered that electrical storms, which are a daily occurrence in Florida, interfere with my life support sensors, though we are working through this. I have been working on the speech I am to give at the conference we are attending, and I expect, with or without prophecy, that things will go reasonably well.

On 3D Printing

As early as 2007, futurists were already prophesying that 3D printers would be the next big thing, and that the world was only months away from widespread deployment. Soon, we were told, we would be trading files for printable trinkets over email as frequently as we then did recipes and photographs. Replacing broken or lost household implements would be as simple as a few taps on a smartphone and a brief wait. It is ten years later, and the results are mixed.

The idea of fabricating things out of plastics for home use is not new. The Disney home of the future featured custom home fabrication heavily, relying on the power of plastics. This was in 1957*. Still, the truly revolutionary part of technological advancement has never been the limited operation of niche appliances, but the shift that occurs after a given technology becomes widely available. After all, video conferencing in the loosest sense has been used by military, government, and limited commercial services since as early as World War II, yet was still considered suitably futuristic in media up until the early years of the new millennium.

So, how has 3D-printing fared as far as mass accessibility is concerned? The surface answer seems to be: not well. After all, most homes, my own included, do not have 3D printers in them. 3D-printed houses and construction materials, although present around the world, have not shaken up the industry and ended housing shortages; though admittedly these were ambitious visions to begin with. The vast majority of manufacturing is still done in faraway factories rather than in the home itself.

On the other hand, perhaps we’re measuring against the wrong standard. After all, even in the developed world, not everyone has a “regular” printer. Not everyone has to. Even when paper documents were the norm rather than online copies, printers were not universal in every household. Many still used communal library or school facilities, or else used commercial services. The point, as far as technological progress is concerned, is not to hit an arbitrary number, or even percentage, of homes with 3D printers in them, but to see that a critical mass of people have access to the products of 3D printing.

Taking this approach, let’s go back to using my own home as an example. Do I have access to the products of 3D printing? Yes, I own a handful of items made by 3D printers. If I had an idea or a need for something, could I gain access to a 3D printer? Yes, both our local library, and our local high school have 3D printers available for public use (at cost of materials). Finally, could I, if I were so disposed, acquire a 3D printer to call my own? Slightly harder to answer, given the varying quality and cost, but the general answer is yes, beginner 3D printers can now be purchased alongside other hardware at office supply stores.

What, then, have been the results of this quiet revolution? One’s answer will probably vary wildly depending on where one works and what one reads, but from where I stand, the answer has been surprisingly little. The trend towards omnipresent availability and endless customizability for items ordered on the internet has intensified, and the number of people I know who make income by selling handicrafts has increased substantially, but these are hardly effects of 3D printing so much as the general effects of the internet era. 3D printing has enabled me to acquire hard protective cases for my medical equipment. In commercial matters, it would seem that 3D printing has become a buzzword, much like “sustainable” and “organic”.

Regarding expectations for 3D printing, I am inclined to believe that the technology has been somewhat undermined by its name. 3D printers are nowhere near as ubiquitous as paper printers are even now, let alone as they were in their heyday, and I do not expect they will become so, at least not in the foreseeable future. Tying them to the idea of printing, while accurate in a technical sense, limits thinking and chains our expectations.

3D printers are not so much the modern equivalent of the paper printer as the modern equivalent of the fax machine. Schools, libraries, and (certain) offices will likely continue to acquire 3D printers for the community, and certain professionals will have their own, but home 3D printing will be the exception rather than the rule.

The appearance of 3D printing provides an interesting modern case study for technologies that catch the public imagination before being fully developed. Like the atomic future of the 1950s and 1960s, there was a vision of a glorious utopian future which would be made possible in our lifetimes by a technology already being deployed. Both are still around, and do provide very useful services, but neither fully upended life as we know it and brought about the revolutionary change we expected, or at least, hoped for.

Despite my skepticism, I too hope, and honestly believe, that the inexorable march of technology will bring about a better tomorrow. That is, after all, the general trend of humanity over the last 10,000 years. The progress of technology is not the sudden and shiny prototypes, but the widespread accessibility of last year’s innovations. 3D printing will not singlehandedly change the world, nor will whatever comes after it. With luck, however, it may give us the tools and the ways of thinking to do it ourselves.

* I vaguely recall having seen ideas at Disney exhibits for more specific 3D-printing for dishes and tableware. However, despite searching, I can’t find an actual source. Even so, the idea of customized printing is definitely present in Monsanto’s House of the Future sales pitch, even if it isn’t developed to where we think of 3D-printing today.

Me vs. Ghost Me

My recent attempts to be a bit more proactive in planning my life have yielded an interesting unexpected result. It appears that trying to use my own My Disney Experience account in planning my part of our family vacation has unleashed a ghost version of myself that is now threatening to undo all of my carefully laid plans, steal my reservations, and wreck my family relationships.

Context: Last summer, I was at Disney World for a conference, which included a day at the park. Rather than go through the huff and puff of getting a disability pass to avoid getting trapped in lines and the medical havoc that could wreak, I opted instead to simply navigate the park with fastpasses. Doing this effectively required that I have a My Disney Experience account in order to link my conference-provided ticket and book fastpasses from my phone. So I created one. For the record, the system worked well over the course of that trip.

Fast forward to the planning for this trip. Given my historical track record with long-term planning, and the notable chaos of my family’s collective schedule, it is generally my mother who takes point on the strategic end (I like to believe that I pick up the slack in tactical initiative, but that’s neither here nor there). Booking our room and acquiring our Magic Bands naturally required putting names down for each of our family members, which, evidently, spawned “ghost” accounts in the My Disney Experience system.

This is not a particularly large concern for my brother or father, both of whom are broadly unconcerned with such provincial matters as being in the right place at the right time, at least while on vacation. For me, however, as one who has to carefully judge medication doses based on expected activity levels over the next several hours, and who is, more generally, a perpetual worrier, being able to access and, if necessary, change my plans on the fly is rather crucial. In the case of Disney, this means having my own account, rather than my “ghost”, be listed for all pertinent reservations and such.

The solution is clear: I must hunt down my ghostly doppelgänger and eliminate him. The problem is that doing so would cancel all of the current reservations. So before killing my ghost, I first have to steal his reservations. As a side note: It occurs to me belatedly that this dilemma would make an interesting and worthwhile premise for a sci-fi thriller set in a dystopia where the government uses digital wearable technology to track and control its population.

All of this has served as an amusing distraction from the latest sources of distress in my life, namely: Having to sequester myself in my home and attend meetings with the school administrators by telephone because of a whooping cough outbreak, the escalating raids against immigrant groups in my community, neo-fascist graffiti at my school, and having to see people I despise be successful in ways that I never could. Obviously, not all of these are equal. But they all contribute to a general feeling that I have been under siege of late.

While reasonable people can disagree over whether the current problems I face are truly new, they certainly seem to have taken on a new urgency. Certainly this is the first time since I arrived back in the United States that immigrant communities in my local area have been subject to ICE raids. Although this is not the first time that my school has experienced fascist graffiti, it is the largest such incident. The political situation, previously an abstract thing occasionally remarked upon in conversation, has become far more tangible. I can see the results in the streets and in my communications with my friends as clearly as I can see the weather.

I might have been able to move past these incidents and focus on other areas of my life, except that other areas of my life have also come under pressure, albeit for different reasons. The school nurse’s office recently disclosed that there has been at least one confirmed case of whooping cough. As I have written about previously, this kind of outbreak is a major concern for me, and means in practice that I cannot put myself at risk by going into school until it is resolved. Inconveniently, this announcement came only days before I was due to have an important meeting with school administrators (something which is nerve-wracking at the best of times, and day-ruining at others). The nature of the meeting meant that it could not be postponed, and so had to be conducted by telephone.

At the same time, events in my personal life have conspired to force me to confront an uncomfortable truth: People I despise on a personal level are currently more successful and happier than me. I have a strong sense of justice, and so seeing people whom I know have put me and others down in the past be rewarded, while I myself yet struggle to achieve my goals, is quite painful. I recognize that this is petty, but it feels like a very personal example of what seems, from where I stand, to be an acutely distressing trend: The people I consider my adversaries are ahead and in control. Policies I abhor and regard as destructive to the ideals and people I hold dear are advancing. Fear and anger are beating out hope and friendship, and allowing evil and darkness to rise.

Ghost me is winning. He has wreaked havoc in all areas of my life, so that I feel surrounded and horrifically outmatched. He has led me to believe that I am hated and unwanted by all. He has caused fissures in my self-image, making me question whether I can really claim to stand for the weak if I’m not willing to throw myself into every skirmish. He has made me doubt whether, if these people whom I consider misguided and immoral are being so successful and happy, that perhaps it is I who is the immoral one.

These are, of course, traps. Ghost me, like real me, is familiar with the Art of War, and knows that the best way to win a fight is to do so without actual physical combat. And because he knows me; because he is me, and because I am my own worst enemy, he knows how best to set up a trap that I can hardly resist walking into. He tries to convince me to squander my resources and my endurance fighting battles that are already lost. He tries to poke me everywhere at once to disorient me and make me doubt my own senses. Worst of all, he tries to set me up to question myself, making me doubt myself and why I fight, and making me want to simply capitulate.

Not likely.

What ghost me seems to forget is that I am among the most relentlessly stubborn people either of us knows. I have fought continuously for the majority of my life to survive against the odds, and against the wishes of certain aspects of my biology. And I will continue fighting: if necessary, for years; if necessary, alone. I am, however, not alone. And if I feel surrounded, then ghost me is not only surrounded, but outnumbered.