The Panopticon Effect


This post is part of the series: The Debriefing. Click to read all posts in this series.


So at my most recent conference there were a lot of research presentations. One of the fascinating things that comes up in clinical studies of diseases that are self-managed, and which was highlighted on several slides, is something I’ve come to call the panopticon effect. It might have a proper name, but if so, I haven’t heard it. The idea is fairly simple, and fairly obvious: in studies that have a control group, the control group almost always shows better outcomes than the statistical averages would predict.

In cases where control groups receive a placebo treatment, this discrepancy can be attributed to the placebo effect. But the effect persists even when there is no intervention whatsoever. It seems that merely being enrolled in a study is enough to create an increase in whatever outcome is being measured over what would normally be expected.
This could be a subtler extension of the placebo effect. We are constantly finding that placebos, mindfulness, and the like, while never substitutes for actual treatment, do have a measurable positive impact. But there is probably a simpler explanation: these people know they are being watched. Even when data is anonymized, and there are no consequences for bad outcomes, there is still the pressure of being under surveillance. And I suspect that pressure has to do with an obligation that study participants feel to be worthy of the research being conducted.
I have heard variations on this theme slipped subtly into enough different discussions that I have started to key in on it lately. It is similar to the idea raised about the obligation that patients often feel to fundraise and advocate on behalf of the organizations that bankroll research for their diseases; not mere camaraderie between people with shared experiences, but a sense of guilt for receiving tangential benefits from others’ work.
To briefly repeat what I have said in previous Debriefing articles: this mindset is embedded deep in the collective psyche of the communities with which I have experience, and in some instances is actively exploited by charity and advocacy organizations. The stereotype of the sick and disabled being conspicuously kindhearted and single-mindedly dedicated to fundraising and/or advocacy is both a cause and an effect of this cycle. The same is naturally true of attention from healthcare professionals and researchers.
Frequent patients, especially in the United States, are constantly reminded of the scarcity of help. In every day-long phone call with insurance, in every long wait in the triage room, and in every doctor visit cut short because appointments are scheduled back to back months in advance, we are reminded that what we need is in high demand and short supply. We are lucky to be able to get what we need, and there are plenty of others who are not so fortunate. Perhaps, on paper, we are entitled to life, liberty, and the pursuit of happiness; to a standard of healthcare and quality of life; but in reality, we are privileged to get even as little as we do.
There is therefore great pressure to be deserving of the privileges we have received. To be worthy of the great collective effort that has gone into keeping us alive. This is even more true where research is concerned, where the attention of the world’s brightest minds and taxpayer dollars are being put forth in a gamble to advance the frontiers of humanity. Being part of these efforts is something that is taken extremely seriously by many patients. For many of them, who are disqualified from military service and unable to perform many jobs unaided, contributing to scientific research is the highest calling they can answer.
This pressure manifests itself in many different ways. In many, it inspires an almost religious zeal; in others, it is a subtler, possibly even unconscious, response. In some cases, this pressure to live up to the help given by others stokes rebellion, displayed either as antisocial antipathy or even self-harming tendencies. No one I have ever spoken to on the matter has failed to describe this pressure, or at least to agree that it exists in their life.
Moreover, the effect seems to be self-reinforcing; the more attention a person receives, the more they feel an obligation to repay it, often through volunteering in research. This in turn increases the amount of attention received, and so on. As noted, participation in these studies seems to produce a statistically significant positive impact on whatever is being measured, completely divorced from any intervention or placebo effect.
We know that people behave differently when they feel they are being watched, and even more so when they feel that the people watching have expectations. We also know that prolonged stress, such as the stress of having to keep up external appearances over an extended period, takes a toll, both psychologically and physiologically, on the patient. We must therefore ask at what cost this additional scrutiny, and its marginal positive impact on health outcomes, comes.
We will probably never have a definitive answer to these sorts of questions. The intersection of chronic physical conditions and mental health is convoluted, to say the least. Chronic health issues can certainly add additional stress and increase the risk of mental illness, yet at the same time they make that mental illness harder to isolate and treat. After all, can you really say a person is unreasonably anxious when they worry about a disease that is currently killing them? In any case, if we are not likely to ever know for sure the precise effects of these added stresses, then we should at least commit to making them a known unknown.

A Witch’s Parable

Addendum: Oh good grief. This was supposed to go up at the beginning of the week, but something went awry. Alas! Well, it’s up now.


Suppose we live in colonial times, in a town on an archipelago. The islands are individually small and isolated, but their position relative to the prevailing winds and ocean currents means that different small islands can grow a wide variety of crops that are normally obtainable only by intercontinental trade. The presence of these crops, and of good, predictable winds and currents, has made those islands that don’t grow food into world-renowned trade hubs, and attracted overseas investment.

With access to capital and a wide variety of goods, the archipelago has boomed. Artisans, taking advantage of access to exotic painting supplies, have settled on the islands, and scientists of all stripes have flocked to the archipelago, both to study the exotic flora and fauna, and to set up workshops and universities in this rising world capital. As a result of this local renaissance, denizens of the islands enjoy a quality of life hitherto undreamt of, and matched only in the palaces of Europe.

The archipelago is officially designated as a free port, open to ships from across the globe, but most of daily life on the islands is managed by the Honorable South India Trading Company, which collects taxes and manages infrastructure. Nobody likes the HSITC, whose governor, the jealous brother of the king, is constantly appropriating funds meant for infrastructure investment to spend on court intrigue.

Still, the HSITC is entrenched in the islands, and few are willing to risk jeopardizing what they’ve accomplished by attempting insurrection. The cramped, aging vessels employed by the HSITC as ferries between the islands pale in comparison to the new, foreign ships that dock at the harbors, and its taxes seem to grow larger each year, but as long as the ferry system continues to function, there is little more than idle complaint.

In this town, a local woman, who let’s say is your neighbor, is accused of witchcraft. After the debacle at Salem, the local magistrates are unwilling to prosecute her without absolute proof, which obviously fails to materialize. Nevertheless, vicious rumors about men being transmogrified into newts, and satanic rituals conducted at night, spread. Local schoolchildren and off-duty laborers congregate around your house, hoping to get a glimpse of the hideous wretch that legend tells dwells next door.
For your part, you carry on with your daily business as best you can, until one day, while waiting at the docks to board a ferry to the apothecary, a spat erupts between the woman in question and the dock guard, who insists that he shan’t allow her to board, lest her witchery cause them to become shipwrecked. The woman is denied boarding, and since the HSITC runs all the ferries, she is now effectively cut off from the rest of the world, not by any conviction, but because there were no adequate safeguards against the whims of an unaccountable monopoly.
As you’ve probably guessed, this is a parable about the dangers posed by the removal of net neutrality regulations. The internet these days is more than content. We have banks, schools, even healthcare infrastructure that exist solely online. In my own case, my life support systems rely on internet connectivity, and leverage software and platforms that are distributed through open source code sharing. These projects are not possible without a free and open internet.
Others with more resources than I have already thoroughly debunked the claims made by ISPs against net neutrality. The overwhelming economic consensus is that the regulations on the table will only increase economic growth, and will have no impact on ISP investment. The Senate has already passed a bill to restore the preexisting regulations that were rescinded under dubious circumstances, and a House vote is expected soon.
I would ask that you contact your elected representatives, but this issue requires more than that. Who has access to the internet, and under what terms, may well be the defining question of this generation, and regardless of how the vote in the House goes, this issue and variants of it will continue to crop up. I therefore ask instead that you become an active participant in the discussion, wherever it takes us. Get informed, stay informed, and use your information to persuade others.
I truly believe that the internet, and its related technologies, have the potential to bring about a new renaissance. But this can only happen if all of us are aware and active in striving for the future we seek. This call to arms marks the beginning of a story that in all likelihood will continue for the duration of most of our lifetimes. We must consult with each other, and our elected representatives, and march, and rally, and vote, by all means, vote. Vote for an open internet, for equal access, for progress, and for the future.

Soda Cans

One of the first life changes I made after I began to spend a great deal of time in hospitals was giving myself permission to care about the small things. As a person who tends to get inside my own head, sometimes to a fault, I have often found the notion of, for example, finding joy in a beautiful sunset to be trite and beneath me, as though the only thoughts worthy of my contemplation are deep musings and speculations on the hows and whys of life, the universe, and everything.

This line of thinking is, of course, hogwash. After all, even if one aims to ask the big questions, doing so is not in any way mutually exclusive with finding meaning in the little things. Indeed, on the contrary, it is often by exploring such matters more easily grasped that we are able to make headway towards a more complete picture. And besides that, getting to enjoy the little things is quite nice.

With this background in mind, I have been thinking lately about the can design for the new round of Diet Coke flavors. So far I have only tried the twisted mango flavor. On the whole, I like it. I do not think it will supplant Coke Zero Vanilla as my default beverage option (the reasoning behind this being the default is far too elaborate to go into here). The twisted mango flavor is more novel, and hence is more appropriate as an occasional drink than as a default option. I can imagine myself sipping it on vacation, or even at a party, but not on a random occasion when I happen to need a caffeinated beverage to dull a mild headache.

I do not, however, like the can that it comes in.

For some reason, the Coca-Cola company thought it necessary to mess with success, and change the shape of the can the new line of flavors comes in. The volume is the same, but the shape is taller, with a smaller circumference, similar to the cans used by some beer and energy drink brands. I can only assume that this is the aesthetic that Coca-Cola was aiming for; that their intention is to obfuscate and confuse, by creating a can better able to camouflage among more hardcore party drinks.

If this is the reason for the redesign, I can understand, but cannot approve. Part of the reason that I have such strong feelings about various Coke products (or indeed, have feelings at all) is precisely because I cannot drink. Legally, I am not old enough in the United States (not that this has ever stopped my friends, or would stop me while traveling abroad), and moreover even if I was old enough, my medical condition and medications make alcohol extremely ill-advised.

Coke is a stand-in, in this regard. I can be fussy about my Coke products in the way that others fuss over beers. And because I have a drink over which I am seen to be fussing, it becomes common knowledge that I enjoy this very particular product. As a result, when it comes to that kind of person that is only satisfied when there is a (hard) drink in every hand, they can rest easy seeing that I have my preferred beverage, even if mine happens to be non-alcoholic. It is a subtle maneuver that satisfies everyone without anyone having to lose face or endure a complex explanation of my medical history. Coke undercuts this maneuver by making their product look more like beer. It sends the subtle subconscious message that the two are interchangeable, which in my case is untrue.

But this is hardly my primary complaint. After all, if my main problem was social camouflage, I could always, as my medical team have suggested, use camouflage, and simply drink my beverage of choice out of some other container. It worked well enough for Zhukov, who, in his capacity as a leader of the Red Army, naturally couldn’t be seen publicly drinking decadent western capitalism distilled, and so took to drinking a custom-ordered clear formulation of Coke in a bottle designed to mimic those of the Soviet state vodka monopoly. It shouldn’t be my problem in the first place, but I could deal with mere cosmetic complaints.

No, what frustrates me about the can is its functionality. Or rather, its lack thereof. I’ve turned the problem over in my head, and from an engineering standpoint, I can’t fathom how the new design is anything but a step backwards. I assume that a megacorporation like Coca-Cola went through a design process at least as rigorous as the one we employed in our Introduction to Engineering Design class. I would hope that they have spent at least as much time thinking about the flaws of the new design. In case they haven’t, here are my notes:

1) The can is too long for straws.
Some people prefer to drink out of a glass. For me, having to drink cold fluid in this way hurts my teeth. And if there is ice in the glass, I have to worry about accidentally swallowing the ice cubes, turning the whole experience into a struggle. Plus, then I have to deal with washing the empty glass afterwards. Drinking straight out of the can is better, but tipping a can back to take a sip makes one look like an uncivilized glutton who hasn’t been introduced to the technological marvel of the bendy straw. And conveniently, the tab on most cans can be rotated to keep a straw from bobbing up and down. Alas, the new can design is too long to comfortably accommodate a standard bendy straw.

2) The can doesn’t stand up as well
The fact that the can is taller, with a smaller base, means that it does not fit comfortably in most cup holders. Moreover, the smaller base area makes it less stable standing upright. It does take up less space on the table, but that doesn’t matter when it falls over because I sneezed.

3) The shape makes for poor insulation
Alright, this part involves some math and physics, so bear with me. The speed at which a chilled cylindrical object, such as a soda can, warms to room temperature is governed by its surface area: the greater the surface area, the more direct contact with the surroundings, and the more heat is conducted in. For a fixed volume, a cylinder’s surface area is smallest when its height equals its diameter; a standard can is already taller than that optimum, so making it taller still, while keeping the volume the same, necessarily increases the surface area. The conclusion is intuitively obvious if one remembers that a circle is the most efficient way to enclose area on a 2D plane (and by extension, a sphere is most efficient in 3D, but we use cylinders and boxes for the sake of manufacturing and storage).
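For the skeptical, here is a minimal back-of-the-envelope sketch, treating both cans as plain cylinders of the same 355 mL volume; the diameters are my rough guesses at a classic can and a slim can, not official Coca-Cola specifications:

```python
# Rough surface-area comparison of two 355 mL cans modeled as plain cylinders.
# The diameters below are approximate guesses, not official specifications.
import math

def cylinder_area(volume_ml, diameter_cm):
    """Total surface area (cm^2) of a cylinder with the given volume and diameter."""
    radius = diameter_cm / 2
    height = volume_ml / (math.pi * radius ** 2)  # 1 mL is 1 cm^3
    return 2 * math.pi * radius * (radius + height)

standard = cylinder_area(355, 6.6)  # classic can, roughly 6.6 cm across
slim = cylinder_area(355, 5.7)      # taller slim can, roughly 5.7 cm across

print(f"standard: {standard:.0f} cm^2, slim: {slim:.0f} cm^2")
print(f"extra surface area: {(slim / standard - 1) * 100:.0f}%")
```

On those guessed dimensions, the slim shape carries roughly six percent more surface area for the same volume, which is my complaint in numerical form.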

Consequently, the greater surface area of the can means that it comes in contact with more of the surrounding air. This increased contact results in increased conduction of heat from the air into the can, and proportionally faster warming. So my nice, refreshing, cold soda becomes room temperature and flat in a hurry. Sure, this also means it gets cold faster, so perhaps it is a feature for that peculiar brand of soul who doesn’t keep soda refrigerated beforehand, but insists on chilling it immediately before drinking it out of the can; I have no concern for such eccentrics.

I could go on, but I’m belaboring the point even now. The new can design is a step backwards. I just can’t help but feel like Coca-Cola tried to reinvent the wheel here, and decided to use Reuleaux rotors instead of circles. Now, onto the important question: does it matter? Well, it clearly matters to Coca-Cola, seeing as they saw fit to make the change. And, despite being objectively petty, it does matter to me, because it impacts my life, albeit in a relatively small way. Denying that I have strong feelings about this matter in favor of appearing to focus only on high-minded ideals helps no one. And, as I learned in my time in the hospital, when the big picture looks bleak and can’t be changed, the small things start to matter a lot more.

The Lego Census

So the other day I was wondering about the demographics of Lego minifigures. I’m sure we’re all at least vaguely aware of the fact that Lego minifigs tend to be, by default, adult, male, and yellow-skinned. This wasn’t terribly worthy of serious thought back when only a handful of different minifigure designs existed. Yet nowadays Lego has thousands, if not millions, of different minifigure permutations. Moreover, the total number of minifigures in circulation is set to eclipse the number of living humans within a few years.

Obviously, even with a shift towards trying to be more representative, the demographics of Lego minifigures are not an accurate reflection of the demographics of humankind. But just how out of alignment are they? Or, to ask it another way, could the population of a standard Lego city exist in real life without causing an immediate demographic crisis?

This question has bugged me enough that I decided to conduct an informal study based on a portion of my Lego collection, or rather, a portion of it that I reckon is large enough to be vaguely representative of a population. I have chosen to conduct my counts based on the central district of the Lego city that exists in our family basement, on the grounds that it includes a sizable population from across a variety of different sets.

With that background in mind, I have counted roughly 154 minifigures. The survey area is the city central district, which for our purposes means the largest tables with the greatest number of buildings and skyscrapers, and so presumably the highest population density.

Because Lego minifigures don’t have numerical ages attached to them, I counted ages by dividing minifigures into four categories: Children, Young Adults, Middle Aged, and Elderly. Obviously these categories are qualitative and subject to some interpretation. Children are fairly obvious thanks to their different-sized figures. Examples of the adult categories follow.

The figure on the left would be a young adult. The one in the middle would be classified as middle aged, and the one on the right, elderly.

Breakdown by age

Children (14)
Lego children are the most distinct category because, in addition to childish facial features and clothes, they are given shorter leg pieces. This is the youngest category, as Lego doesn’t include infant minifigures in its sets. I would guess that this category covers roughly ages 5-12.

Young Adults (75)
The young adult category encompasses a fairly wide range, from puberty to early middle age. This group is the largest, partially because it includes the large contingent of conscripts serving in the city. An age range would be roughly 12-32.

Middle Aged (52)
This category includes visibly older adults who do not meet the criteria for elderly. It encompasses most of the city’s administration and professionals.

Elderly (13)
The elderly are those that stand out for being old, including those with features such as beards, wrinkled skin, or off-color hair.

Breakdown by industry

Second is occupation. Again, since minifigures can’t exactly give their own occupations, and since most jobs happen indoors where I can’t see, I was forced to make some guesses based on outfits and group them into loose collections.

27 Military
15 Government administration
11 Entertainment
9 Law enforcement
9 Transport / Shipping
9 Aerospace industries
8 Heavy industry
6 Retail / services
5 Healthcare
5 Light Industry

An unemployment rate would be hard to gauge, because most of the time the unemployment rate is adjusted to omit those who aren’t actively seeking work, such as students, retired persons, disabled persons, homemakers, and the like. Unfortunately for our purposes, a minifigure who is transitionally unemployed looks pretty much identical to one who has decided to take an early retirement.

What we can take a stab at is a workforce participation rate. This is a measure of what percentage of the total number of people eligible to be working are doing so. So, for our purposes, this means tallying the total number of people assigned jobs and dividing by the total number of people capable of working, which we will assume means everyone except children. This gives us a ballpark of about 74%, decreasing to 68% if we exclude the military (from both the workforce and the eligible pool) to look only at the civilian economy. Either of these numbers would be somewhat high, but not unexplainably so.
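For anyone who wants to check my arithmetic, here is the tally as a quick sketch; the counts are copied from the lists above:

```python
# Participation-rate arithmetic from the counts above.
jobs = {
    "Military": 27, "Government administration": 15, "Entertainment": 11,
    "Law enforcement": 9, "Transport / Shipping": 9, "Aerospace industries": 9,
    "Heavy industry": 8, "Retail / services": 6, "Healthcare": 5, "Light Industry": 5,
}

total_figures = 154
children = 14
eligible = total_figures - children   # everyone except children
employed = sum(jobs.values())         # 104 figures with an identifiable job

print(f"overall participation: {employed / eligible:.0%}")  # about 74%

civilian_employed = employed - jobs["Military"]
civilian_eligible = eligible - jobs["Military"]
print(f"civilian participation: {civilian_employed / civilian_eligible:.0%}")  # about 68%
```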

Breakdown by sex

With no distinction between the physical forms of Lego bodies, the differences between sexes in minifigures are based purely on cosmetic details such as hair type; the presence of eyelashes, makeup, or lipstick on a face; and dresses. This is obviously based on stereotypes, and makes it tricky to tease apart edge cases. Is the figure with poorly-detailed facial features male or female? What about that faceless conscript marching in formation with their helmet and combat armor? Does dwelling on this topic at length make me some kind of weirdo?

The fact that Lego seems to embellish female characters with stereotypical traits suggests that the default is male. Operating on this assumption gives somewhere between 50 and 70 minifigures with at least one distinguishing female trait, depending on how particular you get with freckles and other minute facial details.

That’s a male to female ratio somewhere between 2.08:1 and 1.2:1. The latter would be barely within the realm of ordinary populations, and even then would be highly suggestive of some kind of artificial pressure such as sex selective abortion, infanticide, widespread gender violence, a lower standard of medical care for girls, or some kind of widespread exposure, whether to pathogens or pollutants, that causes a far higher childhood fatality rate for girls than would be expected. And here you were thinking that a post about Lego minifigures was going to be a light and gentle read.
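Here, for reference, is the arithmetic behind those bounds, under the same default-male assumption described above:

```python
# Male-to-female ratio bounds, assuming every figure without a
# female-coded trait defaults to male.
total_figures = 154
for female in (50, 70):
    male = total_figures - female
    print(f"{female} female figures -> {male / female:.2f}:1 male to female")
```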

The former ratio is completely unnatural, though not completely unheard of in real life under certain contrived circumstances: certain South Asian and Middle Eastern countries have at times had male to female ratios as high as two to one, owing to the presence of large numbers of guest workers. In such societies, female breadwinners, let alone women traveling alone to foreign countries to send money home, are vanishingly rare.

Such an explanation might be conceivable given a look at the lore of the city. The city is indeed a major trade port and center of commerce, with a non-negligible transient population, and it also hosts a sizable military presence. By a similar token, I could simply say that there are more people hiding inside all those skyscrapers whom I’m not counting, and that they make everything come out even. Except this kind of narrative explanation dodges the question.

The straight answer is that, no, Lego cities are not particularly accurate reflections of our real-life cities. This lack of absolute realism does not make Lego bad toys. Nor does it detract from their value as an artistic and storytelling medium, nor from their benefits in play therapy for patients with neuro-cognitive symptoms, which was my original reason for starting my Lego collection.


Technological Milestones and the Power of Mundanity

When I was fairly little, probably seven or so, I devised a short list of technologies based on what I had seen on television that I reckoned were at least plausible, and which I earmarked as milestones of sorts to measure how far human technology would progress during my lifetime. I estimated that if I were lucky, I would be able to get my hands on half of them by the time I retired. Delightfully, almost all of these have in fact already been achieved, less than fifteen years later.

Admittedly, all of the technologies that I picked were far closer than I had envisioned at the time. Living in Australia, which seemed to be on the opposite side of the world from where everything happened, and outside of the truly urban areas of Sydney which, as a consequence of international business, were kept up to date, it often seemed that even though I technically grew up after the turn of the millennium, I was raised in a place and culture that was closer to the 90s.

For example, as late as 2009, even among adults, not everyone I knew had a mobile phone. Text messaging was still “SMS”, and was generally regarded with suspicion and disdain, not least of all because not all phones were equipped to handle them, and not all phone plans included provisions for receiving them. “Smart” phones (still two words) did exist on the fringes; I knew exactly one person who owned an iPhone, and two who owned a BlackBerry, at that time. But having one was still an oddity. Our public school curriculum was also notably skeptical, bordering on technophobic, about the rapid shift towards broadband and constant connectivity, diverting much class time to decrying the evils of email and chat rooms.

These were the days when it was a moral imperative to turn off your modem at night, lest the hacker-perverts on the godless web wardial a backdoor into your computer, which weighed as much as the desk it was parked on, or your computer overheat from being left on, and catch fire (this happened to a friend of mine). Mice were wired and had little balls inside them that you could remove in order to sabotage them for the next user. Touch screens might have existed on some newer PDA models, and on some gimmicky machines in the inner city, but no one believed that they were going to replace the workstation PC.

I chose my technological milestones based on my experiences in this environment, and on television. Actually, since most of our television was the same shows that played in the United States, only a few months behind their stateside premieres, they tended to be more up to date with the actual state of technology, and depictions of the near future which seemed obvious to an American audience seemed terribly optimistic and even outlandish to me at the time. So, in retrospect, it is not surprising that after I moved back to the US, I saw nearly all of my milestones commercially available within half a decade.

Tablet Computers
The idea of a single-surface interface for a computer in the popular consciousness dates back almost as far as futuristic depictions of technology itself. It was an obvious technological niche that, despite numerous attempts, some semi-successful, was never truly cracked until the iPad. True, plenty of tablet computers existed before the iPad. But these were either clunky beyond use, so fragile as to be unusable in practical circumstances, or horrifically expensive.

None of them were practical for, say, completing homework for school on, which at seven years old was kind of my litmus test for whether something was useful. I imagined that if I were lucky, I might get to go tablet shopping when it was time for me to enroll my own children. I could not imagine that affordable tablet computers would be widely available in time for me to use them for school myself. I still get a small joy every time I get to pull out my tablet in a productive niche.

Video Calling
Again, this was not a bolt from the blue. Orwell wrote about his telescreens, which amounted to two-way television, in the 1940s. By the 70s, NORAD had developed a fiber-optic based system whereby commanders could conduct video conferences during a crisis. By the time I was growing up, expensive and clunky video teleconferences were possible. But they had to be arranged and planned, and often required special equipment. Even once webcams started to appear, lessening the equipment burden, you were still often better off calling someone.

Skype and FaceTime changed that, spurred on largely by the appearance of smartphones, and later tablets, with front-facing cameras, which were designed largely for this exact purpose. Suddenly, a video call was as easy as a phone call; in some cases easier, because video calls are delivered over the Internet rather than requiring a phone line and number (something which I did not foresee).

Wearable Technology (in particular smartwatches)
This was the one that I was most skeptical of, as I got this mostly from the Jetsons, a show which isn’t exactly renowned for realism or accuracy. An argument can be made that this threshold hasn’t been fully crossed yet, since smartwatches are still niche products that haven’t caught on to the same extent as either of the previous items, and insofar as they can be used for communication like in The Jetsons, they rely on a smartphone or other device as a relay. This is a solid point, to which I have two counterarguments.

First, these are self-centered milestones. The test is not whether an average Joe can afford and use the technology, but whether it has an impact on my life. And indeed, my smartwatch, which was affordable enough and functional enough for me to use in an everyday role, does have a noticeable positive impact. Second, while smartwatches may not be as ubiquitous as once portrayed, they do exist, and are commonplace enough to be largely unremarkable. The technology exists and is widely available, whether or not consumers choose to use it.

These were my three main pillars of the future. Other things which I marked down include such milestones as:

Commercial Space Travel
Sure, SpaceX and its ilk aren’t exactly the same as having shuttles to the ISS departing regularly from every major airport, with connecting service to the moon. You can’t have a romantic dinner rendezvous in orbit, gazing at the unclouded stars on one side, and the fragile planet earth on the other. But we’re remarkably close. Private sector delivery to orbit is now cheaper and more ubiquitous than public sector delivery (admittedly this has more to do with government austerity than an unexpected boom in the aerospace sector).

Large-Scale Remotely Controlled or Autonomous Vehicles
This one came from Kim Possible, and a particular episode in which our intrepid heroes got to their remote destination by a borrowed military helicopter flown remotely from a home computer. Today, we have remotely piloted military drones, and early self-driving vehicles. This one hasn’t been fully met yet, since I’ve never ridden in a self-driving vehicle myself, but it is on the horizon, and I eagerly await it.

Cyborgs
I did guess that we’d have technologically altered humans, both for medical purposes, and as part of the road to the enhanced super-humans that rule in movies and television. I never guessed at seven that in less than a decade I would be one of them, relying on networked machines and computer chips to keep my biological self functioning, plugging into the wall to charge my batteries when they run low, studiously avoiding magnets, EMPs, and water unless I have planned ahead and am wearing the correct configuration and armor.

This last one highlights an important factor. All of these technologies were, or at least, seemed, revolutionary. And yet today they are mundane. My tablet today is only remarkable to me because I once pegged it as a keystone of the future that I hoped would see the eradication of my then-present woes. This turned out to be overly optimistic, for two reasons.

First, it assumed that I would be happy as soon as the things that bothered me then no longer did, which is a fundamental misunderstanding of human nature. Humans do not remain happy the same way that an object in motion remains in motion until acted upon. Or perhaps it is that, as creatures of constant change and recontextualization, we are always undergoing so much change that remaining happy without constant effort is exceedingly rare. Humans always find more problems that need to be solved. On balance, this is a good thing, as it drives innovation and advancement. But it makes living life as a human rather, well, wanting.

Which lays the groundwork nicely for the second reason: novelty is necessarily fleeting. The advanced technology that today marks the boundary of magic will tomorrow be a mere gimmick, and after that, a mere fact of life. Computers hundreds of millions of times more powerful than those used to wage World War II and send men to the moon are so ubiquitous that they are considered a basic necessity of modern life, like clothes, or literacy; both of which have millennia of incremental refinement and scientific striving behind them in their own right.

My picture of the glorious shining future assumed that the things which seemed amazing at the time would continue to amaze once they had become commonplace. This isn’t a wholly unreasonable extrapolation on available data, even if it is childishly optimistic. Yet it is self-contradictory. The only way that such technologies could be harnessed to their full capacity would be to have them become so widely available and commonplace that it would be conceivable for product developers to integrate them into every possible facet of life. This both requires and establishes a certain level of mundanity about the technology that will eventually break the spell of novelty.

In this light, the mundanity of the technological breakthroughs that define my present life, relative to the imagined future of my past self, is not a bad thing. Disappointing, yes; and certainly it is a sobering reflection on the ungrateful character of human nature. But this very mundanity that breaks our predictions of the future (or at least, our optimistic predictions) is an integral part of the process of progress. Not only does this mundanity constantly drive us to reach for ever greater heights by making us utterly irreverent of those we have already achieved, but it allows us to keep evolving our current technologies to new applications.

Take, for example, wireless internet. I remember a time, or at least, a place, when wireless internet did not exist for practical purposes. “Wi-Fi” as a term hadn’t caught on yet; in fact, I remember the publicity campaign that was undertaken to educate our technologically backwards selves about what the term meant, about how it wasn’t dangerous, and about how it would make all of our lives better, as we could connect to everything. Of course, at that time I didn’t know anyone outside of my father’s office who owned a device capable of connecting to Wi-Fi. But that was beside the point. It was the new thing. It was a shiny, exciting novelty.

And then, for a while, it was a gimmick. Newer computers began to advertise their Wi-Fi antennae, boasting that it was as good as being connected by cable. Hotels and other establishments began to advertise Wi-Fi connectivity. Phones began to connect to Wi-Fi networks, which allowed phones to truly connect to the internet even without a data plan.

Soon, Wi-Fi became not just a gimmick, but a standard. First computers, then phones, that couldn’t get online began to become obsolete. Customers began to expect Wi-Fi as a standard accommodation wherever they went, for free even. Employers, teachers, and organizations began to assume that the people they were dealing with would have Wi-Fi, and therefore everyone in the house would have internet access. In ten years, the prevailing attitude around me went from “I wouldn’t feel safe having my kid playing in a building with that new Wi-Fi stuff” to “I need to make sure my kid has Wi-Fi so they can do their schoolwork”. Like television, telephones, and electricity, Wi-Fi became just another thing that needed to be had in a modern home. A mundanity.

Now, that very mundanity is driving a second wave of revolution. The “Internet of Things”, as it is being called, is using the Wi-Fi networks that are already in place in every modern home to add more niche devices and appliances. We are told to expect that soon every major device in our house will be connected to our personal network, controllable either from our mobile devices, or even by voice, and soon, gesture, if not through the devices themselves, then through artificially intelligent home assistants (Amazon Echo, Google Home, and their relatives).

It is important to realize that this second revolution could not take place while Wi-Fi was still a novelty. No one who wouldn’t otherwise buy into Wi-Fi at the beginning would have bought it because it could also control the sprinklers, or the washing machine, or what have you. Wi-Fi had to become established as a mundane building block in order to be used as the cornerstone of this latest innovation.

Research and development may be focused on the shiny and novel, but technological progress on a species-wide scale depends just as much on this mundanity. Breakthroughs have to be not only helpful and exciting, but useful in everyday life, and cheap enough to be usable by everyday consumers. It is easy to get swept up in the exuberance of what is new, but the revolutionary changes happen when those new things are allowed to become mundane.

The Moral Hazard of Hope


This post is part of the series: The Debriefing. Click to read all posts in this series.


Suppose that five years from today, you would receive an extremely large windfall. The exact number isn’t important, but let’s just say it’s large enough that you’ll never have to budget again. Not technically infinite, because that would break everything, but for the purposes of one person, basically undepletable. Let’s also assume that this money becomes yours in such a way that it can’t be taxed, and no one can swindle you out of it. This is also an alternate universe where inheritance and estates don’t exist, so there’s no scheming among family, and no point in considering them in your plans. Just roll with it.

No one else knows about it, so you can’t borrow against it, nor is anyone going to treat you differently until you have the money. You still have to be alive in five years to collect and enjoy your fortune. Freak accidents can still happen, and you can still go bankrupt in the interim, or get thrown in prison, or whatever, but as long as you’re around to cash the check five years from today, you’re in the money.

How would this change your behavior in the interim? How would your priorities change from what they are?

Well, first of all, you’re probably not going to invest in retirement, or long term savings in general. After all, you won’t need to. In fact, further saving would be foolish. You’re not going to need that extra drop in the bucket, which means saving it would be wasting it. You’re legitimately economically better off living the high life and enjoying yourself as much as possible without putting yourself in such severe financial jeopardy that you would be increasing your chances of being unable to collect your money.

If this seems insane, it’s important to remember here that your lifestyle and enjoyment are quantifiable economic factors (the key word is “utility”) that weigh against the (relative and ultimately arbitrary) value of your money. This is the whole reason why people buy stuff they don’t strictly need to survive, and why rich people spend more money than poor people, despite not being physiologically different. Because any money you save is basically worthless, and your happiness still has value, buying happiness, expensive and temporary though it may be, is always the economically rational choice.

This is tied to an important economic concept known as Moral Hazard, a condition where the normal risks and costs involved in a decision fail to apply, encouraging riskier behavior. I’m stretching the idea a little bit here, since it usually refers to more direct situations. For example, if I have a credit card that my parents pay for to use “for emergencies”, and I know I’m never going to see the bill, because my parents care more about our family’s credit score than most anything I would think to buy, then that’s a moral hazard. I have very little incentive to do the “right” thing, and a lot of incentive to do whatever I please.

There are examples in macroeconomics as well. For example, many say that large corporations in the United States are caught in a moral hazard problem, because they know that they are “too big to fail”, and will be bailed out by the government if they get into serious trouble. As a result, these companies may be encouraged to make riskier decisions, knowing that any profits will be massive, and any losses will be passed along.

In any case, the idea is there. When the consequences of a risky decision become uncoupled from the reward, it can be no surprise when rational actors make riskier decisions. If you know that in five years you’re going to be basically immune to any hardship, you’re probably not going to prepare for the long term.

Now let’s take a different example. Suppose you’re rushed to the hospital after a heart attack, and diagnosed with a heart condition. The condition is minor for now, but could get worse without treatment, and will get worse as you age regardless.

The bad news is, in order to avoid having more heart attacks, and possible secondary circulatory and organ problems, you’re going to need to follow a very strict regimen, including a draconian diet, a daily exercise routine, and a series of regular injections and blood tests.

The good news, your doctor informs you, is that the scientists, who have been tucked away in their labs and getting millions in yearly funding, are closing in on a cure. In fact, there’s already a new drug that’s worked really well in mice. A researcher giving a talk at a major conference recently showed a slide of a timeline that estimated FDA approval in no more than five years. Once you’re cured, assuming everything works as advertised, you won’t have to go through the laborious process of treatment.

The cure drug won’t help if you die of a heart attack before then, and it won’t fix any problems with your other organs if your heart gets bad enough that it can’t supply them with blood, but otherwise it will be a complete cure, as though you were never diagnosed in the first place. The nurse discharging you tells you that since most organ failure doesn’t appear until patients have had the disease for at least a decade, so long as you can avoid dying for half that long, you’ll be fine.

So, how are you going to treat this new chronic and life threatening disease? Maybe you will be the diligent, model patient, always deferring to the most conservative and risk averse voices in the medical literature, certainly hopeful for a cure, but not willing to bet your life on a grad student’s hypothesis. Or maybe, knowing nothing else on the subject, you will trust what your doctor told you, and your first impression of the disease, getting by with only as much invasive treatment as you need to avoid dying and being called out by your medical team for being “noncompliant” (referred to in chronic illness circles in hushed tones as “the n-word”).

If the cure does come in five years, as happens only in stories and fantasies, then either way, you’ll be set. The second version of you might be a bit happier from having more fully sucked the marrow out of life. It’s also possible that the second version would have also had to endure another (probably non-fatal) heart attack or two, and dealt with more day to day symptoms like fatigue, pains, and poor circulation. But you never would have really lost anything for being the n-word.

On the other hand, if by the time five years have elapsed, the drug hasn’t gotten approval, or quite possibly, hasn’t gotten close after the researchers discovered that curing a disease in mice didn’t also solve it in humans, then the difference between the two versions of you is going to start to compound. It may not even be noticeable after five years. But after ten, twenty, thirty years, the second version of you is going to be worse for wear. You might not be dead. But there’s a much higher chance you’re going to have had several more heart attacks, and possibly other problems as well.

This is a case of moral hazard, plain and simple, and it does appear in the attitudes of patients with chronic conditions that require constant treatment. The fact that, in this case, the perception of a lack of risk and consequences is a complete fantasy is not relevant. All risk analyses depend on the information that is given and available, not on whatever the actual facts may be. We know that the patient’s decision is ultimately misguided because we know the information they are being given is false, or at least, misleading, and because our detached perspective allows us to take a dispassionate view of the situation.

The patient does not have this information or perspective. In all probability, they are starting out scared and confused, and want nothing more than to return to their previous normal life with as few interruptions as possible. The information and advice they were given, from a medical team that they trust, and possibly have no practical way of fact checking, has led them to believe that they do not particularly need to be strict about their new regimen, because there will not be time for long term consequences to catch up.

The medical team may earnestly believe this. It is the same problem one level up; the only difference is, their information comes from pharmaceutical manufacturers, who have a marketing interest in keeping patients and doctors optimistic about upcoming products, and researchers, who may be unfamiliar with the hurdles in getting a breakthrough from the early lab discoveries to a consumer-available product, and whose funding is dependent on drumming up public support through hype.

The patient is also complicit in this system that lies to them. Nobody wants to be told that their condition is incurable, and that they will be chronically sick until they die. No one wants to hear that their new diagnosis will either cause them to die early, or live long enough for their organs to fail, because even by adhering to the most rigid medical plan, the tools available simply cannot completely mimic the human body’s natural functions. Indeed, telling a patient that they will still suffer long term complications, whether in ten, twenty, or thirty years, almost regardless of their actions today, it can be argued, will have much the same effect as telling them that they will be healthy regardless.

Given the choice between two extremes, optimism is obviously the better policy. But this policy does have a tradeoff. It creates a moral hazard of hope. Ideally, we would be able to convey an optimistic perspective that also maintains an accurate view of the medical prognosis, and balances the need for bedside manner with incentivizing patients to take the best possible care of themselves. Obviously this is not an easy balance to strike, and the balance will vary from patient to patient. The happy-go-lucky might need to be brought down a peg or two with a reality check, while the nihilistic might need a spoonful of sugar to help the medicine go down. Finding this middle ground is not a task to be accomplished by a practitioner at a single visit, but a process to be achieved over the entire course of treatment, ideally with a diverse and well experienced team including mental health specialists.

In an effort to finish on a positive note, I will point out that this is already happening, or at least, is already starting to happen. As interdisciplinary medicine gains traction, patient mental health becomes more of a focus, and as patients with chronic conditions begin to live longer, more hospitals and practices are working harder to ensure that a positive and constructive mindset for self-care is a priority, alongside educating patients on the actual logistics of self-care. Support is easier to find than ever, especially with organized patient conferences and events. This problem, much like the conditions that cause it, is chronic, but it is manageable with effort.


Eclipse Reactions

People have been asking, ever since I announced that I would be chasing the eclipse, for me to try and summarize my experience here. So, without further delay, here are my thoughts on the subject, muddled and disjointed though they may be.

It’s difficult to describe what seeing an eclipse feels like. A total eclipse, that is. A partial eclipse actually isn’t that noticeable until you get up to about 80% totality. You might feel slightly cooler than you’d otherwise expect for the middle of the day, and the shade of blue might look just slightly off for a midday sky, but unless you knew to get a pair of viewing glasses and look at the sun, it’d be entirely possible to miss it altogether.

A total eclipse is something else entirely. The thing that struck me the most was how sudden it all was. Basically, try to imagine six hours of sunset and twilight crammed into two minutes. Except, there isn’t a horizon that the sun is disappearing behind. The sun is still in the sky. It’s still daytime, and the sun is still there. It’s just not shining. This isn’t hard conceptually, but seeing it in person still rattles something very primal.

The regular cycle of day and night is more or less hardwired into human brains. It isn’t perfect, not by a long shot, but it is a part of normal healthy human function. We’re used to having long days and nights, with a slow transition. Seeing it happen all at once is disturbing in a primeval way. You wouldn’t even have to be looking at the sun to know that something is wrong. It just is.

For reference: this was the beginning of totality.
This was exactly 30 seconds later.

I know this wasn’t just me. The rest of the crowd felt it as well. The energy of the crowd in the immediate buildup to totality was like an electric current. It was an energy which could have either come out celebratory and joyous, or descended into riotous pandemonium. It was the kind of energy that one expects from an event of astronomical proportions. Nor was this reaction confined to human beings; the crickets began a frenzied cacophony of chirping more intense than any I have otherwise heard, and the flying insects began to confusedly swarm, unsure of what to make of the sudden and unplanned change of schedule.

It took me a while to put my finger on why this particular demonstration was so touching in a way that garden variety meteor showers, or even manmade light shows just aren’t. After all, it’s not like we don’t have the technology to create similarly dazzling displays. I still don’t think I’ve fully nailed it, but here’s my best shot.

All humans to some degree are aware of how precarious our situation is. We know that life, both in general, but also for each of us in particular, is quite fragile. We know that we rely on others and on nature to supplement our individual shortcomings, and to overcome the challenges of physical reality. An eclipse showcases this vulnerability. We all know that if the sun ever failed to come back out of an eclipse, we would be very doomed.

Moreover, there’s not a whole lot we could do to fix the sun suddenly not working. A handful of humans might be able to survive underground for a while, using nuclear reactors to mimic the sun’s many functions, but that would really just be delaying the inevitable.

With the possible exception of global thermonuclear war, there’s nothing humans could do to each other or to this planet that would be more destructive than an astronomical event like an eclipse (honorable mention to climate change, which is already on track to destroy wide swaths of civilization, but ultimately falls short because it does so slowly enough that humans can theoretically adapt, if we get our act together fast). Yet, this is a completely natural, even regular occurrence. Pulling the rug from out under humanity’s feet is just something that the universe does from time to time.

An eclipse reminds us that our entire world, both literally and figuratively, is contained on a single planet; a single pale blue dot, and that our fate is inextricably linked to the fate of our planet. For as much as we boast about being masters of nature, an eclipse reminds us that there is still a great deal over which we have no control. It reminds us of this in a way that is subtle enough to be lost in translation if one does not experience it firsthand, but one which is nevertheless intuitable even if one is not consciously aware of the reasons.

None of this negates the visual spectacle; and indeed, it is quite a spectacle. Yet while it is a spectacle, it is not a show, and this is an important distinction. It is not a self-contained item of amusement, but rather a sudden, massive, and all-encompassing change in the very environment. It’s not just that something appears in the sky, but that it interferes with the sun, and by extension, the sky itself. It isn’t just that something new has appeared, but that all of the normal rules seem to be being rewritten. It is mind boggling.

As footage and images have emerged, particularly as video featuring the reactions of crowds of observers has begun to circulate, there have been many comments to the effect that the people acting excited, to the point of cheering and clapping, are overreacting, and possibly need to be examined. I respectfully disagree. To see in person a tangible display of the size and grandeur of the cosmos that surrounds us is deeply impressive; revelatory, even. On the contrary, I submit that between two people who have borne witness to our place in the universe, the one who fails to react immediately and viscerally is the one who needs to be examined.

Incremental Progress Part 4 – Towards the Shining Future

I have spent the last three parts of this series bemoaning various aspects of the cycle of medical progress for patients enduring chronic health issues. At this point, I feel it is only fair that I highlight some of the brighter spots.

I have long come to accept that human progress is, with the exception of the occasional major breakthrough, incremental in nature; a reorganization here paves the way for a streamlining there, which unlocks the capacity for a minor tweak here and there, and so on and so forth. However, while this does help adjust one’s day to day expectations from what is shown in popular media to something more realistic, it also risks minimizing the progress that is made over time.

To return to an example used in part 2 that everyone should be familiar with, consider the progress being made on cancer. Here is a chart detailing the rate of FDA approvals for new treatments, which is a decent, if oversimplified, metric for understanding how a given patient’s options have increased, and hence how specific and targeted their treatment can be (which has the capacity to minimize disruption to quality of life), alongside the overall average 5-year survival rate over a ten-year period.

Does this progress mean that cancer is cured? No, not even close. Is it close to being cured? Not particularly.

It’s important to note that even as these numbers tick up, we’re not intrinsically closer to a “cure”. Coronaviruses, which are among the causes of the common cold, have a mortality rate pretty darn close to zero, at least in the developed world, and that number gets even closer to zero if we ignore “novel” coronaviruses like SARS and MERS, and focus only on the rare person who has died as a direct result of the common cold. Yet I don’t think anyone would call the common cold cured. Coronaviruses, like cancer, aren’t cured, and there’s a reasonable suspicion on the part of many that they aren’t really curable in the sense that we’d like.

“Wait,” I hear you thinking, “I thought you were going to talk about bright spots.” Well, yes. While it’s true that progress on a full cure is inconclusive at best, material progress is still being made every day, for both colds and cancer. While neither is at present curable, both are increasingly treatable, and this is where the real progress is happening. Better treatment, not cures, is where the media buzz comes from, and it is why I can attend a conference about my disease year after year, hear all the horror stories of my comrades, and still walk away feeling optimistic about the future.

So, what am I optimistic about this time around, even when I know that progress is so slow in coming? Well, for starters, there’s life expectancy. I’ve mentioned a few different times here that my projected lifespan is significantly shorter than the statistical average for someone of my lifestyle, medical issues excluded. While this is still true, it is becoming less true. The technology which is used for my life support is finally reaching a level of precision, in both measurement and dosing, where it can be said to genuinely mimic natural bodily functions instead of merely being an indefinite stopgap.

To take a specific example, new infusion mechanisms now allow dosing precision down to the ten-thousandth of a milliliter. For reference, a typical raindrop has a volume of only a few hundredths of a milliliter. Given that a single thousandth of a milliliter in either direction at the wrong time can be the difference between being a productive member of society and being dead, this is a welcome improvement.

Such improvements in delivery mechanisms have also enabled innovation on the drugs themselves, by making more targeted treatments with a smaller window for error practical for a wider audience, which in turn makes them more commercially viable. Better drugs and dosing have likewise raised the bar for infusion cannulas, and at the conference, a new round of cannulas was already being hyped as the next big breakthrough to hit the market imminently.

In the last part I mentioned, though did not elaborate at length on, the appearance of AI-controlled artificial organs being built using DIY processes. These systems now exist, not only in laboratories, but in homes, offices, and schools, quietly taking in more data than the human mind can process, and making decisions with a level of precision and speed that humans cannot dream of achieving. We are equipping humans as cyborgs with fully autonomous robotic parts to take over functions they have lost to disease. If this does not excite you as a sure sign of the brave new future that awaits all of us, then frankly I am not sure what I can say to impress you.

Like other improvements explored here, this development isn’t so much a breakthrough as it is a culmination. After all, all of the included hardware in these systems has existed for decades. The computer algorithms are not particularly different from the calculations made daily by humans, except that they contain slightly more data and slightly fewer heuristic guesses, and can execute commands faster and more precisely than humans. The algorithms are simple enough that they can be run on a cell phone, and have an effectiveness on par with any other system in existence.
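
To make that comparison concrete, here is a minimal, purely illustrative sketch in Python of the kind of arithmetic such a closed-loop system might run on each new reading. Every name, constant, and threshold below is my own assumption for illustration; this is not the algorithm of any particular commercial or DIY system, only the same sort of correction-dose logic a patient might work out by hand, automated.

    # Purely illustrative sketch of a closed-loop dosing calculation.
    # All constants and names are hypothetical; real systems use their own
    # models, safety limits, and clinically validated parameters.

    TARGET = 110.0             # hypothetical target reading
    CORRECTION_FACTOR = 50.0   # hypothetical: 1 unit lowers the reading ~50 points
    ACTIVE_ON_BOARD_CAP = 2.0  # hypothetical cap on dose already in the body
    MAX_SINGLE_DOSE = 1.5      # hypothetical per-cycle safety limit

    def correction_dose(reading: float, dose_on_board: float) -> float:
        """The same arithmetic a patient performs by hand, repeated every few
        minutes: how far is the reading from target, how much correction does
        that imply, and how much is already accounted for?"""
        if reading <= TARGET:
            return 0.0  # never dose below target in this simplified sketch
        raw = (reading - TARGET) / CORRECTION_FACTOR
        adjusted = raw - min(dose_on_board, ACTIVE_ON_BOARD_CAP)
        return max(0.0, min(adjusted, MAX_SINGLE_DOSE))

    # Example: a reading of 210 with 0.5 units still active suggests
    # (210 - 110) / 50 - 0.5 = 1.5 units, capped at the safety limit.
    print(correction_dose(210.0, 0.5))  # 1.5

The point is not the specific numbers, which are invented, but that the calculation itself is modest; what the machine adds is tireless repetition, finer measurement, and freedom from heuristic guesswork.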

These DIY initiatives have already sent shockwaves through the medical device industry, both for the companies themselves and for the regulators that were previously taking their sweet time approving new technologies, and have acted as a catalyst for a renewed push for commercial innovation. But deeper than this, a far greater change is also taking root: a revolution not so much in technology or application as in thought.

If my memory and math are on point, this has been my eighth year attending this particular conference, out of ten years dealing with the disease that is its subject, among other diagnoses. While neither of these stretches is long enough to carry proper capital-H historical context, in the span of a single lifetime, especially for a relatively young person such as myself, I do believe that ten or even eight years is long enough to reflect upon in earnest.

Since I started attending this conference, but especially within the past three years, I have witnessed, and been the subject of, a shift in tone and demeanor. When I first arrived, the tone at this conference seemed to be, as one might expect, primarily one of commiseration. Yes, there was solidarity, and all the positive emotion that comes from being with people like oneself, but this was, at best, a bittersweet feeling. People were glad to have met each other, but nevertheless resentful to have been put in the unenviable circumstances that dictated their meeting.

More recently, however, I have seen and felt more and more an optimism accompanying these meetings. Perhaps it is the consistently record-breaking attendance, which demonstrates, if nothing else, that we stand united against the common threat to our lives, and against the political and corporate forces that would hold back our progress towards being normal, fully functioning humans. Perhaps it is merely the promise of free trade show goodies and meals catered to a medically restricted diet. But I think it is something different.

While a full cure, of the sort that would allow me and my comrades to leave the life support at home, serve in the military, and the like, is still far off, today more than ever before, the future looks, if not bright, then at least survivable.

In other areas of research, one of the main genetic research efforts, which has maintained a presence at the conference, is now closing in on the genetic and environmental triggers behind the elusive autoimmune reaction known to cause the disease, and on various methods to prevent and reverse it. Serious talk of future gene therapies, the kind of science fiction that has traditionally been the stuff of comic books and film, is already ongoing. It is a strange and exciting thing to finish an episode of a science-fiction drama television series focused on near-future medical technology (and how evil minds exploit it) in my hotel room, only to walk into the conference room to see posters advertising clinical trial sign-ups and planned product releases.

It is difficult to be so optimistic in the face of incurable illness. It is even more difficult to remain optimistic after many years of only incremental progress. But pessimism too has its price. It is not the same emotional toll as the disappointment which naive expectations of an imminent cure are apt to bring; rather, it is an opportunity cost. It is the cost of missing out on adventures, of missing major life milestones, of being conservative rather than opportunistic.

Much of this pessimism, especially in the past, has been inspired and cultivated by doctors themselves. In a way, this makes sense. No doctor in their right mind is going to say, “Yes, you should definitely take your savings and go on that cliff diving excursion in New Zealand.” Medicine is, by its very nature, conservative and risk averse. Much like the scientist, a doctor will avoid saying anything until after it has been tested and proven beyond a shadow of a doubt. As noted previously, this is extremely effective in achieving specific, consistent, and above all, safe treatment results. But what about when the situation being treated is so all-encompassing in a patient’s life as to render specificity and consistency impossible?

Historically, the answer has been to impose restrictions on patients’ lifestyles. If laboratory conditions don’t align with real life for patients, then we’ll simply change the patients. This approach can work, at least for a while. But patients are people, and people are messy. Moreover, when patients include children and adolescents, who, for better or worse, are generally inclined to pursue short term comfort over vague notions of future health, patients will rebel. Thus, eventually, trading ten years at the end of one’s life for the ability to live the remainder more comfortably seems like a more balanced proposition.

The concept of such a tradeoff is inevitably controversial. I personally take no particular position on it, other than that it is a true tragedy of the highest proportions that anyone should be forced into such a situation. With that firmly stated, many of the recent breakthroughs, particularly in new delivery mechanisms and patient comfort, and especially in the rapidly growing DIY movement, have focused on this tradeoff. The thinking has shifted from a “top-down” approach of finding a full cure to a more grassroots approach of making life more livable now, and making inroads into future scientific progress at a later date. It is no surprise that many of the groups dominating this new push have either been grassroots nonprofits or, where they have been commercial, Silicon Valley-style, engineer-founded startups.

This in itself is already a fairly appreciable and innovative thesis on modern progress, yet one I think has been tossed around enough to be reasonably defensible. But I will go a step further. I submit that much of the optimism and positivity; the empowerment and liberation which has been the consistent takeaway of myself and other authors from this and similar conferences, and which I believe has become more intensely palpable in recent years than when I began attending, has been the result of this same shift in thinking.

Instead of competing against each other and shaming each other over inevitable bad blood test results, as was my primary complaint during conferences past, the new spirit is one of camaraderie and solidarity. It is now increasingly understood at such gatherings, and among medical professionals in general, that fear and shame tactics are not effective in the long run, and do nothing to mitigate the damage of patients deciding that survival at the cost of living simply isn’t worth it [1]. Thus the focus has shifted from commiseration over common setbacks, to collaboration and celebration over common victories.

Thus it will be seen that the feeling of progress, and hence of hope for the future, seems to lie not so much in renewed pushes as in more targeted treatments and better quality of life. Long term patients such as myself have largely given up hope in the vague, messianic cure, to be discovered all at once at some undetermined future date. Instead, our hope for a better future; indeed, for a future at all; exists in the incremental, but critically, consistent improvement upon the technologies which we are already using, and which have already been proven. Our hope lies in understanding that bad days and failures will inevitably come, and in supporting, not shaming, each other when they do.

While this may not qualify for being strictly optimistic, as it does entail a certain degree of pragmatic fatalism in accepting the realities of disabled life, it is the closest I have yet come to optimism. It is a determination that even if things will not be good, they will at least be better. This mindset, unlike rooting for a cure, does not require constant fanatical dedication to fundraising, nor does it breed innovation fatigue from watching the scientific media like a hawk, because it prioritizes the imminent, material, incremental progress of today over the faraway promises of tomorrow.


[1] Footnote: I credit the proximal cause of this cognitive shift in the conference to the progressive aging of the attendee population, and more broadly, to the aging and expanding afflicted population. As more people find themselves in the situation of a “tradeoff” as described above, the focus of care inevitably shifts from disciplinarian deterrence and prevention to one of harm reduction. This is especially true of those coming into the 13-25 demographic, who seem most likely to undertake such acts of “rebellion”. This is, perhaps unsurprisingly, one of the fastest growing demographics for attendance at this particular conference over the last several years, as patients who began attending in childhood come of age.

Incremental Progress Part 3 – For Science!

Previously, I have talked about some of the ways that patients with chronic health issues and medical disabilities feel impacted by the research cycle. Part one of this ongoing series detailed a discussion I participated in at an ad-hoc support group of 18-21 year olds at a major health conference. Part two detailed some of the things I wish I had gotten a chance to add, based on my own experiences and the words of those around me, but could not due to time constraints.

After talking at length about the patient side of things, I’d like to pivot slightly to the clinical side. If we go by what most patients know about the clinical research process, here is a rough picture of how things work:

First, a conclave of elite doctors and professors gathers in secret, presumably in a poorly lit conference room deep beneath the surface of the earth, and holds a brainstorming session of possible questions to study. Illicit substances may or may not be involved in this process, as the creativity required to come up with such obscure and esoteric concerns as “why do certain subspecies of rats have funny looking brains?” and “why do stressful things make people act stressed out?” is immense. At the end of the session, all of the ideas are written down on pieces of parchment, thrown inside a hat, and drawn randomly to decide who will study what.

Second, money is extracted from the public at large by showing people on the street pictures of cute, sad looking children being held at needle-point by an ominously dressed person in a lab coat, with the threat that unless that person hands over all of their disposable income, the child will be forced to receive several injections per day. This process is repeated until a large enough pile of cash is acquired. The cash is then passed through a series of middlemen in dark suits smoking cigars, who all take a small cut for all their hard work of carrying the big pile of cash.

At this point, the cash is loaded onto a private jet and flown out to the remote laboratories hidden deep in the Brazilian rainforests, the barren Australian deserts, the lost islands of the Arctic and Antarctic regions, and inside the active volcanoes of the Pacific islands. These facilities are pristine, shining snow white and steel grey, outfitted with all the latest technology from a mid-century science fiction film. All of these facilities are funded either by national governments or by the rich elite of major multinational corporations, who see to all of the upkeep and grant work, leaving only the truly groundbreaking work to the trained scientists.

And who are the scientists? The scientist is a curious creature. First observed in 1543, the scientist was hypothesized by naturalists to be a former human transmogrified by the devil himself in a Faustian bargain, whereby the subject loses most interpersonal skills and material wealth in exchange for incredible intelligence and a steady, monotonous career playing with glassware and measuring equipment. No one has ever seen a scientist in real life, although much footage of the scientist exists online, usually flaunting its immense funding and wearing its trademark lab coat and glasses. Because of the abundance of such footage, yet the lack of real-life interactions, it has been speculated that scientists may possess some manner of cloaking which renders them invisible and inaudible outside of their native habitat.

The scientists spend their time exchanging various colored fluid between Erlenmeyer flasks and test tubes, watching to see which produces the best colors. When the best colors are found, a large brazier is lit with all of the paper currency acquired earlier. The photons from the fire reaction may, if the stars are properly aligned, hit the colored fluid in such a way as to cause the fluid to begin to bubble and change into a different color. If this happens often enough, the experiment is called a success.

The scientists spend the rest of their time meticulously recording the precise color that was achieved, which will provide the necessary data for analyst teams to divine the answers to the questions asked. These records are kept not in English, or any other commonly spoken language, but in Scientific, which is written and understood by only a handful of non-scientists, mainly doctors, teachers, and engineers. The process of translation is arduous, and in order to be fully encrypted requires several teams working in tandem. This process is called peer review, and, at least theoretically, this method makes it far more difficult to publish false information, because the arduousness of the process provides an insurmountable barrier to those motivated by anything other than the purest truth.

Now, obviously all of this is complete fiction. But the fact that I can make all of this up with a straight face speaks volumes, both about the lack of public understanding of how modern clinical research works, and about the lack of transparency of the research itself. For as much as we cheer on the march of scientific advancement and technological development, for as much media attention as is spent on new results hot off the presses, and for as much as the stock images and characters of the bespectacled expert, adorned in a lab coat and armed with test tubes, resound in both popular culture and the popular consciousness, the actual details of what research is being done, and how it is being executed, are notably opaque.

Much of this is by design, or is a direct consequence of how research is structured. The scientific method by which we separate fact from fiction demands a level of rigor that is often antithetical to human nature, which requires extreme discipline and restraint. A properly organized double-blind controlled trial, the cornerstone of true scientific research, requires that the participants and even the scientists measuring results be kept in the dark as to what they are looking for, to prevent even the subtlest of unconscious biases from interfering. This approach, while great at testing hypotheses, means that the full story is only known to a handful of supervisors until the results are ready to be published.
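
For readers curious what “kept in the dark” can look like operationally, here is a minimal sketch in Python, under my own assumptions rather than any specific trial’s protocol, of how blinded assignment might be structured: participants and outcome assessors only ever see opaque codes, while the code-to-arm key is held separately until the planned unblinding.

    # Minimal sketch of blinded random assignment; my own illustration, not
    # any specific trial's protocol.
    import random
    import secrets

    def assign_arms(participant_ids, arms=("A", "B"), seed=None):
        rng = random.Random(seed)
        key = {}       # opaque code -> arm; sealed away from assessors
        codebook = {}  # participant id -> opaque code; used in daily records
        for pid in participant_ids:
            code = secrets.token_hex(4)  # label that reveals nothing about the arm
            key[code] = rng.choice(arms)
            codebook[pid] = code
        return key, codebook

    key, codebook = assign_arms(["p001", "p002", "p003", "p004"], seed=2024)
    print(codebook)  # what data collectors and participants see
    # The `key` mapping would be written to a sealed file and opened only at
    # the planned unblinding step, after outcomes have been recorded.

The design choice is structural rather than clever: because nobody handling measurements holds the key, even well-meaning unconscious bias has nothing to latch onto.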

The standard of scientific writing is also incredibly rigorous. In professional writing, a scientist is not permitted to make any claims or assumptions unless either they have just proven it themselves, in which case they are expected to provide full details of their data and methodology, or can directly cite a study that did so. For example, a scientist cannot simply say that the sky is blue, no matter how obvious this may seem. Nor even can a scientist refer to some other publication in which the author agreed that the sky is blue, like a journalist might while providing citations for a story. A scientist must find the original data proving that the sky is blue, that it is consistently blue, and so forth, and provide the documentation for others to cross check the claims themselves.

These standards are not merely expectations for those who wish to receive recognition and funding; they are enforced as conditions of accreditation and publication in the first place. This mindset has only become more entrenched as economic circumstances have caused funding to become more scarce, and as political and cultural pressures have cast doubts on “mainstream institutions” like academia and major research organizations. Scientists are trained to give only the most defensible claims, in the most impersonal of words, and only in the narrow context they are responsible for studying. Unfortunately, although this process is unquestionably effective at testing complex hypotheses, it is antithetical to the nature of everyday discourse.

It is not, as my colleague put it during our conference session, that “scientists suck at marketing”, but rather that marketing is fundamentally incongruous with the mindset required for scientific research. Scientific literature ideally attempts to lay out the evidence with as little human perspective as possible and let the facts speak for themselves, while marketing is in many respects the art of conjuring and manipulating human perspective, even where such perspectives diverge from reality.

Moreover, the consumerist mindset of our capitalist society amplifies this discrepancy. The constant arms race between advertisers, media, and political factions means that we are awash in information. This information is targeted to us, adjusted to our preferences, and continually served up on a silver platter. We are taught that our arbitrary personal views are fundamentally righteous, that we have no need to change our views unless it suits us, and that if there is really something that requires any sort of action or thought on our part, that it will be similarly presented in a pleasant, custom tailored way. In essence, we are taught to ignore things that require intellectual investment, or challenge our worldview.

There is also the nature of funding. Because it is so difficult to ensure that trials are actually controlled, and to write the results in such a counterintuitive way, the costs of good research can be staggering, and finding funding can be a real struggle. Scientists may be forced to work under restrictions, or to tailor their research to only the most profitable applications. Results may not be shared to prevent infringement, or to ensure that everyone citing the results is made to pay a fee first. I could spend pages on different stories of technologies that could have benefited humanity, but were kept under wraps for commercial or political reasons.

But of course, it’s easy to rag on antisocial scientists and pharmaceutical companies, and doing so doesn’t really get to the heart of the problem. The problem is that, for most patients, especially those who aren’t enrolled in clinical trials and don’t necessarily have access to the latest devices, the whole world of research is a black hole into which money is poured with no apparent benefit in return. Maybe, if they follow the news, or hear about it from excited friends and relations (see previous section), they might be aware of a few very specific discoveries, usually involving curing one or two rats out of a dozen tries.

Perhaps, if they are inclined towards optimism, they will be able to look at the trend over the last several decades towards better technology and better outcomes. But in most cases, the truly noticeable, everyday changes seem to arrive only long after they have become obvious to the users. The process of turning patient complaints about a medical device into a market product is agonizingly slow, especially in non-critical areas like usability and quality of life, which do not carry the same profit incentive for insurers to apply pressure.

Many of these issues aren’t research problems so much as manufacturing and distribution problems. The bottleneck in making most usability tweaks, the ones that patients notice and appreciate, isn’t in research, or even usually in engineering, but in getting a whole new product approved by executives, shareholders, and of course, regulatory bodies. (Again, this is another topic that I could, and probably will at some future date, rant on about for several pages, but suffice it to say that when US companies complain about innovation being held up by the FDA, their complaints are not entirely without merit).

Even after such processes are eventually finished, there is the problem of insurance. Insurance companies are, naturally, incredibly averse to spending money on anything unless and until it has been proven beyond a shadow of a doubt that it is not only safe, but cost effective. Especially for basic, low income plans, change can come at a glacial pace, and for state-funded services, convincing legislators to adjust statutes to permit funding for new innovations can be a major political battle. This doesn’t even begin to take into account the various negotiated deals and alliances between certain providers and manufacturers that make it harder for new breakthroughs to gain traction (Another good topic for a different post).

But these are economic problems, not research problems. For that matter, most of the supposed research problems are really perception problems. So why am I talking about markets and marketing when I said I was going to talk about research?

Because for most people, the notions of “science” and “progress” are synonymous. We are constantly told, by our politicians, by our insurers, by our doctors, and by our professors that not only do we have the very best level of care that has ever been available in human history, but that we also have the most diligent, most efficient, most powerful organizations and institutions working tirelessly on our behalf to constantly push forward the frontier. If we take both of these statements at face value, then it follows that anything that we do not already have is a research problem.

For as much talk as there was during our conference sessions about how difficult life was, how so very badly we all wanted change, and how disappointed and discouraged we have felt over the lack of apparent progress, it might be easy to overlook the fact that far better technologies than are currently used by anyone in that room already exist. At this very moment, there are patients going about their lives using systems that amount to AI-controlled artificial organs. These systems react faster and more accurately than humans could ever hope to, and the clinical results are obvious.

The catch? None of these systems are commercially available. None of them have even been submitted to the FDA. A handful of these systems are open source DIY projects, and so can be cobbled together by interested patients, though in many cases this requires patients to go against medical advice, and take on more engineering and technical responsibility than is considered normal for a patient. Others are in clinical trials, or more often, have successfully completed their trials and are waiting for manufacturers to begin the FDA approval process.

This bottleneck, combined with the requisite rigor of clinical trials themselves, is what has given rise to the stereotype that modern research is primarily chasing after its own tail. This perception makes even realistic progress seem far off, and makes it all the more difficult to appreciate what incremental improvements are released.

Incremental Progress Part 2 – Innovation Fatigue

This is part two of a multi-part perspective on patient engagement in charity and research. Though not strictly required, it is strongly recommended that you read part one before continuing.


The vague pretense of order in the conversation, created by the presence of the few convention staff members, broke all at once, as several dozen eighteen-to-twenty-one-year-olds all rushed to get in their two cents on the topic of fundraising burnout (see previous section). Naturally, this was precisely the moment when I struck upon what I wanted to say. The jumbled thoughts and feelings that had hinted at something to add while other people were talking suddenly crystallized into a handful of points I wanted to make, all clustered around a phrase I had heard a few years earlier.

Not one to interrupt someone else, and also wanting undivided attention in making my point, I attempted to wait until the cacophony of discordant voices became more organized. And, taking my cue from similar times earlier in my life when I had something I wished to contribute before a group, I raised my hand and waited for silence.

Although the conversation was eventually brought back under control by some of the staff, I never got a chance to make my points. The block of time we had been allotted in the conference room ran out, and the hotel staff were anxious to get the room cleared and organized for the next group.

And yet, I still had my points to make. They still resonated within me, and I honestly believed that they might be both relevant and of interest to the other people who were in that room. I took out my phone and jotted down the two words which I had pulled from the depths of my memory: Innovation Fatigue.

That phrase has actually come to mean several different things to different groups, and so I shall spend a moment on etymology before moving forward. In research groups and think tanks, the phrase is essentially a stand in for generic mental and psychological fatigue. In the corporate world, it means a phenomenon of diminishing returns on creative, “innovative” projects, that often comes about as a result of attempts to force “innovation” on a regular schedule. More broadly in this context, the phrase has come to mean an opposition to “innovation” when used as a buzzword similar to “synergy” and “ideate”.

I first came across this term in a webcomic of all places, where it was used in a science fiction context to explain why the society depicted, which has advanced technology such as humanoid robots, neurally integrated prostheses, luxury commercial space travel, and artificial intelligence, is so similar to our own, at least culturally. That is to say, technology continues to advance at the exponential pace that it has across recorded history, but in a primarily incremental manner, and therefore most people, either out of learned complacency or a psychological defense mechanism to avoid constant hysteria, act as though all is as it always has been, and are not impressed or excited by the prospects of the future.

In addition to the feeling of fundraising burnout detailed in part one, I often find that I suffer from innovation fatigue as presented in the comic, particularly when it comes to medical research that ought to directly affect my quality of life, or promises to in the future. And what I heard from other patients during our young adult sessions has led me to believe that this is a fairly common feeling.

It is easy to be pessimistic about the long term outlook with chronic health issues. Almost by definition, the outlook is worse than average, and the nature of human biology is such that the long term outlook is often dictated by the tools we have today. After all, even if the messianic cure arrives perfectly on schedule in five to ten years (for the record, the cure has been ten years away for the last half-century), that may not matter if things take a sharp turn for the worse six months from now. Everyone already knows someone for whom the cure came too late. And since the best way to predict future results, we are told, is past behavior, it would seem accurate to say that no serious progress is likely to be made before it is too late.

This is not to say that progress is not being made. On the contrary, scientific progress is continuous and universal across all fields. Over the past decade alone, there has been consistent, exponential progress not only in quality of care and quality of health outcomes, but in quality of life. Disease, where it is not less frequent, is at least less impactful. Nor is this progress being made in secret. Indeed, amid all the headlines about radical new treatment options, it can be easy to forget that the diseases they are meant to treat still have a massive impact. And this is precisely part of the problem.

To take an example that will be familiar to a wider audience, consider cancer. It seems that in a given week, there is at least one segment on the evening TV news about some new treatment, early detection method, or some substance or habit to avoid in order to minimize one’s risk. Sometimes these segments play every day, or even multiple times per day. When I checked my online news feed, one in every four stories regarded improvements in the state of cancer care; to be precise, one was a list of habits to avoid, while another was about a “revolutionary treatment [that] offers new hope to patients”.

If you had just been diagnosed with cancer, you would be forgiven for thinking that with all this seemingly daily progress, that the path forward would be relatively simple and easy to understand. And it would be easy for one who knows nothing else to get the impression that cancer treatment is fundamentally easy nowadays. This is obviously untrue, or at least, grossly misleading. Even as cancer treatments become more effective and better targeted, the impact to life and lifestyle remains massive.

It is all well and good to be optimistic about the future. For my part, I enjoy tales about the great big beautiful tomorrow shining at the end of the day as much as anyone. Inasmuch as I have a job, it is talking to people about new and exciting innovations in their medical field, and how they can best take advantage of them as soon as possible and for the least cost possible. I don’t get paid to do this; I volunteer because I am passionate about keeping progress moving forward, and because some people have found that my viewpoint and manner of expression are uniquely helpful.

However, this cycle of minor discoveries, followed by a great deal of public overstatement and media excitement that never (or at least so seldom as to appear never) quite lives up to the hype, is exhausting. Active hoping in the short term, as distinct from long term hope for future change, is acutely exhausting. Moreover, the routine of having to answer every minor breakthrough with some statement to interested but not personally versed friends and relations, who see media hyperbole about (steps towards) a cure and immediately begin rejoicing, is quite tiring.

Furthermore, these almost weekly interactions, in addition to carrying all of the normal pitfalls of socio-familial transactions, have a unique capability to color the perceptions of those who are closest to oneself. The people who are excited about these announcements because they know, or else believe, that they represent an end to, or at least a decrease in, one’s medical burden, are often among those whom one wishes least to alienate with casual pessimism.

For indeed, failing to respond with appropriate zeal to each and every announcement does lead to public branding of pessimism, even depression. Or worse: it suggests that one is not taking all appropriate actions to combat one’s disease, and therefore is undeserving of sympathy and support. After all, if the person on the TV says that cancer is curable nowadays, and your cancer hasn’t been cured yet, it must be because you’re not trying hard enough. Clearly you don’t deserve my tax dollars and donations to fund your treatment and research. After all, you don’t really need it anymore. Possibly you are deliberately causing harm to yourself, and therefore are insane, and I needn’t listen to anything you say to the contrary. Hopefully, it is easy to see how frustrating this dynamic can become, even when it is not quite so exaggerated to the point of satire.

One of the phrases that I heard repeated a lot at the conference was “patient investment in research and treatment”. When patients aren’t willing to invest emotionally and mentally in their own treatment, in their own wellbeing, the problems are obvious. To me, the cause, or at least one of the causes, is equally obvious. Patients aren’t willing to invest because it is a risky investment. The up-front cost of pinning all of one’s hopes and dreams for the future on a research hypothesis is enormous. The risk is high, as anyone who has studied the economics of research and development knows. Payouts aren’t guaranteed, and when they do come, they will be incremental.

Patients who aren’t “investing” in state of the art care aren’t doing so because they don’t want to get better care. They aren’t investing because they either haven’t been convinced that it is a worthwhile investment, or are emotionally and psychologically spent. They have tried investing, and have lost out. They have developed innovation fatigue. Tired of incremental progress which does not offer enough payback to earnestly plan for a better future, they turn instead to what they know to be stable: the pessimism here and now. Pessimism isn’t nearly as shiny or enticing, and it doesn’t offer the slim chance of an enormous payout, but it is reliable and predictable.

This is the real tragedy of disability, and I am not surprised in the slightest that, now that sufficient treatments have been discovered to enable what amounts to an eternally repeatable stopgap, but not a full cure, researchers, medical professionals, and patients themselves have begun to encounter this problem. The incremental nature of progress, the exaggeratory nature of popular media, and the basic nature of humans in society amplify this problem and cause it to concentrate and calcify into the form of innovation fatigue.