Hot Takes

It’s been a busy week, and I have not been able to finish any of the things I have started writing recently. But since I’ve made a renewed personal commitment to try and get myself to post things consistently, here instead is a collection of assorted hot takes. The sort of things that might be tweeted, if I had any desire to tweet.

If someone is going to require you to go watch/read/see a thing online, they should be obliged to send you a direct link to that thing. If that link doesn’t work, I am not obligated to hunt it down.
If you park in an electric vehicle spot, I will henceforth assume that your car doesn’t need gasoline. If it has any in it, I will do you the favor of siphoning it out. 
As of this year, experts estimate the number of spam and scam calls has eclipsed the number of legitimate phone calls. With this in mind, if you call me and your number does not come up on my phone, your call will be going to voicemail. 
Handicap spots are for people who need them. Your permission to park there has nothing to do with how long you’ll be parked, whether you remain with your vehicle, what kind of car you drive, how big your wallet is, or what power you have over the rest of the facilities. If I have a handicap tag and you don’t, you’re in my spot. 
On a related note: it’s true that under the Americans with Disabilities Act, accessible restrooms and stalls are not exclusive to those who rely on them. However, those who need them don’t always have a lot of options. I get that sometimes when you gotta go, there’s no time to hunt for an alternative, and okay, fine. But while I’d hope that you would clean up after yourself regardless, if you leave a handicap stall worse than you found it, you are a horrible person.
There is a difference in a public space between giving your kids room to explore and letting them run wild. It is up to you to find the balance. However, if your children are causing property damage, are infringing on my personal space, or are making such a ruckus that I have to turn my music up louder than the ambient noise of the surroundings requires, you are failing.

The Panopticon Effect


This post is part of the series: The Debriefing. Click to read all posts in this series.


So at my most recent conference there were a lot of research presentations. One of the fascinating things that comes up in clinical studies of diseases that are self-managed, and which was highlighted on several slides, is something I’ve come to call the panopticon effect. It might have a proper name, but if so, I haven’t heard it. The idea is fairly simple, and fairly obvious: in almost every study that has a control group, the control group shows better outcomes than the statistical averages would predict.

In cases where control groups receive a placebo treatment, this discrepancy can be attributed to the placebo effect. But the effect persists even when there is no intervention whatsoever. It seems that merely being enrolled in a study is enough to create an increase in whatever outcome is being measured over what would normally be expected.
This could be a subtler extension of the placebo effect. We are constantly finding that placebo, mindfulness, and the like, while never substitutes for actual treatment, do have a measurable positive impact. But there is probably a simpler explanation: these people know they are being watched. Even when data is anonymized, and there are no consequences for bad outcomes, there is still the pressure of being under surveillance. And I suspect it has to do with an obligation that study participants feel to be worthy of the research being conducted.
I have heard variations on this theme slipped subtly into enough different discussions that I have started to key in on it lately. It is an idea similar to the ones raised over the obligations that patients often feel to fundraise and advocate on behalf of the organizations that bankroll research for their diseases; not mere camaraderie between people with shared experiences, but a sense of guilt for receiving tangential benefits from others’ work.
To briefly repeat what I have said in previous Debriefing articles: this mindset is embedded deep in the collective psyche of the communities with which I have experience, and in some instances is actively exploited by charity and advocacy organizations. The stereotype of the sick and disabled as exceptionally kindhearted and single-mindedly dedicated to fundraising and/or advocacy is both a cause and an effect of this cycle. The same is naturally true of attention from healthcare professionals and researchers.
Frequent patients, especially in the United States, are constantly reminded of the scarcity of help. In every day-long phone call with insurance, in every long wait in the triage room, and in every doctor visit cut short because appointments are scheduled back to back months in advance, we are reminded that what we need is in high demand and short supply. We are lucky to be able to get what we need, and there are plenty of others who are not so fortunate. Perhaps, on paper, we are entitled to life, liberty, and the pursuit of happiness; to a standard of healthcare and quality of life; but in reality, we are privileged to get even as little as we do.
There is therefore great pressure to be deserving of the privileges we have received. To be worthy of the great collective effort that has gone into keeping us alive. This is even more true where research is concerned; where the attention of the world’s brightest minds and taxpayer dollars are being put forth in a gamble to advance the frontiers of humanity. Being part of these efforts is something that is taken extremely seriously by many patients. For many of them, who are disqualified from military service and unable to perform many jobs unaided, contributing to scientific research is the highest calling they can answer.
This pressure manifests itself in many different ways. In many, it inspires an almost religious zeal; in others, it is a subtler, possibly even unconscious, response. In some cases, this pressure to live up to the help given by others stokes rebellion, displayed as antisocial antipathy or even self-harming tendencies. No one I have ever spoken to on the matter has yet failed to describe this pressure or to agree that it exists in their life.
Moreover, the effect seems to be self-reinforcing; the more attention a person receives, the more they feel an obligation to repay it, often through volunteering in research. This in turn increases the amount of attention received, and so on. As noted, participation in these studies seems to produce a statistically significant positive impact on whatever is being measured, completely divorced from any intervention or placebo effect.
We know that people behave differently when they feel they are being watched, and even more so when they feel that the people watching have expectations. We also know that prolonged stress, such as the stress of having to keep up external appearances over an extended period, takes a toll on the patient, both psychologically and physiologically. We must therefore ask at what cost this additional scrutiny, and its marginal positive impact on health results, comes.
We will probably never have a definitive answer to these sorts of questions. The intersection of chronic physical conditions and mental health is convoluted, to say the least. Chronic health issues can certainly add stress and increase the risk of mental illness, yet at the same time they make that illness harder to isolate and treat. After all, can you really say a person is unreasonably anxious when they worry about a disease that is currently killing them? In any case, if we are not likely to ever know for sure the precise effects of these added stresses, then we should at least commit to making them a known unknown.

The N-Word


This post is part of the series: The Debriefing. Click to read all posts in this series.


The worst insult that can be leveled against a person with chronic illness is, without a doubt, the n-word. Oh sure, there are those who defend its use, citing that it has, or possibly had, a proper context. That it evolved from scientific, then clinical, jargon, before finding its way into use as a common slur. They cite dozens of other slurs that are casually slung against the sick and disabled, and ask how such an innocuous phrase with a relatively short history can compare with a more traditionally vulgar term with more malicious intent. But these people are wrong. There is, in the present English lexicon, no word known to me which is worse than the n-word.

Noncompliant.

There is so much wrong with this word that it’s hard to know where to start. Much as it pains me to dwell on this phrase, I think it would be helpful for me to break it down a bit, and explain why it is such a toxic word; a radiological bomb of a slur, causing and spreading otherwise invisible pain and suffering long after it is used.

It first assumes a moral high ground, implying that the person using it is in a position to dictate morality unto the patient. Then it assumes total control of the patient’s affairs, with the implication that the patient’s only role in their own health is to comply. As though healthcare were run by Hydra.

“Your vital signs for this quarter aren’t where we want them. I want you to take a deep breath, and clear your mind. You know what’s best. What’s best is you comply.”

At best, it assumes that a failure to follow instructions is solely the fault of the patient, as though there is no force in the cosmos, let alone everyday life, that could interfere with the timely execution of a medical regimen. Never mind the fact that the kind of regimens we’re talking about (mixing chemicals into usable medicine, drawing up precise doses in syringes, and delivering them several times a day) are routines that healthcare workers spend months of training, at minimum, to master, yet patients are lucky if they get half an hour of professional instruction before being tossed back into the wild.

No, clearly, if you can’t keep to a schedule drawn up by a pencil pusher in a lab, because when the allotted hour rolls around you’re not in a good place to be dealing with sterile medical equipment, never mind your own mental state, it’s your own fault. You clearly don’t care about your own health as much as this doctor whom you see once every three months does. So childish are you that you can’t reorganize your entire life to be at the beck and call of this disease.

That is the implication of noncompliance. Either a willful petulance, a childish cluelessness, or, at worst, a mental derangement. For indeed, noncompliance is often colloquially synonymous with self-harm. Well, obviously we can’t let you have input on your own care if you’re suicidal. Clearly the solution here is to double down and set tighter targets. The n-word is immensely destabilizing in this way, as it insinuates that the patient is incompetent in a way that is extremely difficult to argue against, at least from the patient’s perspective.

All of this assumes that the problem is with the execution of the treatment rather than the treatment itself. For, all too often, patient noncompliance is tossed off as a face-saving excuse by doctors who aren’t getting results from the treatment they prescribed. After all, few patients will actually admit to disregarding medical advice, and so the n-word is often a deduction by doctors based on clinical results rather than on a patient’s actual activities. The problem is, clinical results can have multiple causes and interpretations.

These issues are not mutually exclusive. A patient may easily stop following their regimen once they find it stops working for them, or once they find they can no longer endure the problems of trying to slot their regimen into their life. And mental health issues which are preventing the execution of a patient’s medical regimen are as much a problem for the doctor as for the patient.

A doctor who leaves a patient with a treatment that does not work for them, for whatever reason, has not done their job. But the nature of the n-word is that it becomes the patient’s problem. Or possibly, a problem with the patient; either way, always outside the purview of the doctor’s job.

But too often all this is ignored. The clinician sees bad test results, and sees that they prescribed the treatment which seemed reasonable to them at the time, and so concludes that the patient is noncompliant, jots down a note to that effect, and gives the patient a stern lecture before sending them on their way and encouraging them to do better next time.

There is so much wrong with this situation, and with the dynamic it feeds, which is at best unproductive, and at worst borderline abusive. But by far the worst part is the impact on future healthcare. Because a patient that is labeled as noncompliant is marked. In the United States, this can cause serious issues with insurance and pharmacies in getting medication. The mechanisms by which these problems occur are designed to mitigate abuse of particularly dangerous prescription medications, such as opioid painkillers and antibiotics, which I suppose is fair enough, but because of how medicine in the US works, they are applied to anything requiring a prescription.

For people who need their medication to survive, this can be life threatening. As noted previously, being labeled noncompliant can happen even if a patient is doing their absolute best. For those without the resources to switch doctors or fight insurance diktats, the n-word can have deadly consequences, and what’s more, can make patients think they deserve it.

To call a patient noncompliant is to, in a single word, strike at everything they have done to make their life, and to imply that they are not worthy of it. It is an awful slur born of misguided assumptions and a perspective on healthcare that gives preference to doctors over patients. It is a case study in so many of the problems in the capitalist healthcare system. Unfortunately, this word will not go away simply because we all acknowledge that it is awful.

For indeed, the things that make the n-word terrible are in many cases only microcosms of the larger forces which cause suffering to those with chronic health issues. The path to eradicating this slur, therefore, is a combination of renewed curative effort, reforms to the healthcare system, and a greater focus on the patient perspective.

Personal Surveillance – Part 2

This is the second installment in a multi-part series entitled Personal Surveillance. To read the other parts once they become available, click here.


Our modern surveillance system is not the totalitarian paradigm foreseen by Orwell, but a decentralized and, in the strictest sense, voluntary, though practically compulsory, network. The means are different, but the end, a society with total insight into the very thoughts of its inhabitants, is the same.

Which brings me to last week. Last week, I was approached by a parent concerned about the conduct of her daughter. Specifically, her daughter has one of the same diagnoses I do, and had been struggling awfully to keep to her regimen, and suffering as a result. When I was contacted, the daughter had just been admitted to the hospital to treat the acute symptoms and bring her back from the brink. This state of affairs is naturally unsustainable, in both medical and epistemological terms. I was asked if there was any advice I could provide, from my experience of dealing with my own medical situation as a teenager, and of working closely with other teenagers and young adults.

Of course, the proper response depends entirely upon the root cause of the problem. After all, treating what may be a form of self-harm, whether intentional or not, which has been noted to be endemic among adolescents who have to execute their own medical regimen, or some other mental illness, with the kind of disciplinary tactics that might be suited to more ordinary teenage rebellion and antipathy, would be not only ineffective and counterproductive, but dangerous. There are a myriad of different potential causes, many of which are mutually exclusive, all of which require different tactics, and none of which can be ruled out without more information.

I gave several recommendations, including the one I have been turning over in my head since. I recommended that this mother look into her daughter’s digital activities; into her social media, her messages, and her browser history. I gave the mother a list of things to look out for: evidence of bullying online or at school, signs that the daughter had been browsing sites linked to mental illness, in particular eating disorders and depression, messages to her friends complaining about her illness or medical regimen, or even a confession that she was willfully going against it. The idea was to gather more information to contextualize her actions, so that her parents could better help her.

After reflecting for some time, I don’t feel bad about telling the mother to look through private messages. The parents are presumably paying for the phone, and it’s generally accepted that parents have some leeway to meddle in children’s private lives, especially when it involves medical issues. What bothers me isn’t any one line being crossed. What bothers me is this notion of looking into someone’s entire life like this.

That is, after all, the point here. The mother is trying to pry into her daughter’s whole life at once, into her mind, to figure out what makes her tick, why she does what she does, and what she is likely to do in the future. Based on the information I was provided, it seemed justified; even generous. As described, the daughter’s behavior towards her health is at best negligent, and at worst suggests she is unstable and a danger to herself. The tactics described, sinister though they are, are still preferable to bringing down the boot-heel of discipline or committing her to psychiatric care when neither may be warranted.

This admittedly presupposes that intervention is necessary in any case, in effect presuming guilt. In this instance, it was necessary, because the alternative of allowing the daughter to continue her conduct, which was, intentional or not, causing medical harm and had already led to her hospitalization, was untenable. At least, based on the information I had. But even that information was certainly enough to warrant grave concern, if not enough to make a decision on a course of action.

The goal, in this case, was as benevolent as possible: to help the daughter overcome whatever it was that landed her in this crisis in the first place. Sometimes these things truly are a matter of doing something “for their own good”. But such matters have to be handled with the utmost kindness and open-mindedness. Violating someone’s privacy may or may not be acceptable under certain circumstances, but certainly never for petty vendettas.

It would not, for example, be acceptable for the mother to punish the daughter for an unkind comment made to a friend regarding the mother. Even though this might suggest that some discipline is in order to solve the original problem, since, without other evidence to the contrary, it suggests a pattern of rebellion that could reasonably be extrapolated to include willful disobedience of one’s medical regimen, such discipline needs to be meted out for the original violation, not for one that was only discovered because of this surveillance.

Mind you, I’m not just talking out of my hat here. This is not just a philosophical notion, but a legal one as well. The Fifth Amendment, and more broadly the protections against self-incrimination, are centered around protecting the core of personhood, a person’s thoughts and soul, from what is known as inquisitorial prosecution. Better scholars than I have explained why this cornerstone is essential to our understanding of justice and morality, but, to quickly summarize: coercing a person by using their private thoughts against them deprives them of the ability to make their own moral choices, and destroys the entire notion of rights, responsibilities, and justice.

Lawyers will be quick to point out that the Fifth Amendment as written doesn’t apply here per se (and as a matter of law, they’d be right). But we know that our own intention is to look into the daughter’s life as a whole, her thoughts and intentions, which is a certain kind of self-incrimination, even if you would be hard pressed to write a law around it. We are doing this not to find evidence of new wrongs to right, but to gain context which is necessary for the effective remedy of problems that are already apparent, that were already proven. By metaphor: we are not looking to prosecute the drug user for additional crimes, but to complete rehabilitation treatment following a previous conviction.

In government, the state can circumvent the problems posed to fact-finding by the Fifth Amendment by granting immunity to the testifying witness, so that anything they say cannot be used against them, as though they had never said it, neutralizing the self-incrimination. In our circumstances, it is imperative that the information gathered only be used as context for the behaviors we already know about. I tried to convey this point in my recommendations to the mother in a way that also avoided implying that I expected she would launch an inquisition at the first opportunity.

Of course, this line of thinking is extremely idealistic. Can a person really just ignore a social taboo, or minor breach, and carry on unbiased and impartial in digging through someone’s entire digital life? Can that person who has been exposed to everything the subject has ever done, but not lived any of it, even make an objective judgment? The law sweeps this question under the rug, because it makes law even more of an epistemological nightmare than it already is, and in practical terms probably doesn’t matter unless we are prepared to overhaul our entire constitutional system. But it is a pertinent question for understanding these tactics.

The question of whether such all-inclusive surveillance of our digital lives can be thought to constitute self-incrimination cannot be answered in a blog post, and is unlikely to be settled in the foreseeable future. The generation which is now growing up, which will eventually have grown up with nothing else but the internet, will, I am sure, be an interesting test case. It is certainly not difficult to imagine that with all the focus on privacy and manipulation of online data that we will see a shift in opinions, so that parts of one’s online presence will be thought to be included as part of one’s mind. Or perhaps, once law enforcement catches up to the 21st century, we will see a subtle uptick in the efficacy of catching minor crimes and breaches of taboo, possibly before they even happen.

Personal Surveillance – Part 1

This is the first installment in a multi-part series entitled Personal Surveillance. To read the other parts once they become available, click here.


George Orwell predicted, among many other things, a massive state surveillance apparatus. He wasn’t wrong; we certainly have that. But I’d submit that it is not the greatest threat to the average person’s privacy. There’s the old saying that the only thing protecting citizens from government overreach is government inefficiency, and in this case there’s something to that. Surveillance programs are terrifyingly massive in their reach, but simply aren’t staffed well enough to parse everything. This may change as algorithms become more advanced at sifting through data, but at the moment, we aren’t efficient enough to have a thought police.

The real danger to privacy isn’t what a bureaucrat is able to pry from an unwilling suspect, but what an onlooker is able to discern from an average person without any special investigative tools or legal duress. The average person is generally more at risk from stalkers than from surveillance. Social media is especially dangerous in this regard, and the latest scandals surrounding Cambridge Analytica et al. are a good example of how social media can be used for nefarious purposes.

Yet despite lofty and varied criticism, I am willing to bet on the overall conclusion of this latest furor: the eventual consensus will be that, while social media may be at fault, its developers are guilty not of intentional malice, but of pursuing misaligned incentives, and of failing, whether through laziness or through not grasping the complete picture soon enough, to keep up with the accelerating pace at which our lives have become digitized.

Because that is the root problem. Facebook and its ilk started as essentially decentralized contact lists and curated galleries, and Twitter and its facsimiles started as essentially open-ended messaging services, but they have evolved into so much more. Life happens on the Internet nowadays.

In harkening back to the halcyon days before the scandal du jour, older people have called attention to the brief period between the widespread adoption of television and its later diversification; the days when there were maybe a baker’s dozen channels. In such times, we are told, people were held together by what was on TV. The political issues of the day were chosen by journalists, and public discourse was shaped almost solely by the way they were presented on those few channels. Popular culture, we are told, was shaped in much the same way, so that there was always a baseline of commonality.

Whether or not this happened in practice, I cannot say. But I think the claim that those were the halcyon days before all this dividing and subdividing is backwards. On the contrary, I would submit that those halcyon days were the beginning of the current pattern, as people began to adapt to the notion that life is a collective enterprise understood through an expansive network. Perhaps that time was still a honeymoon phase of sorts. Or perhaps the nature of this emerging pattern of interconnectedness is one of constant acceleration, like a planet falling into a black hole, slowly, imperceptibly at first, but always getting faster.

But getting back to the original point: in addition to accelerating fragmentation, we are also seeing accelerated sharing of information, which is constantly being integrated, woven into a more complete mosaic narrative. Given this, it would be foolish to think that we could be a part of it without our own information being woven into the whole. Indeed, it would be foolish to think that we could live in a world so defined by interconnectedness and not be ourselves part of the collective.

Life, whether we like it or not, is now digital. Social media, in the broadest sense, is the lens through which current events are now projected onto the world, regardless of whether or not social media was built for, or to withstand, this purpose. Participation is compulsory (that is, under compulsion, if not strictly mandatory) to be a part of modern public life. And to this point, jealous scrutiny of one’s internet presence is far more powerful than merely collecting biographical or contact information, such as looking one up in an old-fashioned directory.

Yet society has not adapted to this power. We have not adapted to treat social media interactions with the same dignity with which we respect, for example, conversations between friends in public. We recognize that a person following us and listening in while we are in public would be a gross violation of our privacy, even if it might skirt by the letter of the law*. But trawling back through potentially decades of interactions online is, well… something for which we haven’t really formulated a moral benchmark.

This process is complicated by the legitimate uses of social media as a sort of collective memory. As more and more mental labor is offloaded onto the Internet, the ability to call up some detail from several years ago becomes increasingly important. Take birthdays, for example. Hardly anyone nowadays bothers to commit birthdays to memory, and of the people I know, increasingly few keep private records, opting instead to rely on Facebook notifications to send greetings. And what about remembering other events, like who was at that great party last year, or the exact itinerary of last summer’s road trip?

Human memory fades, even more quickly now that we have machines to consult and no longer have to exercise our own powers of recollection. Trawling through a close friend’s feed in order to find the picture of the both of you from Turks and Caicos, so that you can get it framed as a present, is a perfectly legitimate, even beneficial, use of their otherwise private, even intimate, data, which would hardly be possible if that data were not available and accessible. The modern social system, our friendships, our jobs, our leisure, relies on this accelerating flow of information. To invoke one’s privacy even on a personal level seems now to border on the antisocial.

Heroes and Nurses

Since I published my last post about being categorically excluded from the nursing program of the university I am applying to, I have had many people insist that I ought to hold my ground on this one, even going so far as to file a legal complaint if that’s what it takes. I should say upfront that I appreciate this support. I appreciate having family and friends who are willing to stand by me, and I appreciate having allies who are willing to defend the rights of those with medical issues. It is an immense comfort to have people like this in my corner.

That firmly stated, there are a few reasons why I’m not fighting this right now. The first is pragmatic: I haven’t gotten into this university yet. Obviously, I don’t want the first impression of a school I hope to be admitted into to be a lawsuit. Moreover, there is some question of standing. Sure, I could try to argue that being deterred from applying by their online statements on account of my medical condition constitutes discrimination in and of itself, but without a lot more groundwork to establish my case, it’s not completely open and shut. This could still be worth it if I were terribly passionate about nursing as a life path, which brings me to my second primary reason.

I’m not sure whether nursing would be right for me. Now, to be clear, I stand by my earlier statement that nursing is a career I could definitely see myself in, and which I think represents a distinct opportunity for me. But the same thing is true of several other careers: I think I would also find fulfillment as a researcher, or a policy maker, or an advocate. Nursing is intriguing and promising, but not necessarily uniquely so.

But the more salient point, perhaps, is that the very activities which are dangerous to me specifically, the reasons why I am excluded from the training program, the things which I would have to be very careful to avoid in any career as a nurse for my own safety and that of others, are the very same things that I feel attracted to in nursing.

This requires some unpacking.

Throughout my childhood, my mother has often told me stories of my great-grandfather. The tales, nay, legends of this man portray him as a larger-than-life figure with values and deeds akin to those of a classical hero of a bygone era. As the story goes, my great-grandfather, when he was young, was taken ill with rheumatic fever. Deathly ill, in fact, to the point where the doctors told his parents that he would not survive, and the best they could do was to make him comfortable in his final days.

So weak was he that each carriage and motorcar that passed on the normally busy street outside wracked him with pain. His parents, who were wealthy and influential enough to do so, had the local government close the street. He languished this way for more than a year. And then, against all odds and expectations, he got better. It wasn’t a full recovery, as he still bore the scars on his heart and lungs from the illness. But he survived.

He was able to return to school, albeit at the same place where he had left off, which was by now a year behind. He not only closed this gap, but in the end actually skipped a grade and graduated early (Sidenote: If ever I have held unrealistically high academic expectations for myself, or failed to cut myself enough slack with regards to my own handicaps, this is certainly part of the reason why). After graduating, he went on to study law.

When the Second World War reared its ugly head, my great-grandfather wanted to volunteer. He wanted to, but couldn’t, because of his rheumatic fever. Still, he wanted to serve his country. So he reached out to his contacts, including a certain fellow lawyer by the name of Bill Donovan, who had just been tasked by President Roosevelt with forming the Office of Strategic Services, a wartime intelligence agency meant to bring all the various independent intelligence and codebreaking organizations of the armed services under one roof. General Donovan saw to it that my great-grandfather was given an exemption from the surgeon general in order to be appointed as an officer in the OSS.

I still don’t know exactly what my great grandfather did in the war. He was close enough to Donovan, who played a large enough role in the foundation of the modern CIA, that many of the files are still classified, or at least redacted. I know that he was awarded a variety of medals, including the Legion of Merit, the Order of the British Empire, and the Order of the White Elephant. Family lore contends that the British Secret Service gave him the code number 006 for his work during allied intelligence operations.

I know from public records, among many other fascinating tidbits, that he provided information that was used as evidence at the Nuremberg Trials. I have read declassified letters showing that he maintained a private correspondence with, among other figures, a certain Allen Dulles. And old digitized congressional records show that he was well-respected enough in his field that he was called as a witness for the defense in hearings before the House Un-American Activities Committee, where his word as an intelligence officer was able to vindicate former colleagues who were being implicated by the testimony of a female CPUSA organizer and admitted NKVD asset.

The point is, my great grandfather was a hero. He moved among the giants of the era. He helped to bring down the Nazis (the bad guys), bring them to justice, and defend the innocent. Although I have no conclusive evidence that he was ever, strictly speaking, in danger, since public records are few and far between, it stands to reason that receiving that many medals requires some kind of risk. He did all this despite having no business being in the military because of his rheumatic fever. Despite being exempt from the draft, he felt compelled to do his bit, and he did so.

This theme has always had an impact on me. The idea of doing my bit has had a profound, even foundational, effect on my philosophy, both in my sense of personal direction and in my larger ideals of how I think society ought to work. And this idea has always been a requirement of any career that I might pursue.

To my mind, the image of nursing, the part that I feel drawn to, is that image used by the World Health Organization, the Red Cross, and the various civil defence and military auxiliary organizations, of the selfless heroine who courageously breaks with her station as a prim and proper lady in order to provide aid and comfort to the boys at the front serving valiantly Over There, while the flag is raised in the background to a crescendo of your patriotic music of choice. Or else, of the humanitarian volunteer working in a far-flung outpost, diligently healing those huddled masses yearning to breathe free as they flee conflict. Or possibly of the brave health workers in neglected tropical regions, serving as humanity’s first and most critical line of defence against global pandemic.

Now, I recognize, at least consciously, that these images are, at best, outdated, romanticized images that represent only the most photogenic, if the most intense, fractions of the real work being done by nurses; and at worst are crude, harmful stereotypes that only serve to exacerbate the image problem that has contributed to the global nurse shortage. The common denominator in all of these is that they are somehow on the “front lines”; that they cast nursing as a means to save the world, if not as an individual hero, then certainly as part of a global united front. They represent the most stereotypically heroic, most dangerous aspects of the profession, and, relevant to my case, the very portions which would be prohibitively dangerous to an immunocompromised person.

This raises some deep personal questions. Obviously, I want and intend to do my bit, whatever that may come to mean in my context. But with regards to nursing, am I drawn to it because it is a means to do my bit, or because it offers the means to fit a kind of stereotypical hero archetype that I cannot otherwise fit, by virtue of my exclusion from the military, astronaut training, etc. (and probably could not fit as a nurse either, for similar reasons)? And the more salient question: if we assume that the more glamorous (for sore lack of a better word) aspects of nursing are out of the question (and given the apparent roadblocks to my even entering the training program, it certainly seems reasonable to assume that such restrictions will be compelled regardless of my personal attitudes towards the risks involved), am I still interested in pursuing the field?

This is a very difficult question for me to answer, and the various ways in which it can be construed and interpreted make this all the more difficult. For example, my answer to the question “Would you still take this job if you knew it wasn’t as glamorous day to day as it’s presented?” would be very different from my answer to the question “Would you still be satisfied knowing that you were not helping people as much as you could be with the training you have, because your disability was holding you back from contributing in the field?” The latter question also spawns more dilemmas, such as “When faced with an obstacle related to a disability, is it preferable to take a stand on principle, or to cut losses and try to work out a minimally painful solution, even if it means letting disability and discrimination slide by?” All big thematic questions. And if they were not so relevant, I might enjoy idly pondering them.

2018 Resolution #3

2018 Resolution #3: Get back to exercising

Around spring of this past year I began, as a means of giving myself some easily-achievable goals, a loose program of regular exercise, chiefly in the form of regular walks. Although this simple routine did not give me, to borrow a phrase from the magazines I pass at the checkout counter, “a hot summer bod”, it did get me out of the house at a time when I needed it, and helped build up my stamina in order to withstand our summer travel itinerary.

Despite my intentions, I fell out of this new habit after mid-November, and have not managed to get back into it. In my defense, my normal walking route from my house through town lacks sidewalks, and the lawns which I normally walk through are covered in snow. Our house is populated and organized in such a way that even if I possessed proper exercise equipment, there would be no place to put it.

Going to a gym does not strike me as a practical alternative. To put it simply, there is no gym close enough to drop by under casual pretenses, which means any gym habit would have to be an intense routine on a set schedule. This is problematic for two reasons. First, a routine that requires a great deal of preparation and investment is more or less contraindicated by my medical situation, which has a distinct tendency to sabotage any such plans.

Secondly, such a routine would clash with the lies that I tell myself. In executing my more casual routine, I have found that, in motivating myself, it is often necessary, or at least helpful, to have some mental pretext that does not involve exercise directly. If I can pitch getting out of the house to myself instead as a sightseeing expedition, or as a means of participating in town society by means of my presence, it is much easier to motivate myself without feeling anxious.

Accordingly, my resolution for the coming year is to exercise more later in the year when I can. Admittedly this is a weak goal, with a lot of wiggle room to get out of. And I might be more concerned about that, except that this was basically the same thing that I did last year, and at least that time, it worked.

2018 Resolution #1

2018 Resolution #1: Standardize to 24-hour time

A year or so ago, one of my resolutions was to finally iron out the problem of writing dates. For context: I grew up in Australia, where the default is DD/MM/YY. But in the US, where I now live, the default is MM/DD/YY. Now if I had to pick one of the two, I would probably lean towards the former, since it seems slightly more logical, and more natural to me personally. But since everyone around me uses the latter, taking that avenue would only cause more confusion in my life, perhaps not for me, but certainly for those around me.

For a while I would switch between the two systems depending on what purpose I was writing the date for. Items such as school assignments would be dated in the American fashion, while things for my personal consumption would be done in the commonwealth manner. That worked until, after several years, I started going through my own files of schoolwork, particularly artwork, and encountering dates such as 9/10. What does that mean, in the context of a pencil marking in the corner of a sketch, jotted down as an afterthought? Does it mean the tenth of September, or the ninth of October? Or was it completed during the month of September, 2010? Or perhaps it is merely the ninth piece of a series of ten? Or perhaps it received a score of 90% that I wanted to record for posterity.

I knew that the dualistic system was untenable, but I also knew that I would likely lack the mental self-discipline necessary to force myself into either of the two competing standards, especially given that there remain certain contexts where it is necessary that I use each. I therefore decided to adopt a whole new system, based on ISO 8601.

Henceforth, where I was given the choice, I would record all dates in YYYY-MM-DD format. This would make it abundantly obvious both that I was recording a date and which format I was using. It was also different enough that I would not confuse it with either of the other formats. Where compelled by outside forces, such as stringent academic standards for school assignments, I would continue to use the other formats, but there it would be clear which format was in use.

Despite skepticism from those around me, this system has worked out quite well, and so I am expanding the project to include having time displayed on my devices in 24-hour time.
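
For the curious, here is what that looks like in practice: a minimal sketch in Python (my own illustration, using only the standard library; the example dates in the comments are made up) of the two formats I now default to.

    from datetime import datetime

    now = datetime.now()

    # ISO 8601 calendar date: year, then month, then day, so it cannot be
    # mistaken for either the American or the commonwealth convention.
    print(now.strftime("%Y-%m-%d"))           # e.g. 2018-01-09

    # 24-hour time, no AM/PM marker required.
    print(now.strftime("%H:%M"))              # e.g. 17:42

    # The combined ISO 8601 form, if both are wanted at once.
    print(now.isoformat(timespec="minutes"))  # e.g. 2018-01-09T17:42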

There is Power in a Wristband


This post is part of the series: The Debriefing. Click to read all posts in this series.


Quick note: this post deals with issues of law and medicine. While I always try to get things right, I am neither a doctor nor a lawyer, and my blog posts are not to be taken as either legal or medical advice.

Among people I know for whom it is an ongoing concern, medical identification is a controversial subject. For those not in the know, medical identification is a simple concept: the idea is to have some preestablished method of conveying to first responders and medical personnel the presence of a condition which may either require immediate, specific treatment (say, a neurological issue that requires the immediate application of a specific rescue medication), or impact normal treatment (say, an allergy to a common drug), in the event that the patient is incapacitated.

The utilitarian benefits are obvious. In an emergency situation, where seconds count, making sure that this information is discovered and conveyed can, and often does, make the difference between life and death, and prevent delays and diversions that are costly in time, money, and future health outcomes. The importance of this element cannot be overstated. There are also purported legal benefits to having pertinent medical information easily visible for law enforcement and security to see. On the other hand, some will tell you that this is a very bad idea, since it gives legal adversaries free evidence about your medical conditions, which is something they’d otherwise have to prove.

The arguments against are equally apparent. There are obvious ethical quandaries in compelling a group of people to identify themselves in public, especially as in this case it pertains to normally confidential information about medical and disability status. And even where the macro-scale political considerations do not enter into it, there are the personal considerations. Being forced to make a certain statement in the way one dresses is never pleasant, and retaining that mode of personal choice and self-expression can make the risk of exacerbated medical problems down the line seem like a fair tradeoff.

I can see both sides of the debate here. Personally, I do wear some medical identification at all times – a small bracelet around my left wrist – and have more or less continuously for the last decade. It is not so flamboyantly visible as some people would advise. I have no medical alert tattoos, nor embroidered jacket patches. My disability is not a point of pride. But it is easily discoverable should circumstances require it.

Obviously, I think that what I have done and continue to do is fundamentally correct and right, or at least, is right for me. To do less seems to me foolhardy, and to do more seems not worth the pains required. The pains it would cause me are not particularly logistical; rather, they are the social cost of my disability always being the first impression and the first topic of conversation.

It bears repeating that, though I am an introvert in general, I am not particularly bashful about my medical situation. Provided I feel sociable, I am perfectly content to speak at length about all the nitty gritty details of the latest chapter in my medical saga. Yet even I have a point at which I am uncomfortable advertising that I have a disability. While I am not averse to inviting empathy, I do not desire others to see me as a burden, nor for my disability to define every aspect of our interactions any more than the fact that I am left-handed, or brown-eyed, or a writer. I am perfectly content to mention my medical situation when it comes up in conversation. I do not think it appropriate to announce it every time I enter a room.

Since I feel this way, and I am, literally, a spokesman and disability advocate, it is easy to understand that there are many who do not feel it is appropriate for them to say even as much as I do. Some dislike the spotlight in general. Others are simply uncomfortable talking about a very personal struggle. Still others fear the stigma and backlash associated with any kind of imperfection and vulnerability, let alone one as significant as a bona fide disability. These fears are not unreasonable. The decision to wear medical identification, though undoubtedly beneficial to health and safety, is not without a tradeoff. Some perceive that tradeoff, rightly or wrongly, as not worth the cost.

Even though this position is certainly against standard medical advice, and I would never advocate that people go against medical advice, I cannot bring myself to condemn those who make this choice with the same definitiveness with which I condemn, say, refusing to vaccinate for non-medical reasons, or insurance companies compelling patients into certain medical decisions for economic reasons. The personal reasons, even though they are personal and not medical, are too close to home. I have trouble finding fault with a child who doesn’t want to wear an itchy wristband, or a teenager who just wants to fit in and make their own decisions about their appearance. I cannot fault them for wanting what by all rights should be theirs.

Yet the problem remains. Without proper identification, it is impossible for first responders to identify those who have specific, urgent needs. And unless these identifiers are sufficiently obvious and present at all times, the ability of security and law enforcement to react appropriately to those with special needs relies solely on their training beforehand, and on their trusting the people they have just detained.

In a perfect world, this problem would be completely moot. Even in a slightly less than perfect world, where all these diseases and conditions still existed, but police and first responder training was perfectly robust and effective, medical identification would not be needed. Likewise, in such a world, the stigma of medical identification would not exist; patients would feel perfectly safe announcing their condition to the world, and there would be no controversy in adhering to the standard medical advice.

In our world, it is a chicken-and-egg problem, brought on by understandable, if frustrating, human failings at every level. Trying to determine fault and blame ultimately comes down to questioning the nitty gritty of morality, ethics, and human nature, and as such is better suited to an exercise in navel gazing than to an earnest attempt to find solutions to the problems presently faced by modern patients. We can complain, justifiably and with merit, that the system is biased against us. However, such complaints, cathartic though they may be, will not accomplish much.

This vicious cycle, however, can be broken. Indeed, it has been broken before, and recently. Historical examples abound of oppressed groups coming to break the stigma of an identifying symbol, and claiming it as a mark of pride. The example that comes most immediately to mind is the recent progress that has been made by LGBT+ groups in eroding the stigma of terms which quite recently were used as slurs, and in appropriating symbols such as the pink triangle as symbols of pride. In a related vein, the Star of David, once known as a symbol of oppression and exclusion, has come to be used by the Jewish community in general, and Israel in particular, as a symbol of unity and commonality.

In contrast to such groups, the road for those requiring medical identification is comparatively straightforward. The disabled and sick are already widely regarded as sympathetic, if pitiful. Our symbols, though they may be stigmatized, are not generally reviled. When we face insensitivity, it is usually not because those we face are actively conspiring to deny us our needs, but simply because we may well be the first people they have encountered with these specific needs. As noted above, this is a chicken-and-egg problem: the less sensitive the average person is, the more likely a given person with an easily hidden disability is to try and fly under the radar.

Imagine, then, if you can, such a world, where a medical identification necklace is as commonplace and unremarkable as a necklace with a religious symbol. Imagine walking through a parking lot and seeing stickers announcing the medical condition of a driver or passenger with the same regularity as you see advertisements for a political cause or a vacation destination. Try to picture a world where people are as unconcerned about seeing durable medical equipment as they are about American flag apparel. It is not difficult to imagine. We are still a ways away from it, but it is within reach.

I know that this world is within reach, partially because I myself have seen the first inklings of it. I have spent time in this world, at conferences and meetings. At several of these conferences, wearing a colored wristband corresponding to one’s medical conditions is a requirement for entry, and here it is not seen as a symbol of stigma, but as one of empowerment. Wristbands are worn in proud declaration, amid short-sleeved shirts for walkathon teams, with medical devices laid bare for all the world to see.

Indeed, in this world, the medical ID bracelet is a symbol of pride. It is shown off amid pictures of fists clenched high in triumph and empowerment. It is shown off in images of gentle hands held in friendship and solidarity.

It is worth mentioning, with regard to this last point, that the system of wristbands is truly universal. That is to say, even those who have no medical afflictions whatsoever are issued wristbands, albeit in a different color. To those who are not directly afflicted, they are a symbol of solidarity with those who are. But it remains a positive symbol regardless.

The difference between these wristbands, which are positive symbols, and ordinary medical identification, which is at best inconvenient and at worst oppressive, has nothing to do with the physical discrepancies between them, and everything to do with the attitudes that are attached by both internal and external pressure. The wristbands, it will be seen, are a mere symbol, albeit a powerful one, onto which we project society’s collective feelings towards chronic disease and disability.

Medical identification is in itself amoral, but in its capacity as a symbol, it acts as a conduit to amplify our existing feelings and anxieties about our condition. In a world where disabled people are discriminated against, left to go bankrupt buying the medication they need to survive, and even targeted by extremist groups, it is not hard to find legitimate anxieties to amplify in this manner. By contrast, in an environment in which the collective attitude towards these issues is one of acceptance and empowerment, these projected feelings can be equally positive.

The Moral Hazard of Hope


This post is part of the series: The Debriefing. Click to read all posts in this series.


Suppose that five years from today, you would receive an extremely large windfall. The exact number isn’t important, but let’s just say it’s large enough that you’ll never have to budget again. Not technically infinite, because that would break everything, but for the purposes of one person, basically undepletable. Let’s also assume that this money becomes yours in such a way that it can’t be taxed, and that you can’t be swindled out of it along the way. This is also an alternate universe where inheritance and estates don’t exist, so there’s no scheming among family, and no point in considering them in your plans. Just roll with it.

No one else knows about it, so you can’t borrow against it, nor is anyone going to treat you differently until you have the money. You still have to be alive in five years to collect and enjoy your fortune. Freak accidents can still happen, and you can still go bankrupt in the interim, or get thrown in prison, or whatever, but as long as you’re around to cash the check five years from today, you’re in the money.

How would this change your behavior in the interim? How would your priorities change from what they are?

Well, first of all, you’re probably not going to invest in retirement, or long term savings in general. After all, you won’t need to. In fact, further saving would be foolish. You’re not going to need that extra drop in the bucket, which means saving it would be wasting it. You’re legitimately economically better off living the high life and enjoying yourself as much as possible without putting yourself in such severe financial jeopardy that you would be increasing your chances of being unable to collect your money.

If this seems insane, it’s important to remember here that your lifestyle and enjoyment are quantifiable economic factors (the key word is “utility”) that weigh against the (relative and ultimately arbitrary) value of your money. This is the whole reason why people buy stuff they don’t strictly need to survive, and why rich people spend more money than poor people, despite not being physiologically different. Because any money you save is basically worthless to you, and your happiness still has value, buying happiness, expensive and temporary though it may be, is always the economically rational choice.
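
If you want to see the shape of that argument in numbers, here is a quick back-of-the-envelope sketch. It is my own toy illustration, with made-up figures; a logarithmic curve is just a common stand-in for the diminishing returns of wealth, and it ignores time discounting entirely.

    import math

    def utility(wealth: float) -> float:
        # Log utility: each additional dollar matters less the richer you are.
        return math.log(wealth)

    current_wealth = 50_000        # hypothetical net worth today
    windfall = 1_000_000_000       # the "basically undepletable" fortune
    chunk = 1_000                  # money you could either enjoy now or save

    # Value of enjoying that money now, while your wealth is ordinary:
    value_now = utility(current_wealth + chunk) - utility(current_wealth)

    # Value of the same money sitting on top of the windfall in five years:
    value_later = utility(windfall + chunk) - utility(windfall)

    print(f"{value_now:.6f}")                  # about 0.019803
    print(f"{value_later:.9f}")                # about 0.000001000
    print(f"{value_now / value_later:,.0f}x")  # roughly twenty thousand to one

On this toy picture, the marginal dollar enjoyed today is worth tens of thousands of times more than the same dollar stacked on an already enormous pile, which is the whole argument in miniature.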

This is tied to an important economic concept known as Moral Hazard, a condition where the normal risks and costs involved in a decision fail to apply, encouraging riskier behavior. I’m stretching the idea a little bit here, since it usually refers to more direct situations. For example, if I have a credit card that my parents pay for to use “for emergencies”, and I know I’m never going to see the bill, because my parents care more about our family’s credit score than most anything I would think to buy, then that’s a moral hazard. I have very little incentive to do the “right” thing, and a lot of incentive to do whatever I please.

There are examples in macroeconomics as well. For example, many say that large corporations in the United States are caught in a moral hazard problem, because they know that they are “too big to fail” and will be bailed out by the government if they get into serious trouble. As a result, these companies may be encouraged to make riskier decisions, knowing that any profits will be theirs to keep, while any losses will be passed along.
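As a rough illustration of why the incentives flip, here is a toy expected-value calculation. The probabilities and payoffs are entirely invented; the only point is that removing the downside from the ledger changes which choice looks rational.

```python
# Toy expected-value sketch of the "too big to fail" incentive problem.
# All probabilities and payoffs are invented for illustration only.

p_success = 0.6
risky_gain = 100    # payoff the firm keeps if the gamble works
risky_loss = -200   # loss if the gamble fails
safe_payoff = 20    # the boring, prudent alternative

# If the firm bears its own losses, the gamble is a bad deal:
ev_own_losses = p_success * risky_gain + (1 - p_success) * risky_loss
print(ev_own_losses, "vs safe", safe_payoff)   # -20 vs 20: prudence wins

# If a bailout absorbs the downside, the loss drops out of the calculation:
ev_bailed_out = p_success * risky_gain + (1 - p_success) * 0
print(ev_bailed_out, "vs safe", safe_payoff)   # 60 vs 20: the gamble now looks "rational"
```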

In any case, the idea is there. When the consequences of a risky decision become uncoupled from the reward, it can be no surprise when rational actors make riskier decisions. If you know that in five years you’re going to be basically immune to any hardship, you’re probably not going to prepare for the long term.

Now let’s take a different example. Suppose you’re rushed to the hospital after a heart attack, and diagnosed with a heart condition. The condition is minor for now, but could get worse without treatment, and will get worse as you age regardless.

The bad news is, in order to avoid having more heart attacks, and possible secondary circulatory and organ problems, you’re going to need to follow a very strict regimen, including a draconian diet, a daily exercise routine, and a series of regular injections and blood tests.

The good news, your doctor informs you, is that the scientists, who have been tucked away in their labs and getting millions in yearly funding, are closing in on a cure. In fact, there’s already a new drug that’s worked really well in mice. A researcher giving a talk at a major conference recently showed a slide of a timeline that estimated FDA approval in no more than five years. Once you’re cured, assuming everything works as advertised, you won’t have to go through the laborious process of treatment.

The cure drug won’t help if you die of a heart attack before then, and it won’t fix any problems with your other organs if your heart gets bad enough that it can’t supply them with blood, but otherwise it will be a complete cure, as though you were never diagnosed in the first place. The nurse discharging you tells you that since most organ failure doesn’t appear until patients have had the condition for at least a decade, so long as you can avoid dying for half that long, you’ll be fine.

So, how are you going to treat this new chronic and life-threatening disease? Maybe you will be the diligent, model patient, always deferring to the most conservative and risk-averse recommendations in the medical literature, certainly hopeful for a cure, but not willing to bet your life on a grad student’s hypothesis. Or maybe, knowing nothing else on the subject, you will trust what your doctor told you and your first impression of the disease, getting by with only as much invasive treatment as it takes to avoid dying, and to avoid being called out by your medical team for being “noncompliant” (referred to in chronic illness circles in hushed tones as “the n-word”).

If the cure does come in five years, as happens only in stories and fantasies, then either way, you’ll be set. The second version of you might be a bit happier from having more fully sucked the marrow out of life. It’s also possible that the second version would have had to endure another (probably non-fatal) heart attack or two, and dealt with more day-to-day symptoms like fatigue, pains, and poor circulation. But you never would have really lost anything for being the n-word.

On the other hand, if by the time five years have elapsed the drug hasn’t gotten approval, or, quite possibly, hasn’t even gotten close after the researchers discovered that curing a disease in mice didn’t also cure it in humans, then the difference between the two versions of you is going to start to compound. It may not even be noticeable after five years. But after ten, twenty, thirty years, the second version of you is going to be worse for wear. You might not be dead. But there’s a much higher chance you’re going to have had several more heart attacks, and possibly other problems as well.
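To see how quickly a small gap in year-to-year risk compounds, here is a toy calculation. The annual risk figures are made up and are not a medical model of any kind; they only illustrate the shape of the curve.

```python
# Toy illustration of how a small per-year difference in risk compounds over decades.
# The annual risk figures are invented; this is not a medical model.

def prob_of_at_least_one_event(annual_risk, years):
    # Chance of at least one event over the period, assuming each year is independent.
    return 1 - (1 - annual_risk) ** years

diligent_annual_risk = 0.02   # hypothetical yearly risk under strict treatment
lax_annual_risk = 0.06        # hypothetical yearly risk under minimal treatment

for years in (5, 10, 20, 30):
    diligent = prob_of_at_least_one_event(diligent_annual_risk, years)
    lax = prob_of_at_least_one_event(lax_annual_risk, years)
    print(f"{years:2d} years: diligent {diligent:.0%}, lax {lax:.0%}")

# At 5 years the gap is modest (about 10% vs 27%); by 30 years it is stark (about 45% vs 84%).
```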

This is a case of moral hazard, plain and simple, and it does appear in the attitudes of patients with chronic conditions that require constant treatment. The fact that, in this case, the perception of a lack of risk and consequences is a complete fantasy is not relevant. All risk analyses depend on the information that is given and available, not on whatever the actual facts may be. We know that the patient’s decision is ultimately misguided because we know the information they are being given is false, or at least, misleading, and because our detached perspective allows us to take a dispassionate view of the situation.

The patient does not have this information or perspective. In all probability, they are starting out scared and confused, and want nothing more than to return to their previous normal life with as few interruptions as possible. The information and advice they were given, from a medical team that they trust, and possibly have no practical way of fact checking, has led them to believe that they do not particularly need to be strict about their new regimen, because there will not be time for long term consequences to catch up.

The medical team may earnestly believe this. It is the same problem one level up; the only difference is, their information comes from pharmaceutical manufacturers, who have a marketing interest in keeping patients and doctors optimistic about upcoming products, and researchers, who may be unfamiliar with the hurdles in getting a breakthrough from the early lab discoveries to a consumer-available product, and whose funding is dependent on drumming up public support through hype.

The patient is also complicit in this system that lies to them. Nobody wants to be told that their condition is incurable, and that they will be chronically sick until they die. No one wants to hear that with their new diagnosis they will either die early or live long enough for their organs to fail, because even with the most rigid adherence to a medical plan, the tools available simply cannot completely mimic the human body’s natural functions. Indeed, it can be argued that telling a patient they will still suffer long-term complications, whether in ten, twenty, or thirty years, almost regardless of their actions today, will have much the same effect as telling them that they will be healthy regardless.

Given the choice between two extremes, optimism is obviously the better policy. But this policy does have a tradeoff. It creates a moral hazard of hope. Ideally, we would be able to convey an optimistic perspective that also maintains an accurate view of the medical prognosis, and balances the need for bedside manner with incentivizing patients to take the best possible care of themselves. Obviously this is not an easy balance to strike, and the balance will vary from patient to patient. The happy-go-lucky might need to be brought down a peg or two with a reality check, while the nihilistic might need a spoonful of sugar to help the medicine go down. Finding this middle ground is not a task to be accomplished by a practitioner at a single visit, but a process to be achieved over the entire course of treatment, ideally with a diverse and well experienced team including mental health specialists.

In an effort to finish on a positive note, I will point out that this is already happening, or at least, is already starting to happen. As interdisciplinary medicine gains traction, as patient mental health becomes more of a focus, and as patients with chronic conditions begin to live longer, more hospitals and practices are working harder to ensure that a positive and constructive mindset for self-care is a priority, alongside educating patients on the actual logistics of self-care. Support is easier to find than ever, especially with organized patient conferences and events. This problem, much like the conditions that cause it, is chronic, but it is manageable with effort.