The Panopticon Effect


This post is part of the series: The Debriefing. Click to read all posts in this series.


At my most recent conference, there were a lot of research presentations. One of the fascinating things that comes up in clinical studies of self-managed diseases, and which was highlighted on several slides, is something I’ve come to call the panopticon effect. It might have a proper name, but if so, I haven’t heard it. The idea is fairly simple, and fairly obvious: in nearly every study that has a control group, the control group shows better outcomes than the statistical averages.

In cases where control groups receive a placebo treatment, this discrepancy can be attributed to the placebo effect. But the effect persists even when there is no intervention whatsoever. It seems that merely being enrolled in a study is enough to create an increase in whatever outcome is being measured over what would normally be expected.
This could be a subtler extension of the placebo effect. We are constantly finding that placebo, mindfulness, and the like, while never substitutes for actual treatment, do have a measurable positive impact. But there is probably a simpler explanation: these people know they are being watched. Even when data is anonymized, and there are no consequences for bad outcomes, there is still the pressure of being under surveillance. And I suspect it has to do with an obligation that study participants feel to be worthy of the research being conducted.
I have heard variations on this theme slipped subtly into enough different discussions that I have started to pick up on it lately. The idea is similar to ones raised about the obligations that patients often feel to fundraise and advocate on behalf of the organizations that bankroll research for their diseases; not mere camaraderie between people with shared experiences, but a sense of guilt for receiving tangential benefits from others’ work.
To briefly repeat what I have said in previous Debriefing articles: this mindset is embedded deep in the collective psyche of the communities with which I have experience, and in some instances is actively exploited by charity and advocacy organizations. The stereotype of the sick and disabled as exceptionally kindhearted and single-mindedly dedicated to fundraising and/or advocacy is both a cause and an effect of this cycle. The same is naturally true of attention from healthcare professionals and researchers.
Frequent patients, especially in the United States, are constantly reminded of the scarcity of help. In every day-long phone call with insurance, in every long wait in the triage room, and every doctor visit cut short because appointments are scheduled back to back months in advance, we are reminded that what we need is in high demand and short supply. We are lucky to be able to get what we need, and there are plenty of others who are not so fortunate. Perhaps, on paper, we are entitled to life, liberty, and the pursuit of happiness; to a standard of healthcare and quality of life; but in reality, we are privileged to get even as little as we do.
There is therefore great pressure to be deserving of the privileges we have received. To be worthy of the great collective effort that has gone into keeping us alive. This is even more true where research is concerned; where the attention of the world’s brightest minds and taxpayer dollars are being put forth in a gamble to advance the frontiers of humanity. Being part of these efforts is something that is taken extremely seriously by many patients. For many of them, who are disqualified from military service and unable to perform many jobs unaided, contributing to scientific research is the highest calling they can answer.
This pressure manifests itself in many different ways. In many, it inspires an almost religious zeal; in others, it is a subtler, possibly even unconscious, response. In some cases, this pressure to live up to the help given by others stokes rebellion, displayed either as antisocial antipathy or even self-harming tendencies. No one I have ever spoken to on the matter has yet failed to describe this pressure or agree that it exists in their life.
Moreover, the effect seems to be self-reinforcing; the more attention a person receives, the more they feel an obligation to repay it, often through volunteering in research. This in turn increases the amount of attention received, and so on. As noted, participation in these studies seems to produce a statistically significant positive impact in whatever is being measured, completely divorced from any intervention or placebo effect.
We know that people behave differently when they feel they are being watched, and even more so when they feel that the people watching have expectations. We also know that prolonged stress, such as the stress of having to keep up external appearances over an extended period, takes a toll, both psychologically and physiologically, on the patient. We must therefore ask at what cost this additional scrutiny, and its marginal positive impact on health results, comes.
We will probably never have a definitive answer to these sorts of questions. The intersection of chronic physical conditions and mental health is convoluted, to say the least. Chronic health issues can certainly add stress and increase the risk of mental illness, yet at the same time make that illness harder to isolate and treat. After all, can you really say a person is unreasonably anxious when they worry about a disease that is currently killing them? In any case, if we are not likely to ever know for sure the precise effects of these added stresses, then we should at least commit to making them a known unknown.

The N-Word




The worst insult that can be leveled against a person with chronic illness is, without a doubt, the n-word. Oh sure, there are those who defend its use, citing that it has, or possibly had, a proper context. That it evolved from scientific, then clinical, jargon, before finding its way into use as a common slur. They cite dozens of other slurs that are casually slung against the sick and disabled, and ask how such an innocuous phrase with a relatively short history can compare with a more traditionally vulgar term with more malicious intent. But these people are wrong. There is, in the present English lexicon, no word known to me which is worse than the n-word.

Noncompliant.

There is so much wrong with this word that it’s hard to know where to start. Much as it pains me to dwell on it, I think it would be helpful to break it down a bit, and explain why it is such a toxic word; a radiological bomb of a slur, causing and spreading otherwise invisible pain and suffering long after it is used.

It first assumes a moral high ground, implying that the person using it is in a position to dictate morality unto the patient. Then it assumes total control of the patient’s affairs, with the implication that the patient’s only role in their own health is to comply. As though healthcare were run by Hydra.

“Your vital signs for this quarter aren’t where we want them. I want you to take a deep breath, and clear your mind. You know what’s best. What’s best is you comply.”

At best, it assumes that a failure to follow instructions is solely the fault of the patient, as though there is no force in the cosmos, let alone everyday life, that could interfere with the timely execution of a medical regimen. Never mind that the kind of regimens we’re talking about – mixing chemicals into usable medicine, drawing up precise doses in syringes, and delivering them several times a day – are routines that require months of training for healthcare workers at minimum, yet patients are lucky if they get half an hour of professional training before being tossed back into the wild.

No, clearly, if you can’t keep to a schedule drawn up by a pencil pusher in a lab, because when the allotted hour rolls around you’re not in a good place to be dealing with sterile medical equipment, never mind your own mental state, it’s your own fault. You clearly don’t care about your own health as much as this doctor that you see once every three months does. So childish are you that you can’t re-organize your entire life to be at the beck and call of this disease.

That is the implication of noncompliance: either willful petulance, childish cluelessness, or, at worst, mental derangement. For indeed, noncompliance is often colloquially synonymous with self-harm. Well, obviously, we can’t let you have input on your own care if you’re suicidal. Clearly the solution here is to double down and set tighter targets. The n-word is immensely destabilizing in this way, as it insinuates that the patient is incompetent in a way that is extremely difficult to argue against, at least from the patient’s perspective.

All of this assumes that the problem is with the execution of the treatment rather than the treatment itself. For, all too often, patient noncompliance is tossed off as a face-saving excuse by doctors who aren’t getting results from the treatment they prescribed. After all, few patients will actually admit to disregarding medical advice, and so the n-word is often a deduction by doctors based on clinical results rather than a patient’s actual activities. The problem is, clinical results can have multiple causes and interpretations.

These issues are not mutually exclusive. A patient may easily stop following their regimen once they find it stops working for them, or once they find they can no longer endure the problems of trying to slot their regimen into their life. And mental health issues which are preventing the execution of a patient’s medical regimen are as much a problem for the doctor as for the patient.

A doctor that leaves a patient with a treatment that does not work for them, for whatever reason, has not done their job. But the nature of the n-word is that it is a patient’s problem. Or possibly, it is a problem with the patient, always outside the purview of the doctor’s job.

But too often all this is ignored. The clinician sees bad test results, and sees that they prescribed the treatment which seemed reasonable to them at the time, and so concludes that the patient is noncompliant, jots down a note to that effect, and gives the patient a stern lecture before sending them on their way and encouraging them to do better next time.

There is so much wrong with this situation, and with the dynamic it feeds, which is at best unproductive, and at worst borderline abusive. But by far the worst part is the impact on future healthcare. Because a patient that is labeled as noncompliant is marked. In the United States, this can cause serious issues with insurance and pharmacies in getting medication. The mechanisms by which these problems occur are designed to mitigate abuse of particularly dangerous prescription medications, such as opioid painkillers and antibiotics, which I suppose is fair enough, but because of how medicine in the US works, are applied to anything requiring a prescription.

For people who need their medication to survive, this can be life threatening. As noted previously, being labeled noncompliant can happen even if a patient is doing their absolute best. For those without the resources to switch doctors or fight insurance diktats, the n-word can have deadly consequences, and what’s more, can make patients think they deserve it.

To call a patient noncompliant is, in a single word, to strike at everything they have done to make a life for themselves, and to imply that they are not worthy of it. It is an awful slur borne of misguided assumptions and a perspective on healthcare that privileges doctors over patients. It is a case study in so many of the problems of the capitalist healthcare system. Unfortunately, this word will not go away simply because we all acknowledge that it is awful.

For indeed, the things that make the n-word terrible are in many cases only microcosms of the larger forces which cause suffering to those with chronic health issues. The path to eradicating this slur, therefore, is a combination of renewed curative effort, reforms to the healthcare system, and a greater focus on the patient perspective.

There is Power in a Wristband




Quick note: this post deals with issues of law and medical advice. While I always try to get things right, I am neither a doctor nor a lawyer, and my blog posts are not to be taken as such advice.

Among people I know for whom it is a going concern, medical identification is a controversial subject. For those not in the know, medical identification is a simple concept: some preestablished method to convey to first responders and medical personnel, in the event that the patient is incapacitated, the presence of a condition which may either require immediate, specific treatment (say, a neurological issue that requires the immediate application of a specific rescue medication), or impact normal treatment (say, an allergy to a common drug).

The utilitarian benefits are obvious. In an emergency situation, where seconds count, making sure that this information is discovered and conveyed can, and often does, make the difference between life and death, and prevent delays and diversions that are costly in time, money, and future health outcomes. The importance of this element cannot be overstated. There are also some purported legal benefits to having pertinent medical information easily visible for law enforcement and security to see. On the other hand, some will tell you that this is a very bad idea, since it gives legal adversaries free evidence about your medical conditions, which is something they’d otherwise have to prove.

The arguments against are equally apparent. There are obvious ethical quandaries in compelling a group of people to identify themselves in public, especially as in this case it pertains to normally confidential information about medical and disability status. And even where the macro-scale political considerations do not enter it, there are the personal considerations. Being forced to make a certain statement in the way one dresses is never pleasant, and retaining that mode of personal choice and self expression can make the risk of exacerbated medical problems down the line seem like a fair trade off.

I can see both sides of the debate here. Personally, I do wear some medical identification at all times – a small bracelet around my left wrist – and have more or less continuously for the last decade. It is not so flamboyantly visible as some people would advise. I have no medical alert tattoos, nor embroidered jacket patches. My disability is not a point of pride. But it is easily discoverable should circumstances require it.

Obviously, I think that what I have done and continue to do is fundamentally correct and right, or at least, is right for me. To do less seems to me foolhardy, and to do more seems not worth the pains required. The pains it would cause me are not particularly logistical; rather, they are social: the cost of my disability always being the first impression and the first topic of conversation.

It bears repeating that, though I am an introvert in general, I am not particularly bashful about my medical situation. Provided I feel sociable, I am perfectly content to speak at length about all the nitty gritty details of the latest chapter in my medical saga. Yet even I have a point at which I am uncomfortable advertising that I have a disability. While I am not averse to inviting empathy, I do not desire others to see me as a burden, nor for my disability to define every aspect of our interactions any more than the fact that I am left handed, or brown eyed, or a writer. I am perfectly content to mention my medical situation when it comes up in conversation. I do not think it appropriate to announce it every time I enter a room.

Since even I feel this way, and I am literally a spokesman and disability advocate, it is easy to understand that there are many who do not feel it appropriate for them to say even as much as I do. Some dislike the spotlight in general. Others are simply uncomfortable talking about a very personal struggle. Still others fear the stigma and backlash associated with any kind of imperfection and vulnerability, let alone one as significant as a bona fide disability. These fears are not unreasonable. The decision to wear medical identification, though undoubtedly beneficial to health and safety, is not without a tradeoff. Some perceive that tradeoff, rightly or wrongly, as not worth the cost.

Even though this position is certainly against standard medical advice, and I would never advocate going against medical advice, I cannot bring myself to condemn those who disregard this kind of advice with the same definitiveness with which I condemn, say, refusing to vaccinate for non-medical reasons, or insurance companies compelling patients into certain medical decisions for economic reasons. The personal reasons, even though they are personal and not medical, are too close to home. I have trouble finding fault with a child who doesn’t want to wear an itchy wristband, or a teenager who just wants to fit in and make their own decisions about appearance. I cannot fault them for wanting what by all rights should be theirs.

Yet the problem remains. Without proper identification it is impossible for first responders to identify those who have specific, urgent needs. Without having these identifiers be sufficiently obvious and present at all times, the need for security and law enforcement to react appropriately to those with special needs relies solely on their training beforehand, and on them trusting the people they have just detained.

In a perfect world, this problem would be completely moot. Even in a slightly less than perfect world, where all these diseases and conditions still existed, but police and first responder training was perfectly robust and effective, medical identification would not be needed. Likewise, in such a world, the stigma of medical identification would not exist; patients would feel perfectly safe announcing their condition to the world, and there would be no controversy in adhering to the standard medical advice.

In our world, it is a chicken-egg problem, brought on by understandable, if frustrating, human failings at every level. Trying to determine fault and blame ultimately comes down to questioning the nitty gritty of morality, ethics, and human nature, and as such, is more suited to an exercise in navel gazing than an earnest attempt to find solutions to the problems presently faced by modern patients. We can complain, justifiably and with merit, that the system is biased against us. However such complaints, cathartic though they may be, will not accomplish much.

This vicious cycle, however, can be broken. Indeed, it has been broken before, and recently. Historical examples abound of oppressed groups coming to break the stigma of an identifying symbol, and claiming it as a mark of pride. The example that comes most immediately to mind is the recent progress that has been made for LGBT+ groups in eroding the stigma of terms which quite recently were used as slurs, and in appropriating symbols such as the pink triangle as a symbol of pride. In a related vein, the Star of David, once known as a symbol of oppression and exclusion, has come to be used by the Jewish community in general, and Israel in particular, as a symbol of unity and commonality.

In contrast to such groups, the road for those requiring medical identification is comparatively straightforward. The disabled and sick are already widely regarded as sympathetic, if pitiful. Our symbols, though they may be stigmatized, are not generally reviled. When we face insensitivity, it is usually not because those we face are actively conspiring to deny us our needs, but simply because we may well be the first people they have encountered with these specific needs. As noted above, this is a chicken-egg problem, as the less sensitive the average person is, the more likely a given person with a disability that is easily hidden is to try and fly under the radar.

Imagine, then, if you can, such a world, where a medical identification necklace is as commonplace and unremarkable as a necklace with a religious symbol. Imagine seeing cars in a parking lot with stickers announcing the medical condition of a driver or passenger with the same regularity as advertisements for a political cause or a vacation destination. Try to picture a world where people are as unconcerned about seeing durable medical equipment as American flag apparel. It is not difficult to imagine. We are still a ways away from it, but it is within reach.

I know that this world is within reach, partially because I myself have seen the first inklings of it. I have spent time in this world, at conferences and meetings. At several of these conferences, wearing a colored wristband corresponding to one’s medical conditions is a requirement for entry, and here it is not seen as a symbol of stigma, but one of empowerment. Wristbands are worn in proud declaration, amid short sleeved shirts for walkathon teams, showing bare medical devices for all the world to see.

Indeed, in this world, the medical ID bracelet is a symbol of pride. It is shown off amid pictures of fists clenched high in triumph and empowerment. It is shown off in images of gentle hands held in friendship and solidarity.

It is worth mentioning with regards to this last point, that the system of wristbands is truly universal. That is to say, even those who have no medical afflictions whatsoever are issued wristbands, albeit in a different color. To those who are not directly afflicted, they are a symbol of solidarity with those who are. But it remains a positive symbol regardless.

The difference between these wristbands, which are positive symbols, and ordinary medical identification, which is at best inconvenient and at worst oppressive, has nothing to do with the physical discrepancies between them, and everything to do with the attitudes that are attached by both internal and external pressure. The wristbands, it will be seen, are a mere symbol, albeit a powerful one, onto which we project society’s collective feelings towards chronic disease and disability.

Medical identification is in itself amoral, but in its capacity as a symbol, it acts as a conduit to amplify our existing feelings and anxieties about our condition. In a world where disabled people are discriminated against, left to go bankrupt buying medication for their survival, and even targeted by extremist groups, it is not hard to find legitimate anxieties to amplify in this manner. By contrast, in an environment where the collective attitude towards these issues is one of acceptance and empowerment, these projected feelings can be equally positive.

The Moral Hazard of Hope




Suppose that five years from today, you will receive an extremely large windfall. The exact number isn’t important, but let’s just say it’s large enough that you’ll never have to budget again. Not technically infinite, because that would break everything, but for the purposes of one person, basically undepletable. Let’s also assume that this money becomes yours in such a way that it can’t be taxed or swindled away. This is also an alternate universe where inheritance and estates don’t exist, so there’s no scheming among family, and no point in considering them in your plans. Just roll with it.

No one else knows about it, so you can’t borrow against it, nor is anyone going to treat you differently until you have the money. You still have to be alive in five years to collect and enjoy your fortune. Freak accidents can still happen, and you can still go bankrupt in the interim, or get thrown in prison, or whatever, but as long as you’re around to cash the check five years from today, you’re in the money.

How would this change your behavior in the interim? How would your priorities change from what they are?

Well, first of all, you’re probably not going to invest in retirement, or long term savings in general. After all, you won’t need to. In fact, further saving would be foolish. You’re not going to need that extra drop in the bucket, which means saving it would be wasting it. You’re legitimately economically better off living the high life and enjoying yourself as much as possible without putting yourself in such severe financial jeopardy that you would be increasing your chances of being unable to collect your money.

If this seems insane, it’s important to remember here, that your lifestyle and enjoyment are quantifiable economic factors (the keyword is “utility”) that weigh against the (relative and ultimately arbitrary) value of your money. This is the whole reason why people buy stuff they don’t strictly need to survive, and why rich people spend more money than poor people, despite not being physiologically different. Because any money you save is basically worthless, and your happiness still has value, buying happiness, expensive and temporary though it may be, is always the economically rational choice.

This is tied to an important economic concept known as moral hazard: a condition where the normal risks and costs involved in a decision fail to apply, encouraging riskier behavior. I’m stretching the idea a little here, since it usually refers to more direct situations. For example, if I have a credit card that my parents pay for, to be used “for emergencies”, and I know I’m never going to see the bill, because my parents care more about our family’s credit score than most anything I would think to buy, then that’s a moral hazard. I have very little incentive to do the “right” thing, and a lot of incentive to do whatever I please.

There are examples in macroeconomics as well. For example, many say that large corporations in the United States are caught in a moral hazard problem, because they know that they are “too big to fail”, and will be bailed out by the government if they get into serious trouble. As a result, these companies may be encouraged to make riskier decisions, knowing that any profits will be massive, and any losses will be passed along.

In any case, the idea is there. When the consequences of a risky decision become uncoupled from the reward, it can be no surprise when rational actors make riskier decisions. If you know that in five years you’re going to be basically immune to any hardship, you’re probably not going to prepare for the long term.

Now let’s take a different example. Suppose you’re rushed to the hospital after a heart attack, and diagnosed with a heart condition. The condition is minor for now, but could get worse without treatment, and will get worse as you age regardless.

The bad news is, in order to avoid having more heart attacks, and possible secondary circulatory and organ problems, you’re going to need to follow a very strict regimen, including a draconian diet, a daily exercise routine, and a series of regular injections and blood tests.

The good news, your doctor informs you, is that the scientists, who have been tucked away in their labs and getting millions in yearly funding, are closing in on a cure. In fact, there’s already a new drug that’s worked really well in mice. A researcher giving a talk at a major conference recently showed a slide of a timeline that estimated FDA approval in no more than five years. Once you’re cured, assuming everything works as advertised, you won’t have to go through the laborious process of treatment.

The cure drug won’t help if you die of a heart attack before then, and it won’t fix any problems with your other organs if your heart gets bad enough that it can’t supply them with blood, but otherwise it will be a complete cure, as though you were never diagnosed in the first place. The nurse discharging you tells you that most organ failure doesn’t appear until patients have had the disease for at least a decade, so as long as you can avoid dying for half that long, you’ll be fine.

So, how are you going to treat this new chronic and life threatening disease? Maybe you will be the diligent, model patient, always deferring to the most conservative and risk averse in the medical literature, certainly hopeful for a cure, but not willing to bet your life on a grad student’s hypothesis. Or maybe, knowing nothing else on the subject, you will trust what your doctor told you, and your first impression of the disease, getting by with only as much invasive treatment as you can get away with to avoid dying and being called out by your medical team for being “noncompliant” (referred to in chronic illness circles in hushed tones as “the n-word”).

If the cure does come in five years, as happens only in stories and fantasies, then either way, you’ll be set. The second version of you might be a bit happier from having more fully sucked the marrow out of life. It’s also possible that the second version would have also had to endure another (probably non-fatal) heart attack or two, and dealt with more day to day symptoms like fatigue, pains, and poor circulation. But you never would have really lost anything for being the n-word.

On the other hand, if by the time five years have elapsed, the drug hasn’t gotten approval, or quite possibly, hasn’t gotten close after the researchers discovered that curing a disease in mice didn’t also solve it in humans, then the differences between the two versions of you are going to start to compound. It may not even be noticeable after five years. But after ten, twenty, thirty years, the second version of you is going to be worse for wear. You might not be dead. But there’s a much higher chance you’re going to have had several more heart attacks, and possibly other problems as well.

This is a case of moral hazard, plain and simple, and it does appear in the attitudes of patients with chronic conditions that require constant treatment. The fact that, in this case, the perception of a lack of risk and consequences is a complete fantasy is not relevant. All risk analyses depend on the information that is given and available, not on whatever the actual facts may be. We know that the patient’s decision is ultimately misguided because we know the information they are being given is false, or at least, misleading, and because our detached perspective allows us to take a dispassionate view of the situation.

The patient does not have this information or perspective. In all probability, they are starting out scared and confused, and want nothing more than to return to their previous normal life with as few interruptions as possible. The information and advice they were given, from a medical team that they trust, and possibly have no practical way of fact checking, has led them to believe that they do not particularly need to be strict about their new regimen, because there will not be time for long term consequences to catch up.

The medical team may earnestly believe this. It is the same problem one level up; the only difference is, their information comes from pharmaceutical manufacturers, who have a marketing interest in keeping patients and doctors optimistic about upcoming products, and researchers, who may be unfamiliar with the hurdles in getting a breakthrough from the early lab discoveries to a consumer-available product, and whose funding is dependent on drumming up public support through hype.

The patient is also complicit in this system that lies to them. Nobody wants to be told that their condition is incurable, and that they will be chronically sick until they die. No one wants to hear that their new diagnosis will either cause them to die early, or live long enough for their organs to fail, because even the most rigid medical plan and the best tools available simply cannot completely mimic the human body’s natural functions. Indeed, it can be argued that telling a patient they will still suffer long term complications, whether in ten, twenty, or thirty years, almost regardless of their actions today, will have much the same effect as telling them that they will be healthy regardless.

Given the choice between two extremes, optimism is obviously the better policy. But this policy has a tradeoff: it creates a moral hazard of hope. Ideally, we would convey an optimistic perspective that also maintains an accurate view of the medical prognosis, balancing the need for bedside manner against incentivizing patients to take the best possible care of themselves. Obviously this is not an easy balance to strike, and it will vary from patient to patient. The happy-go-lucky might need to be brought down a peg or two with a reality check, while the nihilistic might need a spoonful of sugar to help the medicine go down. Finding this middle ground is not a task to be accomplished by a practitioner in a single visit, but a process to be worked out over the entire course of treatment, ideally with a diverse and experienced team that includes mental health specialists.

In an effort to finish on a positive note, I will point out that this is already happening, or at least, is already starting to happen. As interdisciplinary medicine gains traction, as patient mental health becomes more of a focus, and as patients with chronic conditions begin to live longer, more hospitals and practices are working to make a positive and constructive mindset for self-care a priority, alongside educating patients on the actual logistics of self-care. Support is easier to find than ever, especially with organized patient conferences and events. This problem, much like the conditions that cause it, is chronic, but it is manageable with effort.


The Debriefing

Earlier this month was another disability conference. Another exchange of ideas, predictions, tips, tricks, jokes, and commiseration. Another meticulously apportioned, carb-counted buffet of food for thought, and fodder for posts.

As my comrades working on the scientific research tell me, two data points are still just anecdotal. Even so, this is the second time out of two conferences that I’ve come back with a lot to say. Last time, my takeaways mostly revolved around a central theme of sorts, enough so that I could structure them as a sequential series. This time there were still lots of good ideas, but they’re a little more scattershot, and harder to weave into a consistent narrative. So I’m going to try something different, again.

I’m starting a new category of semi-regular posts, called “The Debriefing” (name subject to change), to be denoted with a special title, and possibly fancy graphics. These will focus on topics which were points of discussion or interest at conferences, events, and such, that aren’t part of another series, and which have managed to capture my imagination. Topics which I’m looking forward to (hopefully) exploring include things like:

– The moral hazard of hoping for a cure: how inspiring hope for a cure imminently, or at least in a patient’s lifetime, can have perverse effects on self-care

– Controversy over medical identification: the current advice on the subject, and the legal, political, social, and psychological implications of following it

– Medical disclosure solidarity: suggestions for non-disabled job applicants to help strengthen the practical rights of disabled coworkers

– The stigma of longevity: when and why the chronically ill don’t go to the doctor

– Why I speak: how I learned to stop worrying and love public speaking

At least a couple of these ideas are already in the pipeline, and are coming up in the next few days. The rest, I plan to write at some point. I feel reasonably confident listing these topics, despite my mixed record on actually writing the things I say I’m going to write, mostly because these are all interesting topics that keep coming up. And given that I plan to attend several more conferences and events in the near future, even if I don’t get to them soon, I fully expect they will come up again.