Real Men Aren’t Scared of Needles

Since most of my readers access this site from countries where the COVID-19 vaccine is now available, I’m here to remind you to get vaccinated when it’s your turn. If you’re over twelve in the United States, you are eligible now. While there are many things in life that can be safely postponed or procrastinated, this isn’t one of them. Getting as many people vaccinated as quickly as possible is humanity’s last best chance to quash this virus before it becomes endemic, which would make it impossible to go back to normal. 

You’ve probably already heard this argument from better-qualified sources than me. And let’s be real: if you haven’t listened to epidemiological statistics or long-term morbidity case studies coming from the CDC, you have no reason to listen to them coming from me. So instead, I’m going to present an argument that you probably won’t see in a prime-time TV spot any time soon. 

You should get the vaccine because getting the virus will ruin your sex life. 
I mean, you should also get it because the virus might kill you, or kill other people, or leave you unable to climb stairs, and so on. But if those stories haven’t convinced you already, clearly you have a different set of priorities. So if you need a better reason than your own survival: you should get vaccinated because more and more COVID-19 survivors are developing sexual dysfunction, in particular male erectile dysfunction. Not just from running out of breath or getting tired, either, but from the virus itself being present long after the acute infection phase. Tissue samples confirm the presence of COVID-19 spike proteins obstructing normal arousal mechanisms.

Don’t take my word for it. The pilot study is open access, and not that long to read by the standards of journal articles. Yes, there is some medical jargon, and there’s the usual amount of carefully worded and qualified statements saying that more study is needed, but the data speaks for itself. It’s incredibly obvious, isn’t it? A novel virus is introduced into our precious bodily fluids without the knowledge of the individual, certainly without any choice. Luckily our scientists are able to interpret the resulting loss of essence correctly. 

There are obviously public health implications in these findings that viral particles are lingering in certain tissues and obstructing function after the acute infectious period. But the American public has demonstrated in its actions that it doesn’t really follow the nuance of public health, or scientific studies, or systemic issues in general. The only people who care about things like disability adjusted life expectancy or long term national stability are over-educated bleeding-heart know-it-alls. On the other hand, protecting one’s manhood from the virus’s attempt to sap and impurify our precious bodily fluids is as American as apple pie. 

On Social Distancing Vis-à-vis Communism

I wish to try and address some of the concerns raised by protests against measures taken to protect public health in the wake of the ongoing COVID-19 pandemic. Cards on the table: I think people who are going out to protest these measures are, at best, foolhardy and shortsighted. It’s hard for me to muster sympathy for their cause. Still, calling someone names doesn’t often win hearts and minds. So I’m going to try and do that thing that people tell me I’m good at: I’m going to write about the situation from where I stand, and try to understand where these people are coming from, in the hopes that I can, if not change behaviors, at least foster understanding among people who may be as mystified and apoplectic at my position as I am at theirs. 

I’m not going to address any conspiracy theories, including the conspiracy theory that these measures are part of some ill-defined plan of a shadowy elite to seize control. Mostly because, from where I stand, it’s a moot point. Even taking all of the claims about evil motivations at face value, even if we assume that everyone in government secretly wants to live in a totalitarian dictatorship and they see this as their chance, that doesn’t really affect the reality. The contents of my governor’s soul are between him and God [1]. He says he wants to save lives, and he’s put in place policies to mitigate the spread of disease. People are dying from COVID-19; maybe slightly more or fewer people than the numbers being reported, but definitely people [2], including people I know. 

For context, since the beginning of this episode, I have had friends and acquaintances die, and watched other friends and acquaintances go from being student athletes to being so sick that they can’t sit up to type on a laptop. My university campus, the place where I learn, interact with others, and often write these posts, is split between being field hospitals, quarantine lodgings for hospital workers, and morgues. Because there aren’t enough staff, undergraduate students, even freshmen like me, who have any experience in nursing or medicine are called on to volunteer as emergency workers, facing the same conditions, often without proper equipment, that have claimed so many lives. Every night, from my own bedroom, I hear the sirens of ambulances rushing back and forth between the retirement village and the hospital. We’re not even the epicenter, and things are that bad here. 

So the virus is very real. The toll is very real. The danger is real. We can quibble over who bears responsibility for what later. There will be plenty of time for anger, grief, and blame; plenty of time to soberly assess who overreacted, who under-reacted, who did a good job, and who ought to be voted out. I’m counting on it. In the now, we know that the virus spreads by close and indoor contact [2][3]. We know that there are only so many hospital beds, and we have no way to protect people or cure them [4][5]. It stands to reason that if we want to save lives, we need to keep people apart. And if we believe that a function of government is looking out for and protecting lives, which even most libertarians I know agree on, then it stands to reason that it’s the government’s job to take action to save as many lives as possible. Yes, this will require new and different exercises of power which might in another context be called government overreach. But we live in new and different times. 

Not everyone is able to comfortably come to terms with change. I get it. And if I’m really honest, I’m not happy with it either. A lot of people who argue for shutdowns try to spin it as a positive thing, like a children’s television episode trying to convince kids that, hey, cleaning up your room is actually fun, and vegetables are delicious. Look at the clear skies, and the dolphins in the Hudson River. Staying at home makes you a hero; don’t you want to feel like a hero? And yeah, there are silver linings, and reasons why you can look on the bright side. For some people looking for that bright side is a coping mechanism. But truth be told, mostly it sucks. Not being able to hug your friends, or eat out at a restaurant, or just hang out in public, sucks. You’re not going to get around that. And a lot of people are angry. People feel deprived and cheated.
And you know what? That’s fine. You’re allowed to feel angry, and cheated. Being upset doesn’t make you a bad person. Your experiences and feelings are valid, and you’re allowed to pout and stomp and scream and shout.

That’s fine. Let it out, if you think it’ll make you feel better. You’re right, it’s not fair. Life isn’t fair, good people are suffering, and that’s infuriating. Unfortunately (and I do mean this sincerely), it won’t change anything. The virus has made it abundantly clear that it doesn’t care about our feelings, only our behavior. However we feel, if we want to save people, we need to stay apart. If we support the idea that governments should look out for people, we should insist that they lend their power to these measures. We can still hate being cooped up. But we need to understand that this is the lesser of the evils. Whether it takes a week, a month, or even a year, the alternative of massive death needs to be ruled out.

Some people have raised the argument that, even though we care about human lives, Americans need to work. The implication that Americans need to work, as opposed to, say, just kinda wanting to work, implies a kind of right. Maybe not as absolute as free speech, or as technical as the right to a trial by a jury of peers, but maybe something akin to a right to privacy; a vague but agreed upon notion that we have a general right to strive for something. Of course, no right is truly absolute. Even free speech, the one that we put first in our bill of rights, and generally treat as being the most inviolable, has its limits. As a society we recognize that in times of war, rebellion, or public danger, our rights are not absolute. The police don’t have to Mirandize you to ask where the bomb is, or stop chasing an armed suspect because they ran into a private home [6]. 

Hopefully, even if we may, as a matter of politics, quibble on where the exact lines are, we can all concede that rights are not absolute, and having exceptions for a larger purpose is not advocating tyranny. This same line of reasoning would apply to any previously undefined right to work as well. And I think the basis for why the current pandemic constitutes such an exception is pretty clear. We can have respectful disagreements about what measures are useful in what areas, but when the overarching point is that we need to minimize human contact for public safety, it seems like that covers most things in dispute. Again, you don’t have to like it. You’re welcome to write a response. But do so from your own home. If you feel compelled to protest something specific, then protest safely, but don’t sabotage the efforts of people trying to make this go away.

Maybe you’re thinking: Okay, that sounds nice, but I actually need to work. As in, the bills don’t stop coming, and this stimulus check isn’t going to cut it for much longer. Life doesn’t stop for illness. Even in localities that have frozen certain bills and have good food banks, there are still expenses. In many places, not enough has been done to allow people who want to do the right thing to be able to do so. Not everyone can work from home, and in a tragic irony, people who live paycheck to paycheck are less likely to be able to work from home, if their jobs even exist in a telecommuting economy. For what it’s worth, I’m with the people who say this is an unfair burden. Unfortunately, as we know, life isn’t fair, and there’s not a way to reconcile saving lives and letting everyone work freely. As an aside, though I don’t think anyone genuinely believes in sacrificing lives for GDP, I’ll point out that more people getting sick and dying actually costs jobs in the long run [7][8]. Economists agree that the best way to get everyone back to work is to devote as much of our resources as possible to fighting this virus.

People say we can’t let the cure be worse than the disease, and although I disagree with the agenda for which this is a talking point, I actually agree with the idiom. Making this a choice between working class families starving, and dying of disease is a no-win scenario, and we do need to weigh the effects of cutting people off. That doesn’t make the virus the lesser of the evils, by any stretch of the imagination. Remember, we haven’t actually ruled out the “Millions of American Deaths” scenario if we go back to regular contact patterns, we’ve just put it off for now. That’s what flattening the curve means; it’s an ongoing process, not a one and done effort [9]. Saving lives is a present tense endeavor, and will be for some time. Still, a cost-benefit analysis requires that we understand the costs. People are losing jobs, and suffering for it, and government policy should take that into account. 

Here’s where I diverge from others: keeping things shut down does not necessarily have to mean that people go hungry. Rather than ease lockdown restrictions, this is where I would say governments, both state and federal, need to be doing more while they’re telling people to stay home. It’s not fair to mandate people stay at home while their livelihoods depend on getting out and working; agreed, but there’s more than one way to neutralize that statement. The government could scale up the stimulus checks, giving every American an emergency basic income. Congress could suspend the debt limit and authorize special bonds akin to war bonds to give unemployment and the Paycheck Protection Program as much funding as they need, removing the bottleneck for businesses. Or, you could attack the problem from the opposite end: mandate a halt on payments for things like rent, mortgages, utilities, and so on, and activate emergency nutrition programs drawn up by the Pentagon to keep Americans fed during a nuclear winter. Common carriers such as utilities, telecoms, delivery companies, and other essential services could be placed under temporary government control through existing emergency powers if necessary. 

Such a mass mobilization wouldn’t be unprecedented in American history. The world wars and the New Deal show that it can be done while maintaining democratic governance. The measures wouldn’t need to be permanent, just for the duration of the crisis created by the pandemic. There’s a good historical case that a strong response would benefit our economic recovery once this passes [8]. You wouldn’t necessarily need to do all of the things I mentioned; you could tailor it to fit demands in specific areas. The point is, people don’t need to starve. The trade off only exists in the system we’ve constructed for ourselves. That system is malleable, even if we don’t often view it as such, because we so rarely get to a point like this. The lockdown is easier to see as malleable, because it’s recent, and we can remember a time before it, but there’s a much stronger scientific basis for why we need to keep it in place, at least for now.

I’ll address one more point, and that is the argument that, material need or no, people have a deeper need, and by implication a right, to get out and try to make a living in the world. This is subtly different from the idea that people have a default legal right to do as they will, as covered earlier. By contrast this strikes at a deeper, philosophical argument that people have a need to contribute positively. The idea is that people simply go stir-crazy, and that television and video games lack that certain element of, as Aristotle put it, eudaimonia, the joy achieved by striving for a life well lived [10]. I think this is what people are getting at, at least the people who have really sat down and thought about it, when they decry increasing government dependence while life is under quarantine. They probably understand that people need to eat, and don’t want anyone to die, but, deeper than any legal right, they are concerned that if this state of affairs drags on, people will stop striving, and lose that spark that drives the human spirit. People need to be able to make their own lives, to give them meaning. 

Expressed in philosophical terms, I’m more sympathetic to this argument than my politics might suggest. I agree that people need meaning in their lives. I even agree that handouts don’t provide that meaning the same way that a successful career does. It is human nature to yearn to contribute, not just survive, and for a lot of people, how they earn money outside the home is what they see as their contribution; the value they add and the proof of their worth. Losing that is more than just tragic, it’s existentially terrifying. I remember the upheaval I went through when it became clear I wasn’t going to be able to graduate on time because of my disability, and probably wouldn’t get into the college on which I had pinned my hopes and dreams as a result. I had put a lot of my value on being a perfect student, and having that taken away from me was traumatic in its way. I questioned what my value was if society didn’t acknowledge me for being smart; how could I be a worthwhile person if society rejected the things I put my work into? Through that prism, I can almost understand how some people might be more terrified of the consequences of a shutdown than of the virus.

The idea that work gives human life meaning isn’t new. Since the industrial revolution created the modern concept of the career, people have been talking about how it relates to our philosophical worth. But let’s tug on that thread a little longer. Before any conservative pundits were using the human value of work to attack government handouts, there was a German philosopher writing about the consequences of a society which ignored the dislocation and alienation which occurred when the ruling class prevented people from meaningful work. He used a German term, Entfremdung der Gattungswesen, to describe the deprivation of the human soul which occurs when artificial systems interfere in human drives. He argued that such measures were oppressive, and based on his understanding of history would eventually end in revolution. 

That philosopher was Karl Marx. He suggested that by separating the worker from the means of producing their livelihood, the product of their labor, the profits thereof, and the agency to work on their own terms, the bourgeoisie deny the proletariat something essential to human existence [11]. So I guess that protester with the sign reading “social distancing = communism” might be less off the wall than we all thought. Not that social distancing is really communist in the philosophical sense; rather the contrary: social distancing underlines Marxist critiques of capitalism. True to Marxist theory, the protester has achieved consciousness of the class inequities perpetuated by the binding of exploitative wage labor to the necessities of life, and is rallying against the dislocation artificially created by capitalism. I suspect they probably wouldn’t describe themselves as communist, but their actions fit the profile. 

Here’s the point where I diverge from orthodox Marxism. Because, again, I think there’s more than one way to neutralize this issue. I think that work for meaning doesn’t necessarily need to be work for wages. Suppose you decoupled the drive of material needs from the drives for self improvement and worth, either by something like a universal basic income, or the nationalization and dramatic expansion of food banks, rent controls, and utility discount programs, such that a person was able to survive without working. Not comfortably, mind you, but such that starving is off the table. According to Marx this is most assuredly not communism; it doesn’t involve the worker ownership of the means of production. People still go to work and sell their labor, and market mechanisms dictate prices and reward arbitrage. 

What this does, instead, is address the critique of our current system raised by both Marx, and our protester. In addition to ensuring that no one goes hungry, it also gives the opportunity, indeed, an incentive, for individuals to find socially useful and philosophically meaningful work beyond the market. Feeling useless sitting at home? Go get on video chat and tutor some kids in something you’re good at. Go mow lawns for emergency workers in your area. Take an online class, now that lots of them are free. Make some art; join the trend of celebrities posting videos of themselves singing online. If you have any doubts that there is plenty of unpaid but necessary and useful work around the house, ask a housewife. Rather than protest the lack of a particular task, we should take this opportunity to discover what useful and meaningful work we can accomplish from home. 

The dichotomy between opening and starving is false, as is the dichotomy between deference to scientific principles and deference to philosophical ones. Those who protest one or the other appear either to represent a fringe extreme, or to misunderstand the subtleties of the problem and the multitude of measures which we may take to address it. Our individual freedoms reflect a collective responsibility and commitment to self moderation and governance, which we must now demonstrate, by showing the imagination, foresight, and willingness to sacrifice for a greater cause which has defined our human struggle. In this moment, the responsibilities to our fellow human beings outweigh some of the rights we have come to take for granted. This exigency demands a departure from our norms. We must be prepared to suspend our assumptions, and focus on what really matters. Now is the time to find meaning in things that matter to us. To demand better from our government than platitudes and guidelines. To help ourselves and our fellow human beings without prejudice. 

Works Consulted

[1] Matthew 7:1, KJV

[2] “Coronavirus Disease 2019 (COVID-19).” Centers for Disease Control and Prevention, Centers for Disease Control and Prevention, 2020, www.cdc.gov/coronavirus/2019-ncov/index.html.

[3] “Coronavirus.” World Health Organization, World Health Organization, www.who.int/emergencies/diseases/novel-coronavirus-2019.

[4] “Over the past several weeks, a mind-boggling array of possible therapies have been considered. None have yet been proven to be effective in rigorously controlled trials.” “Pursuing Safe and Effective Anti-Viral Drugs for COVID-19.” National Institutes of Health, U.S. Department of Health and Human Services, 17 Apr. 2020, directorsblog.nih.gov/2020/04/17/pursuing-safe-effective-anti-viral-drugs-for-covid-19/.

[5] “There are no drugs or other therapeutics approved by the US Food and Drug Administration to prevent or treat COVID-19. Current clinical management includes infection prevention and control measures and supportive care.” “Therapeutic Options for COVID-19 Patients.” Centers for Disease Control and Prevention, Centers for Disease Control and Prevention, 21 Mar. 2020, www.cdc.gov/coronavirus/2019-ncov/hcp/therapeutic-options.html.

[6] Burney, Nathan. “The Illustrated Guide to Law.” The Illustrated Guide to Law, 17 Apr. 2020, lawcomic.net/.

[7] Pueyo, Tomas. “Coronavirus: Out of Many, One.” Medium, Medium, 20 Apr. 2020, medium.com/@tomaspueyo/coronavirus-out-of-many-one-36b886af37e9.

[8] Carlsson-Szlezak, Philipp, et al. “What Coronavirus Could Mean for the Global Economy.” Harvard Business Review, 16 Apr. 2020, hbr.org/2020/03/what-coronavirus-could-mean-for-the-global-economy.

[9] Ferguson, Neil M, et al. “Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand.” Imperial College London, 16 Mar. 2020, https://www.imperial.ac.uk/media/imperial-college/medicine/mrc-gida/2020-03-16-COVID19-Report-9.pdf

[10] Aristotle. “Nicomachean Ethics.” The Internet Classics Archive, classics.mit.edu/Aristotle/nicomachaen.1.i.html.

[11] Marx, Karl. “The Economic and Philosophic Manuscripts of 1844.” Marxists Internet Archive, www.marxists.org/archive/marx/works/1844/manuscripts/preface.htm.

World Health Day

The following message is part of a campaign to raise public awareness and resources in light of the global threat posed by COVID-19 on World Health Day. If you have the resources, please consider contributing in any of the ways listed at the end of this post. Remember to adhere to current local health guidelines wherever you are, which may differ from those referenced in this post. 

Now that the world has woken up to the danger that we face in the COVID-19 pandemic, and world leaders have begun to grapple with the problem in policy terms, many individuals have justifiably wondered how long this crisis will last. The answer is: we don’t know. I’m going to repeat this several times, because it’s important to come to terms with it. For all meaningful purposes, we are living through an event that has never happened before. Yes, there have been pandemics this bad in the distant past, and yes, there have been various outbreaks in recent memory, but there has not been a pandemic which is as deadly, and as contagious, and which we have failed to contain so spectacularly, recently enough to use it as a clear point of reference. This means that every prediction is not just speculation, but speculation born of an imperfect mosaic. 

Nevertheless, it seems clear that unless we are willing to accept tens of millions of deaths in every country, humanity will need to settle in for a long war. To borrow the language of the US President and Queen Elizabeth, the metaphor is apt. Whether “long” means a few months or well into next year will depend on several factors, among them whether a culture which has for many decades been inculcated with the notion of personal whimsy and convenience is able to adapt to collective sacrifice. The longer we take to accept the gravity of the threat, the weaker our response will be, and the longer it will take us to recover. Right now all of humanity faces a collective choice. Either we will stubbornly ignore reality, and pay the price in human tragedy of hitherto unimaginable proportions, with repercussions for decades to come, or we will listen to experts, hunker down, give support to those who need it, and help each other through the storm. 

For those who look upon empty streets and bare shelves and proclaim the apocalypse, I have this to say: it is only the apocalypse if we make it such. Granted, it is conceivable that if we lose sight of our goals and our capabilities, either by blind panic or stubborn ignorance, we may find the structures of our society overwhelmed, and the world we know may collapse. This is indeed a possibility, but a possibility which it is entirely within our collective capacity to avoid. The data clearly shows that by taking care of ourselves at home, and avoiding contact with other people or surfaces, we can slow the spread of the virus. With the full mobilization of communities, we can starve the infection of new victims entirely. But even a partial slowing of cases buys us time. With that most valuable of currencies, we can expand hospital capacity, retool our production, and focus our tremendous scientific effort towards forging new weapons in this fight. 

Under wartime pressure, the global scientific community is making terrific strides. Every day, we are learning more about our enemy, and discovering new ways to give ourselves the advantage. Drugs which prove useful are being deployed as fast as they can be produced. With proper coordination from world leaders, production of these drugs can be expanded to give every person the best fighting chance should they become sick. The great challenges now are staying the course, winning the battle for production, and developing humanity’s super weapon.

Staying the course is fairly simple. For the average individual not working essential jobs, it means staying home, avoiding contact as much as possible, and taking care to stay healthy. For communities and organizations, it means encouraging people to stay at home by making this as easy as possible. Those working essential jobs should be given whatever resources they need to carry on safely. Those staying at home need to have the means to do so, both logistically and psychologically. Logistically, many governments are already instituting emergency financial aid to ensure the many people out of work are able to afford staying home, and many communities have used volunteers or emergency workers such as national guard troops to support deliveries of essentials, in order to keep as many people as possible at home. Psychologically, many groups are offering online activities, and many public figures have taken to providing various forms of entertainment and diversion.

Winning the battle for production is harder, but still within reach. Hospitals are very resource intensive at the best of times. Safety in a healthcare setting means the use of large amounts of single-use disposable materials, in terms of drugs and delivery mechanisms, but also personal protective equipment such as masks, gowns, and gloves. If COVID-19 is a war, ventilators are akin to tanks, but PPE are akin to ammunition. Just as it is counterproductive and harmful to ration how many bullets or grenades a soldier may need to use to win a battle, so too is it counterproductive and harmful to insist that our frontline healthcare workers make do with a limited amount of PPE. 

The size and scope of the present crisis, taken with the amount of time we have to act, demands a global industrial mobilization unprecedented during peacetime, and unseen in living memory. It demands either that individuals exhibit self discipline and a regard for the common good, or central authorities control the distribution of scarce necessities. It demands that we examine new ways of meeting production needs while minimizing the number of people who must be kept out at essential jobs. For the individual, this mobilization may require further sacrifice; during the mobilization of WWII, certain commodities such as automobiles, toys, and textiles were unavailable or out of reach. This is the price we paid to beat back the enemy at the gates, and today we find ourselves in a similar boat. All of these measures are more effective if taken calmly in advance by central government, but if they are not they will undoubtedly be taken desperately by local authorities. 

Lastly, there is the challenge of developing a tool which will put an end to the threat of millions of deaths. In terms of research, there are several avenues which may yield fruit. Many hopes are pinned on a vaccine, which would grant immunity to the uninfected, and allow us to contain the spread without mass quarantine. Other researchers are looking for a drug, perhaps an antiviral or immunomodulator, which might make COVID-19 treatable at home with a pill, much like Tamiflu blunted the worst of H1N1. Still others are searching for antibodies which could be synthesized en masse, to be infused into the blood of vulnerable patients. Each of these leads requires a different approach. However, they all face the common challenge of not only proving safety and effectiveness against COVID-19, but giving us an understandable mechanism of action.

Identifying the “how and why” is not merely of great academic interest, but a pressing medical concern. Coronaviruses are notoriously unstable and prone to mutation; indeed there are those who speculate that COVID-19 may be more than one strain. Finding a treatment or vaccine without understanding our enemy exposes us to the risk of other strains emerging, undoing our hard work and invalidating our collective sacrifices. Cracking the COVID-19 code is a task of great complexity, requiring a combination of human insight and brilliance, bold experimentation, luck, and enormous computational resources. And like the Allied efforts against the German Enigma, today’s computer scientists have given us a groundwork to build on.

Unraveling the secrets of COVID-19 requires modeling how viral proteins fold and interact with other molecules and proteins. Although protein folding follows fairly simple rules, the computational power required to actually simulate them is enormous. For this, scientists have developed the Folding@Home distributed computing project. Rather than constructing a new supercomputer which would exceed all past attempts, this project aims to harness the power of unused personal computers in a decentralized network. Since the beginning of March, Folding@Home has focused its priorities on COVID-19 related modeling, and has been inundated with people donating computing power, to the point that they had to get help from other web services companies because simulations were being completed faster than their web servers could assign them.

At the beginning of March, the computing power of the entire project clocked in at around 700 petaflops (FLOPS, short for Floating Point Operations Per Second, being the standard unit of computing power). During the Apollo moonshot, a NASA supercomputer would average somewhere around 100,000 FLOPS. Two weeks ago, the project announced a new record in the history of computing: more than an exaflop, or 10^18 FLOPS, of constant distributed computing power. With the help of Oracle and Microsoft, by the end of March, Folding@Home exceeded 1.5 exaflops. These historic and unprecedented feats are a testament to the ability of humanity to respond to a challenge. Every day this capacity is maintained or exceeded brings us closer to breaking the viral code and ending the scourge. 
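For a sense of scale, here is a quick back-of-the-envelope comparison of those figures, using the same rough numbers quoted above:

```python
# Rough orders of magnitude from the text, in FLOPS
apollo_computer = 1e5         # ~100,000 FLOPS, an Apollo-era NASA machine
folding_march_start = 700e15  # ~700 petaflops at the start of March
folding_march_end = 1.5e18    # ~1.5 exaflops by the end of March

# How many Apollo-era computers would it take to match the network?
print(f"{folding_march_end / apollo_computer:.1e}")  # 1.5e+13

# How much did the network grow over roughly one month?
print(f"{folding_march_end / folding_march_start:.1f}x")  # 2.1x
```

In other words, the donated idle time of home computers adds up to the equivalent of some fifteen trillion moonshot-era supercomputers.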

Humanity’s great strength has always lain in our ability to learn, and to take collective action based on reason. Beating back COVID-19 will entail a global effort, in which every person has an important role to play. Not all of us can work in a hospital or a ventilator factory, but there’s still a way each of us can help. If you can afford to donate money, the World Health Organization’s Solidarity Fund is coordinating humanity’s response to the pandemic. Folding@Home is using the power of your personal computers to crack the COVID-19 code. And if nothing else, every person who stays healthy by staying home, washing hands, wearing homemade masks and keeping social distance is one less person to treat in the ICU. 

What is a Coronavirus, anyway?

I had just about come to the conclusion not to write anything on the current crisis. This was because I am not an expert. There are plenty of experts, and you should listen to them over me, and I didn’t want to detract from what they’re saying by adding my own take and spin. There was also a practical problem: in five attempts so far, every time I’ve sat down to write something out, double checking my sources and cross referencing my information, the situation has changed so as to render what I was about to say outdated and irrelevant, which is incredibly frustrating. The last thing I want to do is give advice contrary to what’s being said. 

But it looks like we might be heading towards a situation where the advice is stabilizing, if only because when the advice is “shut down everything”, you can’t really escalate that. And the data suggests that we are moving towards a long war here. It’s hard to say, but I’ve seen reports with numbers ranging from a few weeks, to eighteen months. And whether we manage to skate by lightly after a few weeks at home, or whether the first two years of the 2020s go down in history akin to the time of the Bubonic Plague, we need to start understanding the problems with which we find ourselves dealing in a long term context. Before I delve into what’s going on, and what seems likely to happen, I’m going to spend a post reviewing terminology.

I wasn’t going to die on this hill, but since we’ve got time, I’ll mention it anyway. Despite begrudgingly ceding to the convention myself, I don’t like calling this “Coronavirus”. That’s not accurate; Coronavirus is not the name of a virus. The term refers to a family of viruses, so named for protein spikes which resemble the solar corona, the outermost layer of the sun’s atmosphere. You know, the spiky, wavy bit that you would add to the picture after coloring in the circle. There are a lot of viruses that fit this description, to the point that the emoji for virus (i.e. 🦠) could be said to be a generic Coronavirus. In addition to a number of severe respiratory illnesses, such as SARS, and now COVID-19, Coronaviruses also cause many cases of the common cold. 

They’re so common, we usually don’t bother naming them unless there’s something unusual about them. The World Health Organization was a bit slow to come out with its name for this one, and in the interim the media ran with the word they had. Despite my instinct, I’m not going to tell you you need to get up and change everything you’re saying and remove posts where you said Coronavirus, just be aware of the distinction. We’ve gotten to a point in social discourse where the distinction is academic, the same way everyone understands that “rodent problem” refers to rats or mice rather than beavers. But do be aware that if you’re reading scientific journals, if a paper doesn’t specify, it’s as likely that they’re referring to the common cold as COVID-19. 

The term COVID-19 was designated by the World Health Organization, and is short for COronaVIrus Disease 2019. WHO guidelines are explicitly crafted to produce names which are short, pronounceable, and sufficiently generic so as not to “incite undue fear”. These guidelines specifically prohibit using occupational or geographic names, for both ethical and practical reasons. Ethically, naming a disease after an area or people-group, even when it doesn’t imply blame, can still create stigma. Suppose a highly infectious epidemic were called “Teacher’s Disease”, for instance. Suppose, for the sake of argument, that teachers are as likely to be carriers as everyone else, but the first confirmed case was a teacher, so everyone just rolls with that. 

Even if everyone who uses and hears this term holds teachers completely blameless (not that they will; human psychology being what it is, but let’s suppose), people are still going to change their behaviors around teachers. If you heard on the news that Teacher’s Disease was spreading and killing people around the world, would you feel comfortable sending your kids to school? What about inviting your teacher friend over while your grandmother is staying with you? Would you feel completely comfortable sitting with them on the bus? Maybe you would, because you’re an uber-mind capable of avoiding all biases, but do you think everyone else will feel the same way? Will teachers be treated fairly in this timeline, by other people and society? And perhaps more crucially, do you think teachers are likely to single themselves out for treatment knowing that they’ll have this label applied to them?

There are other practical reasons why using geographic or occupational names is counterproductive. Even if you have no concern for stigma against people, these kinds of biases impact behavior in other ways. For instance, if something is called Teacher’s Disease, I might imagine that I, as a student, am immune. I might ignore my risk factors, and go out and catch the virus, or worse still, I might ignore symptoms and spread the virus to other people. I mean, really, you expect me, a healthy young person, to cancel my spring break beach bash because of something from somewhere else, which the news says only kills old timers? 

You don’t have to take my word for it either, or even the word of the World Health Organization. You can see this play out through history. Take the Flu Pandemic of 1918. Today, we know that the virus responsible was H1N1, and based on after-the-fact epidemiology, it appears to have first broken out in large numbers in North America. Except it wasn’t reported, due to wartime censorship. Instead, it wouldn’t hit the press until it spread to Europe, to neutral Spain, where it was called Spanish Flu. And when the press called it that, the takeaway for most major governments was that this was a Spanish problem, and that they had bigger issues than some foreign virus. The resulting pandemic was among the deadliest in human history. 

I am not going to tell you what words you can or can’t use. Ours is a free society, and I have no special expertise that makes me uniquely qualified to lecture others. But I can say, from experience, that words have power. The language you use has an impact, and not always the impact you might intend. At times like this we all need to be mindful of the impact each of us has on each other. 

Do your part to help combat stigma and misinformation, which hurt our efforts to fight disease. For more information on COVID-19, visit the Centers for Disease Control and Prevention webpage. To view the specific guidelines on disease naming, go to the World Health Organization.

This Was A Triumph

Today I am happy to announce a new milestone. As of today I have received from my manufacturer the authorization code to initiate semi-closed loop mode on my life support devices. This means that for the first time, my life support devices are capable of keeping me alive for short periods without immediate direct human intervention. For the first time in more than a decade, it is now safe for me to be distracted by such luxuries as homework, and sleep. At least, for short periods, assuming everything works within normal parameters. 

Okay, yes, this is a very qualified statement. Compared to the kind of developments which are daily promised by fundraising groups and starry eyed researchers, this is severely underwhelming. Even compared solely to technologies which have already proven themselves in other fields and small scale testing, the product which is now being rolled out is rather pathetic. There are many reasons for this, from the risk-aversion of industry movers, to the glacial pace of regulatory shakers, to a general shortage of imagination among decision makers. It is easy to find reasons to be angry and feel betrayed that the US healthcare system has once again failed to live up to its promise of delivering breakneck innovation and improvement.

Even though this is disappointing compared to the technological relief we were marketed, I am still excited about this development. First of all, because it is a step in the right direction, even if a small one, and any improvement is worth celebrating. Secondly, and chiefly, because I believe that even if this particular new product is only an incremental improvement over the status quo, and pales in comparison to what had been promised for the past several decades, the particular changes represent the beginning of a larger shift. After all, this is the first iteration of this kind of life support device which uses machine learning not merely to enable a fail-safe against medication overdoses, but to actually make proactive treatment decisions without human oversight.

True, the parameters for this decision making are remarkably conservative, some argue to the point of uselessness. The software will not deploy under anything short of perfect circumstances; its treatment targets fall short of most clinical targets, let alone best practices; the modeling is not self-correcting; and the software cannot interpret human intervention, and is therefore mutually exclusive with aggressive treatment by a human.

Crucially, however, it is making decisions instead of a human. We are over the hump on this development. Critiques of its decision-making skill can be addressed down the line, and I expect that once the data is in, it will be a far easier approval and rollout process than the initial version. But unless some new hurdle appears, as of now we are on the path towards full automation.

Some Like It Temperate

I want to share something that took me a while to understand, but once I did, it changed my understanding of the world around me. I’m not a scientist, so I’m probably not going to get this exactly perfect, and I’ll defer to professional judgment, but maybe I can help illustrate the underlying concept.

So temperature is not the same thing as hot and cold. In fact, temperature and heat aren’t really bound together inherently. On earth, they’re usually correlated, and as humans, our sensory organs perceive them through the same mechanism in relative terms, which is why we usually think of them together. This sensory shortcut works for most of the human experience, but it can become confusing and counterintuitive when we try to look at systems of physics outside the scope of an everyday life. 

So what is temperature? Well, in the purest sense, temperature is a measure of the average kinetic energy among a group of particles. How fast are they going, how often are they bumping into each other, and how much energy are they giving off when they do? This is how temperature and phase of matter correlate. So liquid water has a higher temperature than ice because its molecules are moving around more, with more energy. Because the molecules are moving around more, they slide past one another freely, which is why it’s easier to cut through water than ice. Likewise, it’s easier still to cut through steam than water. Temperature is a measure of molecular energy, not hotness. Got it? Good, because it’s about to get complicated.

So something with more energy has a higher temperature. This works for everything we’re used to thinking about as being hot, but it applies in a wider context. Take radioactive material. Or don’t, because it’s dangerous. Radioactivity is dangerous because it has a lot of energy, and is throwing it off in random directions. Something that’s radioactive won’t necessarily feel hot, because the way it gives off radiation isn’t one our sensory organs are calibrated to detect. You can pick up an object giving off enough radiated energy to shred through the material in your cells and kill you, and have it feel like room temperature. That’s what happened to the firemen at Chernobyl. 

In a technical sense, radioactive materials have a high temperature, since they’re giving off lots of energy. That’s what makes them dangerous. At the same time, though, you could get right up next to highly enriched nuclear materials (and under no circumstances should you ever try this), without feeling warm. You will feel something eventually, as your cells react to being ripped apart by a hail of neutrons and other subatomic particles. You might feel heat as your cells become irradiated and give off their own energy, but not from the nuclear materials themselves. Also, if this happens, it’s too late to get help. So temperature isn’t necessarily what we think it is.

Space is another good example. We call space “cold”, because water freezes when exposed to it. And space will feel cold, as moisture boils away and your body’s carefully hoarded energy radiates off into the void, though since a vacuum is actually a good insulator, this happens more slowly than the movies suggest. But actually, space, at least within the solar system, has a very high temperature wherever it encounters particles, for the same reason as above. The sun is a massive ongoing thermonuclear explosion that makes even our largest atom bombs jealous. There is a great deal of energy flying around the empty space of the solar system at any given moment, it just doesn’t have many particles to give its energy to. This is why the top layer of the atmosphere, the thermosphere, has a very high temperature despite being totally inhospitable, and why astronauts are at increased cancer risk. 

This confusion is why most scientists who are dealing with fields like chemistry, physics, or astronomy use the Kelvin scale. One degree on the Kelvin scale, or one kelvin, is the same size as one degree Celsius. However, unlike Celsius, where zero is the freezing point of water, zero kelvins is known as Absolute Zero, a so-far theoretical temperature at which particles have no kinetic energy left to give up. This is harder to achieve than it sounds, for a variety of complicated quantum reasons, but consider that body temperature is 310 K, on a scale where one hundred is the entire difference between freezing and boiling. Some of our attempts so far to reach absolute zero have involved slowing down individual particles by suspending them in lasers, which has gotten us close, but those last few fractions of a degree are especially tricky. 
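Since the two scales share the same increment and differ only by an offset, the conversion is a one-liner; here is a quick sketch using the reference points mentioned above:

```python
def celsius_to_kelvin(c):
    # Same increment size as Celsius, offset so that 0 K is absolute zero
    return c + 273.15

# The reference points from the text
print(f"{celsius_to_kelvin(0):.2f} K")    # 273.15 K -> water freezes
print(f"{celsius_to_kelvin(100):.2f} K")  # 373.15 K -> water boils
print(f"{celsius_to_kelvin(37):.2f} K")   # 310.15 K -> body temperature
```

Note the one-hundred-kelvin spread between freezing and boiling, with body temperature sitting up at 310: an unwieldy three digits for everyday use, but anchored to an absolute floor.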

The Kelvin scale hasn’t really caught on in the same way as Celsius, perhaps because it’s an unwieldy three digits for anything in the normal human range. And given that the US is still dragging its feet about Celsius, which goes back to the French Revolution, not a lot of people are willing to die on that hill. But the Kelvin scale does underline an important distinction between temperature as a universal property of physics and the relative, subjective, inconsistent way that we’re used to feeling it in our bodies.

Which is perhaps interesting, but I said this was relevant to looking at the world, so how’s that true? Sure, it might be more scientifically rigorous, but that’s not always essential. If you’re a redneck farm boy about to jump into the crick, Newtonian gravity is enough without getting into quantum theory and spacetime distortion, right?
Well, we’re having a debate on this planet right now about something referred to as “climate change”, a term which has been promoted in place of the earlier term “global warming”. Advocates of doing nothing have pointed out that, despite all the graphs, it doesn’t feel noticeably warmer. Certainly, they point out, the weather hasn’t been warmer, at least not consistently, on a human timescale. How can we be worried about increased temperature if it’s not warmer?

And, for as much as I suspect the people presenting these arguments to the public have ulterior motives, whether they are economic or political, it doesn’t feel especially warmer, and it’s hard to dispute that. Scientists, for their part, have pointed out that they’re examining the average temperature over a prolonged period, producing graphs which show the trend. They have gone to great lengths to explain the biggest culprit, the greenhouse effect, which fortunately does click nicely with our intuitive human understanding. Greenhouses make things warmer, neat. But not everyone follows before and after that. 

I think part of what’s missing is that scientists are assuming that everyone is working from the same physics-textbook understanding of temperature and energy. This is a recurring problem for academics and researchers, especially when the 24-hour news cycle (and the academic publicists that feed it) jumps the gun and snatches results from scientific publications without translating the jargon for the layman. If temperature is just how hot it feels, and global warming means it’s going to feel a couple degrees hotter outside, it’s hard to see how that gets to doomsday predictions, and requires me to give up plastic bags and straws. 

But as we’ve seen, temperature can be a lot more than just feeling hot and cold. You won’t feel hot if you’re exposed to radiation, and firing a laser at something seems like a bad way to freeze it. We are dealing on a scale that requires a more consistent rule than our normal human shortcuts. Despite being only a couple of degrees of temperature, the amount of energy we’re talking about here is massive. If we say the atmosphere is roughly 5×10^18 kilograms, and the amount of energy it takes to raise a kilogram of air one kelvin is about 1 kJ, then we’re looking at 5,000,000,000,000,000,000 kilojoules. 

That’s a big number; what does it mean? Well, if my math is right, that’s about 1.2 million megatons of TNT. A megaton is a unit used to measure the explosive yield of strategic nuclear weapons. The nuclear bomb dropped on Nagasaki, the bigger of the two, was somewhere in the ballpark of 0.02 megatons. The largest bomb ever detonated, the Tsar Bomba, was 50 megatons. The total energy expenditure of all nuclear testing worldwide is estimated at about 510 megatons, or roughly 0.04% of the energy we’re introducing with each degree of climate change. 

Humanity’s entire current nuclear arsenal is estimated somewhere in the ballpark of 14,000 bombs. This is very much a ballpark figure, since some countries are almost certainly bluffing about what weapons they do and don’t have, and how many. The majority of these, presumably, are cheaper, lower-yield tactical weapons. Some, on the other hand, will be over-the-top monstrosities like the Tsar Bomba. Let’s generously assume that these highs and lows average out to about one megaton apiece. Suppose we detonated all of those at once. I’m not saying we should do this; in fact, I’m going to go on record as saying we shouldn’t. But let’s suppose we do, releasing 14,000 megatons of raw, unadulterated atom-splitting power in a grand, civilization-ending bonanza. In that instant, we would have unleashed approximately one percent of the energy we are adding with each degree of climate change. 
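As a sanity check, here is a minimal sketch of the arithmetic, using the same round numbers as above (a 5×10^18 kg atmosphere, roughly 1 kJ to warm a kilogram of air by one kelvin, and the standard conversion of 4.184×10^15 joules per megaton of TNT):

```python
# Round figures from the text
ATMOSPHERE_KG = 5e18       # rough mass of Earth's atmosphere, kg
KJ_PER_KG_KELVIN = 1.0     # approximate specific heat of air, kJ/(kg*K)
J_PER_MEGATON = 4.184e15   # standard TNT-equivalent: joules in one megaton

# Energy to warm the whole atmosphere by one kelvin, converted to joules
joules_per_kelvin = ATMOSPHERE_KG * KJ_PER_KG_KELVIN * 1e3

megatons_per_kelvin = joules_per_kelvin / J_PER_MEGATON
print(f"{megatons_per_kelvin:,.0f} megatons")  # ~1.2 million megatons per degree

# All nuclear testing ever (~510 Mt) and the whole arsenal (~14,000 Mt)
print(f"{510 / megatons_per_kelvin:.2%}")      # 0.04%
print(f"{14_000 / megatons_per_kelvin:.2%}")   # 1.17%
```

These are back-of-the-envelope numbers, of course, and they only count the atmosphere; in reality the oceans absorb the lion’s share of the extra energy, which makes the total even larger.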

This additional energy means more power for every hurricane, wildfire, flood, tornado, drought, blizzard, and weather system everywhere on earth. The additional energy is being absorbed by glaciers, which then have too much energy to remain frozen, and so are melting, raising sea levels. The chain of causation is complicated, and involves understanding of phenomena which are highly specialized and counterintuitive to our experience from most of human existence. Yet when we examine all of the data, it is the pattern that seems to emerge. Whether or not we fully understand the patterns at work, this is the precarious situation in which our species finds itself. 

Truth Machine

I find polygraphs fascinating. The idea of using a machine to exploit bugs in human behavior to discern objective truth from falsehood is just an irresistible notion to a story-minded person like me. To have a machine that can cut through the illusions and deceptions of human stories is just so metaphorically resonant. Of course, I know that polygraphs aren’t really lie detectors, not in the way they’re imagined. At best they monitor a person for signs of physiological stress as a reaction to making up lies on the spot. This is easily lost in background noise, and easily sidestepped by rehearsing a convincing lie ahead of time. 

A large part of the machine’s job is to make a subject afraid to lie in the first place, which makes lies easier to spot. It doesn’t work if the subject believes the lie, or doesn’t experience stress while telling it, nor is it effective on people who fall outside of some basic stereotypes about liars. Eye surgery, heart arrhythmia, brain damage, and ambidexterity can all throw a polygraph to the point of uselessness. At worst, polygraphs provide a prop for interrogators to confirm their own biases and coerce a subject into believing they’re trapped, whether or not they’re actually guilty, or else to convince jurors of an unproven circumstantial case. 

Still, they’re fascinating. The kabuki theater act that interrogators put on to try and maneuver the subject into the correct state of mind to find a chink in the psychological armor, the different tactics, the mix of science and showmanship is exciting to explore. I enjoy reading through things like polygraph manuals, and the list of questions used in interviews of federal employees for security clearance. 

What’s interesting is that most of the questions are just bad. Questions like “Prior to [date], did you ever do anything dishonest?” are just bad questions. After all, who decides dishonesty? Is a dishonest act only an action committed in service of a direct, intentional lie, or is it broader? Does omission count as an act in this context? Is dishonesty assessed at the time of the act, or in retrospect? Would a knowing deception made in the interest of an unambiguously moral end (for example, misdirecting a friend about a Christmas present) constitute a dishonest act? 

These questions are listed in the manual as “No-answer Comparison Questions”, which if I understand the protocol correctly, are supposed to be set up such that a subject will always answer “No”, and most of the time, will be lying. The idea here is to establish a baseline, to get an idea of what the subject looks like when lying. The manual suggests that these questions will always be answered with “no” because, earlier in the interrogation, the interrogator will have made clear that it is crucial for subjects to provide an impression of being truthful people. The government, the interrogator is instructed to say, doesn’t want to work with people who lie or cheat, and so it is very important that people going through this process appear honest and strait-laced. 

Of course, this is hogwash. The government does want people who lie, and it wants people who are talented at it. A general needs to be talented at deception. An intelligence operative needs to keep secrets. Any public figure dealing with sensitive information needs to be able to spin and bend the truth when national security demands it. Even the most morally absolutist, pro-transparency fiend understands that certain government functions require discretion with the truth, and these are exactly the kind of jobs that would involve polygraph tests beforehand. 

The government’s polygraph interrogation protocols rely on subjects swallowing this lie, that they need to keep a consistent and presentable story at the expense of telling the truth. They also rely on the subject recognizing that they are lying and having a reaction, since a polygraph cannot in itself divine material truths, but works only by studying reactions. For it to really work, the subject must also be nervous about lying. This too is set up ahead of time; interrogators are instructed to explain that lying is a conscious and deliberate act, which inspires involuntary physiological fear in the subject. This is arguably half true, but mostly it sets up a self-fulfilling prophecy in the mind of the subject. 

It’s pretty clear that the modern polygraph is not a lie detector. But then again, how could it be? Humans can barely even agree on a consistent definition of a lie within the same language and culture. Most often we tie in our definition of lying with our notions of morality. If you used deception and misrepresentation to do a bad thing, then you lied. If you said something that wasn’t true, but meant nothing by it, and nothing bad came out of it, well then you were probably just mistaken. I don’t want to make this post political, but this trend is obvious if you look at politics: The other side lies, because their ranks are filled with lying liars. By contrast, our side occasionally misspeaks, or is misinterpreted.

This isn’t to say that there’s no such thing as truth or lies, just that we can’t seem to pin down a categorical definition, which you do need if you’re going to program a machine to identify them. We could look for physiological reactions involved in what we collectively call lying, which is what polygraphs purport to do, but this just kicks the problem back a step. After all, what if I genuinely and wholeheartedly don’t consider my tactful omission about “clandestine, secret, unauthorized contact with a non-U.S. citizen or someone (U.S. citizen or non-U.S. citizen) who represents a foreign government, power, group or organization, which could result in a potential or real adverse impact on U.S. national security, or else could result in the unauthorized aid to a foreign government, power, group or organization” to be a lie? If the machine is testing my reactions, it would find nothing, provided I didn’t believe I had anything to lie about. 

Competent question design and interrogation technique are supposed to obviate this issue. A competent interrogator would be sure to explain the definition of contact, and foreign power, and so on, in such a way that would cause me to doubt any misconceptions, and hopefully, if I’m lying, trigger a stress reaction. The interrogator might insinuate that I’m withholding information in order to get me to open up, or try and frame the discussion in such a way that I would think opening up was my only option. But at that point, we’re not really talking about a lie-detecting machine so much as a machine that gives an interrogator data on when to press psychological attacks. The main function of the machine is to give the interrogator certainty and undermine my own confidence, so that the interrogator can pull off bluffing me into cracking. 

So are polygraphs useful? Obviously, as a psychological tool in an inquisitional interrogation, they provide a powerful weapon. But are they still more useful than, say, a metal box with a colander attached? Probably, under some circumstances, in the hands of someone familiar with the underlying principles and moving parts of psychology, physiology, and the machine itself. After all, I don’t think there would be such a market if they were complete bunk. But then again, do I trust that they’re currently being used that way by the groups that employ them? Probably not.

Works Consulted

Burney, Nathan. “Convict Yourself.” The Illustrated Guide to Law, lawcomic.net/guide/?p=2494.

United States, Department of Defense, Polygraph Institute. “Law Enforcement Pre-Employment Test.” antipolygraph.org/documents/dodpi-lepet.pdf.

A Lesson in Credulity

Last week I made a claim that, on review, might be untrue. This was bound to happen sooner or later. I do research these posts, but except for the posts where I actually include a bibliography, I’m not fact checking every statement I make. 

One of the dangers of being smart, of being told that you’re smart, and of repeatedly getting good grades or otherwise being vindicated on matters of intelligence, is that it can lead to a sense of complacency. I’m usually right, I think to myself, and when I think I know a fact, it’s often true, so unless I have some reason to suspect I’m wrong, I don’t generally check. For example, take the statement: there are more people who voted for Republicans in the last election living to the south of me than to the north. 

I am almost certain this is true, even without checking. I would probably bet money on it. I live north of New York City, so there aren’t even that many people north of me, let alone republican voters. It’s objectively possible that I’m wrong. I might be missing some piece of information, like a large population of absentee Republicans in Canada, or the state of Alaska. Or I might simply be mistaken. Maybe the map I’m picturing in my head misrepresents how far north I am compared to other northern border states like North Dakota, Michigan, and Wisconsin. But I’m pretty sure I’m still right here, and until I started second guessing myself for the sake of argument, I would have confidently asserted that statement as fact, and even staked a sizable sum on it. 

Last week I made the following claim: Plenty of studies in the medical field have exalted medical identification as a simple, cost-effective means of promoting patient safety. 

I figured that this had to be true. After all, doctors recommend wearing medical identification almost universally. It’s one of those things, like brushing your teeth, or eating your vegetables, that’s such common advice that we assume it to be proven truth. After all, if there wasn’t some compelling study to show it to be worthwhile, why would doctors continue to breathe down the necks of patients? Why would patients themselves put up with it? Why would insurance companies, which are some of the most ruthlessly skeptical entities in existence, especially when it comes to paying for preventative measures, shell out for medical identification unless it was already demonstrated to be a good deal in the long run?

Turns out I may have overestimated science and economics here. Because in writing my paper, I searched for that definitive overarching study or meta-analysis that conclusively proved that medical identification had a measurable positive impact. I searched broadly on Google, and also through the EBSCO search engine, which my trusty research librarian told me was the best agglomeration of scientific and academic literature tuition can buy. I went through papers from NIH immunohematology researchers to the Army Medical Corps; from clinics in the Canadian high arctic to the developing regions of Southeast Asia. I read through translations of papers originally published in French and Chinese, in the most prestigious journals of their home countries. And I found no conclusive answers.

There was plenty of circumstantial evidence. Every paper I found supported the use of medical identification. Most papers I found were actually about other issues, and merely alluded to medical identification by describing how they used it in their own protocols. In most clinics, it’s now an automatic part of the checklist to refer newly diagnosed patients to wear medical identification; almost always through the MedicAlert Foundation.

The two papers I found that addressed the issue head on were a Canadian study about children wearing MedicAlert bracelets being bullied, and a paper in an emergency services journal about differing standards in medical identification. Both of these studies, though, seemed to skirt around the quantifiable efficacy of medical identification and were more interested in the tangential effects.

There was a third paper that dealt with the issue more directly, but there was something fishy about it. The title was “MedicAlert: Speaking for Patients When They Can’t”, and the language and graphics were suspiciously similar to the advertising used by the MedicAlert Foundation website. By the time I had gotten to this point, I was already running late with my paper. EBSCO listed the paper as “peer reviewed”, which my trusty research librarian said meant it was credible (or at least, credible enough), and it basically said exactly the things that I needed a source for, so I included it in my bibliography. But looking back, I’m worried that I’ve fallen into the Citogenesis trap, just this time with a private entity rather than Wikipedia.
The conspiracy theorist in me wants to jump to the conclusion that I’ve uncovered a massive ruse; that the MedicAlert Foundation has created and perpetuated a myth about the efficacy of their services, and the sheeple of the medical-industrial complex are unwitting collaborators. Something something database with our medical records something something hail hydra. This pretty blatantly fails Occam’s Razor, so I’m inclined to write it off. The most likely scenario here is that there is a study lying around that I simply missed in my search, and it’s so old and foundational that later research has just accepted it as common knowledge. Or maybe it was buried deep in the bibliographies of other papers I read, and I just missed it. 

Still, the fact that I didn’t find this study when explicitly looking for it raises questions. Which leads me to the next most likely scenario: I have found a rare spot of massive oversight in the medical scientific community. After all, the idea that wearing medical identification is helpful in an emergency situation is common sense, bordering on self-evident. And there’s no shortage of anecdotes from paramedics and ER doctors that medical identification can help save lives. Even in the literature, while I can’t find an overview, there are several individual case studies. It’s not difficult to imagine that doctors have simply taken medical identification as a logical given, and gone ahead and implemented it into their protocols.

In that case, it would make sense that MedicAlert would jump on the bandwagon. If anything, having a single standard makes the process more rigorous. I’m a little skeptical that insurance companies just went along with it; it’s not like common sense has ever stopped them from penny-pinching before. But who knows, maybe this is the one time they took doctors at their word. Maybe, through some common consensus, this has just become a massive blind spot for research. After all, I only noticed it when I was looking into something tangential to it. 
So where does this leave us? If the data is really out there somewhere, then the only problem is that I need a better search engine. If this is part of a blind spot, if the research has never been done and everyone has just accepted it as common sense, then it needs to be put in the queue for an overarching study. Not that I expect such a study would fail to find a correlation between wearing medical identification and better health outcomes. After all, it’s common sense. But we can do better than just acting on common sense and gut instincts. We have to do better if we want to advance as a species.

The other reason why we need to have hard, verifiable numbers with regards to efficacy, besides the possibility that we might discover our assumptions were wrong, is to have a way to justify the trade-off. My whole paper has been about trying to demonstrate the trade-off a person makes when deciding to wear medical identification, in terms of stigma, self-perception, and comfort. We often brush this off as being immaterial. And maybe it is. Maybe, next to an overwhelming consensus of evidence showing a large and measurable positive impact on health outcomes, some minor discomfort wearing a bracelet for life is easily outweighed. 

Then again, what if the positive impact is fairly minor? If the statistical difference amounts only to, let’s say, a few extra hours of life expectancy, is that worth a lifetime of having everyone know that you’re disabled wherever you go? People I know would disagree on this matter. But until we can state the medical impact definitively on the one hand, we can’t weigh it against the social impact on the other. We can’t have a real debate based on folk wisdom versus anecdotes. 

On Hippocratic Oaths

I’ve been thinking about the Hippocratic Oath this week. This came up while wandering around campus during downtime, when I encountered a mural showing a group of nurses posing heroically, amid a collage of vaguely related items, between old-timey nurse recruitment posters. In the background, the words of the Hippocratic Oath were printed behind the larger-than-life figures. I imagine they took cues from military posters that occasionally do similar things with oaths of enlistment. 

I took special note of this because, strictly speaking, the Hippocratic Oath isn’t meant for nurses. It could arguably apply to paramedics or EMTs, since, etymologically at least, a paramedic is a watered-down doctor: the first ambulances were an extension of military hospitals, and hence under the aegis of surgeons and doctors rather than nurses. But that kind of pedantic argument not only ignores actual modern-day training requirements, since in most jurisdictions the requirements for nurses are more stringent than those for EMTs and at least as stringent as those for paramedics, but shortchanges nurses, a group to whom I owe an enormous debt of gratitude and for whom I hold an immense respect. 

Besides which, whether or not the Hippocratic Oath – or rather its modern equivalents, since the oath recorded by Hippocrates himself is recognized as outdated and has been almost universally superseded – is necessarily binding on nurses, it is hard to argue that the basic principles aren’t applicable. Whether or not modern nurses have at their disposal the same curative tools as their doctorate-holding counterparts, they still play an enormous role in patient outcomes. In fact, by some scientific estimates, the quality of nursing staff may actually matter more than the actions undertaken by doctors. 

Moreover, all of the ethical considerations still apply. Perhaps most obviously, respect for patients and patient confidentiality. After all, how politely the doctor treats you in their ten minutes of rounds isn’t going to outweigh how your direct caretakers treat you for the rest of the day. And as far as confidentiality, whom are you more concerned about gossiping: the nerd who reads your charts and writes out your prescription, or the nurse who’s in your room, undressing you to inject the drugs into the subcutaneous tissue where the sun doesn’t shine? 

So I don’t actually mind if nurses are taking the Hippocratic Oath, whether or not it historically applies. But that’s not why it’s been rattling around my mind the last week. 

See, my final paper in sociology is approaching. Actually, it’s been approaching; at this point the paper is waiting impatiently at the door to be let in. My present thinking is that I will follow the suggestion laid down in the syllabus and create a survey for my paper. My current topic regards medical identification. Plenty of studies in the medical field have extolled medical identification as a simple, cost-effective means of promoting patient safety. But compelling people to wear something that identifies them as being part of a historically oppressed minority group has serious implications that I think are being overlooked when we lump people who refuse to wear medical identification into the same group as people who refuse to get vaccinated, or to take prescribed medication.

What I want to find out in my survey is why people who don’t wear medical identification choose not to. But to really prove (or disprove, as the case may be, since a proper scientific approach demands that possibility) my point, I need to get at the matters at the heart of this issue: medical conditions and minority status. These are sensitive topics, and consequently gathering data on them means collecting potentially sensitive information. 

This leaves me in an interesting position. The fact that I am doing this for a class at an accredited academic institution gives me credibility, if more so with the lay public than among those who know enough about modern science to realize that I have no real earned credentials. But the point remains: if I posted online that I was conducting a survey for my institution, which falls within a stretched interpretation of the truth, I could probably get many people to disclose otherwise confidential information to me. 

I have never taken an oath, and I have essentially no oversight in the execution of this survey, other than the bare minimum privacy safeguards required by the FCC in my use of the internet, which I can satisfy through a simple checkbox in the United States. If I were so inclined, I could take this information entrusted to me and either sell it or use it for personal gain. I couldn’t deliberately target individual subjects, more because that would be criminal harassment than because of any breach of trust. But I might be able to post it online and let the internet wreak what havoc it will. This would be grossly unethical and bordering on illegal, but I could probably get away with it. 

I would never do that, of course. Besides being wrong on so many different counts, including betraying the trust of my friends, my community, and my university, it would undermine trust in the academic and scientific communities, at a time where they have come under political attack by those who have a vested interest in discrediting truth. And as a person waiting on a breakthrough cure that will allow me to once again be a fully functional human being, I have a vested interest in supporting these institutions. But I could do it, without breaking any laws, or oaths.

Would an oath stop me? If, at the beginning of my sociology class, I had stood alongside my fellow students, with my hand on the Bible I received in scripture class, in which I have sought comfort and wisdom in dark hours, and swore an oath like the Hippocratic one or its modern equivalents to adhere to ethical best practices and keep to my responsibilities as a student and scientist, albeit of sociology rather than one of the more sciency sciences, would that stop me if I had already decided to sell out my friends?

I actually can’t say with confidence. I’m inclined to say it would, but this is coming from the version of me that wouldn’t do that anyway. The version of me that would cross that line is probably closer to my early-teenage self, whom my modern self has come to regard with a mixture of shame and contempt, who essentially believed that promises were made to be broken. I can’t say for sure what this version of myself would have done. He shared a lot of my respect for science and protocol, and there’s a chance he might’ve been really into the whole oath vibe. So it could’ve worked. On the other hand, if he thought he would’ve gained more than he had to lose, I can imagine how he would’ve justified it to himself. 

Of course, the question of the Hippocratic Oath isn’t really about the individual who takes it, so much as it is the society around it. It’s not even so much about how the society enforces oaths and punishes oath-breakers. With the exception of perjury, we’ve kind of moved away from Greco-Roman style sacred blood oaths. Adultery and divorce, for instance, are both oath-breaking, but apart from the occasional tut-tut, as a society we’ve more or less just agreed to let it slide. Perhaps as a consequence of longer and more diverse lives, we don’t really care about oaths.

Perjury is another interesting case, though. Because contrary to an occasionally held belief, the crime of perjury isn’t actually affected by whether the lie in question is about some other crime. If you’re on the stand for another charge of which you’re innocent, and your alibi is being at Steak Shack, but you say you were at Veggie Villa, that’s exactly as much perjury as if you had been at the scene of the crime and lied about that. This is because witness testimony is treated legally as fact. The crime of perjury isn’t about trying to get out of being punished. It’s about the integrity of the system. That’s why there’s an oath, and why that oath is taken seriously.

The revival of the Hippocratic Oath as an essential part of the culture of medicine came after World War II, at least partially in response to the conclusion of the Nuremberg Trials and revelations about the Holocaust. Particularly horrifying was how Nazi doctors had been involved in the process, both in the acute terms of unethical human experimentation, and in providing medical expertise to ensure that the apparatus of extermination was as efficient as possible. The Red Cross was particularly alarmed: here were people who had dedicated their lives to an understanding of the human condition, and had either sacrificed all sense of morality in the interest of satiating base curiosity, or had actively taken the tools of human progress to inflict destruction in service of an evil end. 

Doctors were, and are, protected under the Geneva Convention. Despite what Hollywood and video games depict, shooting a medic wearing a medical symbol, even one coming off a landing craft towards your country, is a war crime. As a society, we give doctors enormous power, with the expectation that they will use that power and their knowledge and skills to help us. This isn’t just some set of privileges we give doctors because they’re smart, though; that trust is essential to their job. Doctors can’t perform surgery if they aren’t trusted with knives, and we can’t eradicate polio if no one is willing to be inoculated.

The first of the modern wave of revisions of the Hippocratic Oath to make it relevant and appropriate for today started with the Red Cross after World War II. The goal was twofold. First: establish trust in medical professionals by setting down a simple, overriding set of basic ethical principles that can be distilled down to a simple oath, so that it can be understood by everyone. Second: make this oath not only universal within the field, but culturally ubiquitous, so as to make it effectively self-enforcing. 

It’s hard to say whether this gambit has worked. I’m not sure how you’d design a study to test it. But my gut feeling is that most people trust their own doctors, certainly more than, say, pharmacologists, meteorologists, or economists, at least partially because of the idea of the Hippocratic Oath. The general public understands that doctors are bound by an oath of ethical principles, and this creates trust. It also means that stories about individual incidents of malpractice or ethics breaches tend to be attributed to sole bad actors, rather than large scale conspiracies. After all, there was an oath, and they broke it; clearly it’s on that person, not the people that came up with the oath.

Other fields, of course, have their own ethical standards. And since, in most places, funding for experiments is contingent on approval from an ethics board, they’re reasonably well enforced. A rogue astrophysicist, for instance, would find themselves hard pressed to find the cash on their own to unleash their dark matter particle accelerator, or whatever, if they aren’t getting institutional funding to pay for the electricity. This is arguably a more fail-safe model than the medical field’s, where, with the exception of big experimental projects, ethical reviews mostly happen after something goes wrong. 

But if you ask people around the world to rate the trustworthiness of both physicians and astrophysicists, I’d wager a decent sum that more people will say they trust the medical doctor more. It’s not because the ethical review infrastructure keeps doctors better in check, it’s not because doctors are any better educated in their field, and it’s certainly not anything about the field itself that makes medicine more consistent or less error prone. It’s because medical doctors have an oath. And whether or not we treat oaths as a big deal these days, they make a clear and understandable line in the sand. 

I don’t know whether other sciences need their own oath. In terms of reducing ethical breaches, I doubt it would have a serious impact. But it might help with the public trust and relatability problems that the scientific community seems to be suffering. If there were an oath that made it apparent how the language of scientists, unlike that of pundits, is seldom speculative but always couched in facts; how scientists almost never defend their work even when they believe in it, preferring to let the data speak for itself; and how the best scientists already hold themselves to an inhumanly rigid standard of ethics and impartiality in their work, I think it could go a long way towards improving appreciation of science, and our discourse as a whole.

Mr. Roboto

I’m a skeptic and an intellectual, so I don’t put too much weight on coincidence. But then again, I’m a storyteller, so I love chalking up coincidences to some element of an unseen plot.

Yesterday, my YouTube music playlist brought me to Halsey’s “Gasoline”. Thinking it over, I probably heard this song in passing some time ago, but if I did, I didn’t commit it to memory, because hearing it was like listening to it for the first time. And what a day to stumble across it. The lyrics, if you’ve never heard them, go thusly:

And all the people say
You can’t wake up, this is not a dream
You’re part of a machine, you are not a human being
With your face all made up, living on a screen
Low on self esteem, so you run on gasoline

I think there’s a flaw in my code
These voices won’t leave me alone
Well my heart is gold and my hands are cold

Why did this resonate with me so much today of all days? Because I had just completed an upgrade of my life support systems to new software, which for the first time includes new computer algorithms that allow the cyborg parts of me to act in a semi-autonomous manner instead of relying solely on human input.

It’s a small step, both from a technical and a medical perspective. The algorithm it uses is a simple linear regression model, rather than the proper machine learning program people expect will be necessary for fully autonomous artificial organs. The only function the algorithm has at the moment is to track biometrics and shut off the delivery of new medication to prevent an overdose, rather than keeping those biometrics in range in general. And it only does this within very narrow limits; it’s not really a fail-safe against overdoses, because the preventative mechanism is still narrowly applied, and very fallible.
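To make the idea concrete, here’s a toy sketch of that kind of predictive shutoff. This is my own illustration of the general technique, not anything resembling the actual device’s firmware; every function name, threshold, and unit here is hypothetical.

```python
# Toy illustration of a predictive medication shutoff: fit a straight
# line to recent biometric readings, extrapolate the trend a short way
# into the future, and suspend delivery if the prediction crosses a
# safety floor. All names, numbers, and units are made up.

def linear_fit(times, values):
    """Ordinary least-squares fit of values against times.
    Returns (slope, intercept)."""
    n = len(times)
    mean_t = sum(times) / n
    mean_v = sum(values) / n
    cov = sum((t - mean_t) * (v - mean_v) for t, v in zip(times, values))
    var = sum((t - mean_t) ** 2 for t in times)
    slope = cov / var
    return slope, mean_v - slope * mean_t

def should_suspend_delivery(readings, horizon, floor):
    """Extrapolate the recent trend `horizon` minutes past the last
    reading; suspend new medication if the prediction is below `floor`."""
    times = [t for t, _ in readings]
    values = [v for _, v in readings]
    slope, intercept = linear_fit(times, values)
    predicted = slope * (times[-1] + horizon) + intercept
    return predicted < floor

# Example: glucose-like readings falling steadily over 30 minutes.
# The linear trend predicts 60 at minute 60, below the floor of 70,
# so delivery would be suspended before the human notices a problem.
readings = [(0, 120), (10, 110), (20, 100), (30, 90)]
print(should_suspend_delivery(readings, horizon=30, floor=70))  # True
```

The point of the sketch is the “predictive” part: a plain regression over a sliding window is enough to act on where a trend is heading, rather than waiting for a reading to actually cross the danger line.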

But the word prevention is important here. Because this isn’t a simple dead man’s switch. The new upgrade is predictive, making decisions based on what it thinks is going to happen, often before the humans clue in (in the twelve hours since the upgrade, this has already happened to me). In a sense, it is already offloading human cognitive burden and upgrading the human ability to mimic body function. As of yesterday, we are now on the slippery slope that leads to cyborgs having superhuman powers.

We’re getting well into sci-fi and cyberpunk territory here, with the door open to all sorts of futurist speculation, but there are more immediate questions that need to be answered sooner rather than later. For instance, take the EU General Data Protection Regulation, which (near as I, an American non-lawyer, can make heads or tails of it) mandates that companies and people disclose when they use AI or algorithms to make decisions regarding EU citizens or their data, and provides recourse for those who want the decisions reviewed by a human; a nifty idea for ensuring the era of big data remains rooted in human ethics.

But how does it fit if, instead of humans behind algorithms, it’s algorithms behind humans? In a way, all of my decisions are now at least partially based on algorithms, given that the algorithms keep me alive to be able to make decisions, and have taken over other cognitive functions that would otherwise occupy my time and focus. And I do interact with EU citizens. A very strict reading of the EU regulations suggests this might be enough for me to fall under their aegis.

And sure, this has a relatively clear-cut answer today; an EU court isn’t going to rule that all of my actions need to be regulated like AI because I’m wearing a medical device. But as the technology becomes more robust, the line is going to get blurrier, and we’re going to need to start treating some hard ethical questions not as science fiction, but as law. What happens when algorithms start taking over more medical functions? What happens when we start using machines for neurological problems, and there really isn’t a clear line between human and machine in the decision-making process?

I have no doubt that when we get to that point, there will be people who oppose the technology, and want it to be regulated like AI. Some of them will be Westboro Baptist types, but many will be ordinary citizens legitimately concerned about privacy and ethics. How do we build a society so that people who take advantage of these medical breakthroughs aren’t, as in Halsey’s song, derided and ostracized in public? How do we avoid creating another artificial divide and sparking fear between groups?

As usual, I don’t know the answer. Fortunately for us, we don’t need an answer today. But we will soon. The next software update for my medical device, which will have the new algorithms assuming greater functions and finer granularity, is already in clinical trials, and expected to launch this time next year. The EU GDPR was first proposed in 2012 and only rolled out this year. The best way to avoid a dystopian sci-fi future is conscious and concerted thought and discussion today.