Real Men Aren’t Scared of Needles

Since most of my readers access this site from countries where the COVID-19 vaccine is now available, I’m here to remind you to get vaccinated when it’s your turn. If you’re over twelve in the United States, you are eligible now. While there are many things in life that can be safely postponed or procrastinated, this isn’t one of them. Getting as many people vaccinated as quickly as possible is humanity’s last best chance to quash this virus before it becomes endemic, which would make it impossible to go back to normal. 

You’ve probably already heard this argument from better qualified sources than me. And let’s be real, if you haven’t listened to epidemiological statistics or long term morbidity case studies coming from the CDC, you have no reason to listen to them coming from me. So instead, I’m going to present an argument that you probably won’t see on a prime time TV spot any time soon. 

You should get the vaccine because getting the virus will ruin your sex life.

I mean, you should also get it because the virus might kill you, or kill other people, or leave you unable to climb stairs, and so on. But if those stories haven’t convinced you already, clearly you have a different set of priorities. So if you need a better reason than your own survival: you should get vaccinated because more and more COVID-19 survivors are developing sexual dysfunction, in particular male erectile dysfunction. Not just from running out of breath or getting tired, either, but from the virus itself being present long after the acute infection phase. Tissue samples confirm the presence of COVID-19 spike proteins obstructing normal arousal mechanisms.

Don’t take my word for it. The pilot study is open access, and not that long to read by the standards of journal articles. Yes, there is some medical jargon, and there’s the usual amount of carefully worded and qualified statements saying that more study is needed, but the data speaks for itself. It’s incredibly obvious, isn’t it? A novel virus is introduced into our precious bodily fluids without the knowledge of the individual, certainly without any choice. Luckily our scientists are able to interpret the resulting loss of essence correctly. 

There are obviously public health implications in these findings that viral particles are lingering in certain tissues and obstructing function after the acute infectious period. But the American public has demonstrated in its actions that it doesn’t really follow the nuance of public health, or scientific studies, or systemic issues in general. The only people who care about things like disability adjusted life expectancy or long term national stability are over-educated bleeding-heart know-it-alls. On the other hand, protecting one’s manhood from the virus’s attempt to sap and impurify our precious bodily fluids is as American as apple pie. 

World Health Day

The following message is part of a campaign to raise public awareness and resources in light of the global threat posed by COVID-19 on World Health Day. If you have the resources, please consider contributing in any of the ways listed at the end of this post. Remember to adhere to current local health guidelines wherever you are, which may differ from those referenced in this post. 

Now that the world has woken up to the danger that we face in the COVID-19 pandemic, and world leaders have begun to grapple with the problem in policy terms, many individuals have justifiably wondered how long this crisis will last. The answer is, we don’t know. I’m going to repeat this several times, because it’s important to come to terms with this. For all meaningful purposes, we are living through an event that has never happened before. Yes, there have been pandemics this bad in the distant past, and yes, there have been various outbreaks in recent memory, but there has not been a pandemic which is as deadly, and as contagious, and which we have failed to contain so spectacularly, recently enough to use as a clear point of reference. This means that every prediction is not just speculation, but speculation born of an imperfect mosaic. 

Nevertheless, it seems clear that unless we are willing to accept tens of millions of deaths in every country, humanity will need to settle in for a long war. Given the language used by the US President and Queen Elizabeth, the metaphor is apt. Whether “long” means a few months or stretches into next year will depend on several factors, among them whether a culture which has for many decades been inculcated with the notion of personal whimsy and convenience is able to adapt to collective sacrifice. The longer we take to accept the gravity of the threat, the weaker our response will be, and the longer it will take us to recover. Right now all of humanity faces a collective choice. Either we will stubbornly ignore reality, and pay the price with human tragedy of hitherto unimaginable proportions, and repercussions for decades to come, or we will listen to experts and hunker down, give support to those who need it, and help each other through the storm. 

For those who look upon empty streets and bare shelves and proclaim the apocalypse, I have this to say: it is only the apocalypse if we make it such. Granted, it is conceivable that if we lose sight of our goals and our capabilities, either by blind panic or stubborn ignorance, we may find the structures of our society overwhelmed, and the world we know may collapse. This is indeed a possibility, but a possibility which it is entirely within our collective capacity to avoid. The data clearly shows that by taking care of ourselves at home, and avoiding contact with other people or surfaces, we can slow the spread of the virus. With the full mobilization of communities, we can starve the infection of new victims entirely. But even a partial slowing of cases buys us time. With that most valuable of currencies, we can expand hospital capacity, retool our production, and focus our tremendous scientific effort towards forging new weapons in this fight. 

Under wartime pressure, the global scientific community is making terrific strides. Every day, we are learning more about our enemy, and discovering new ways to give ourselves the advantage. Drugs which prove useful are being deployed as fast as they can be produced. With proper coordination from world leaders, production of these drugs can be expanded to give every person the best fighting chance should they become sick. The great challenges now are staying the course, winning the battle for production, and developing humanity’s super weapon.

Staying the course is fairly simple. For the average individual not working essential jobs, it means staying home, avoiding contact as much as possible, and taking care to stay healthy. For communities and organizations, it means encouraging people to stay at home by making this as easy as possible. Those working essential jobs should be given whatever resources they need to carry on safely. Those staying at home need to have the means to do so, both logistically and psychologically. Logistically, many governments are already instituting emergency financial aid to ensure the many people out of work are able to afford staying home, and many communities have used volunteers or emergency workers such as national guard troops to support deliveries of essentials, in order to keep as many people as possible at home. Psychologically, many groups are offering online activities, and many public figures have taken to providing various forms of entertainment and diversion.

Winning the battle for production is harder, but still within reach. Hospitals are very resource intensive at the best of times. Safety in a healthcare setting means the use of large amounts of single-use disposable materials: not just drugs and their delivery mechanisms, but also personal protective equipment such as masks, gowns, and gloves. If COVID-19 is a war, ventilators are akin to tanks, but PPE is akin to ammunition. Just as it is counterproductive and harmful to ration how many bullets or grenades a soldier may use to win a battle, so too is it counterproductive and harmful to insist that our frontline healthcare workers make do with a limited amount of PPE. 

The size and scope of the present crisis, taken with the amount of time we have to act, demands a global industrial mobilization unprecedented during peacetime, and unseen in living memory. It demands either that individuals exhibit self discipline and a regard for the common good, or central authorities control the distribution of scarce necessities. It demands that we examine new ways of meeting production needs while minimizing the number of people who must be kept out at essential jobs. For the individual, this mobilization may require further sacrifice; during the mobilization of WWII, certain commodities such as automobiles, toys, and textiles were unavailable or out of reach. This is the price we paid to beat back the enemy at the gates, and today we find ourselves in a similar boat. All of these measures are more effective if taken calmly in advance by central government, but if they are not they will undoubtedly be taken desperately by local authorities. 

Lastly, there is the challenge of developing a tool which will put an end to the threat of millions of deaths. In terms of research, there are several avenues which may yield fruit. Many hopes are pinned on a vaccine, which would grant immunity to the uninfected, and allow us to contain the spread without mass quarantine. Other researchers are looking for a drug, perhaps an antiviral or immunomodulator, which might make COVID-19 treatable at home with a pill, much like Tamiflu blunted the worst of H1N1. Still others are searching for antibodies which could be synthesized en masse, to be infused into the blood of vulnerable patients. Each of these leads requires a different approach. However, they all face the common challenge of not only proving safety and effectiveness against COVID-19, but also giving us an understandable mechanism of action.

Identifying the “how and why” is not merely of great academic interest, but a pressing medical concern. Coronaviruses are notoriously unstable and prone to mutation; indeed, there are those who speculate that COVID-19 may be more than one strain. Finding a treatment or vaccine without understanding our enemy exposes us to the risk of other strains emerging, undoing our hard work and invalidating our collective sacrifices. Cracking the COVID-19 code is a task of great complexity, requiring a combination of human insight and brilliance, bold experimentation, luck, and enormous computational resources. And like the Allied efforts against the German Enigma, today’s computer scientists have given us a groundwork to build on.

Unraveling the secrets of COVID-19 requires modeling how viral proteins fold and interact with other molecules and proteins. Although protein folding follows fairly simple rules, the computational power required to actually simulate them is enormous. For this, scientists have developed the Folding@Home distributed computing project. Rather than constructing a new supercomputer which would exceed all past attempts, this project aims to harness the power of unused personal computers in a decentralized network. Since the beginning of March, Folding@Home has focused its priorities on COVID-19 related modeling, and has been inundated with people donating computing power, to the point that they had to get help from other web services companies because simulations were being completed faster than their web servers could assign them.

At the beginning of March, the computing power of the entire project clocked in at around 700 petaFLOPS, FLOPS being a unit of computing power meaning floating point operations per second. During the Apollo moonshot, a NASA supercomputer would average somewhere around 100,000 FLOPS. Two weeks ago, the project announced a new record in the history of computing: more than an exaFLOP of sustained distributed computing power, or 10^18 FLOPS. With the help of Oracle and Microsoft, by the end of March, Folding@Home exceeded 1.5 exaFLOPS. These historic and unprecedented feats are a testament to the ability of humanity to respond to a challenge. Every day this capacity is maintained or exceeded brings us closer to breaking the viral code and ending the scourge. 
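For a sense of scale, here is a quick back-of-the-envelope comparison in Python, using only the approximate figures quoted above, so treat the outputs as rough ratios rather than benchmarks:

```python
# Rough comparison of the computing scales mentioned in this post.
APOLLO_ERA_FLOPS = 1e5       # ~100,000 FLOPS for an Apollo-era NASA supercomputer
EARLY_MARCH_FLOPS = 700e15   # ~700 petaFLOPS at the start of March
LATE_MARCH_FLOPS = 1.5e18    # ~1.5 exaFLOPS after Oracle and Microsoft pitched in

print(f"Growth over the month of March: {LATE_MARCH_FLOPS / EARLY_MARCH_FLOPS:.1f}x")
print(f"Folding@Home vs. Apollo-era computing: {LATE_MARCH_FLOPS / APOLLO_ERA_FLOPS:.1e}x")
```

In other words, the network roughly doubled in a single month, and sits some thirteen orders of magnitude above the machines that got us to the moon.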

Humanity’s great strength has always lain in our ability to learn, and to take collective action based on reason. Beating back COVID-19 will entail a global effort, in which every person has an important role to play. Not all of us can work in a hospital or a ventilator factory, but there’s still a way each of us can help. If you can afford to donate money, the World Health Organization’s Solidarity Fund is coordinating humanity’s response to the pandemic. Folding@Home is using the power of your personal computers to crack the COVID-19 code. And if nothing else, every person who stays healthy by staying home, washing hands, wearing homemade masks and keeping social distance is one less person to treat in the ICU. 

This Was A Triumph

Today I am happy to announce a new milestone. As of today I have received from my manufacturer the authorization code to initiate semi-closed loop mode on my life support devices. This means that for the first time, my life support devices are capable of keeping me alive for short periods without immediate direct human intervention. For the first time in more than a decade, it is now safe for me to be distracted by such luxuries as homework, and sleep. At least, for short periods, assuming everything works within normal parameters. 

Okay, yes, this is a very qualified statement. Compared to the kind of developments which are daily promised by fundraising groups and starry eyed researchers, this is severely underwhelming. Even compared solely to technologies which have already proven themselves in other fields and small scale testing, the product which is now being rolled out is rather pathetic. There are many reasons for this, from the risk aversion of industry movers, to the glacial pace of regulatory shakers, to a general shortage of imagination among decision makers. It is easy to find reasons to be angry and feel betrayed that the US healthcare system has once again failed to live up to its promise of delivering breakneck innovation and improvement.

Even though this is disappointing compared to the technological relief we were marketed, I am still excited about this development. First of all, because it is a step in the right direction, even if a small one, and any improvement is worth celebrating. Secondly, and chiefly, because I believe that even if this particular new product is only an incremental improvement over the status quo, and pales in comparison to what had been promised for the past several decades, the particular changes represent the beginning of a larger shift. After all, this is the first iteration of this kind of life support device which uses machine learning not merely to enable a fail-safe that prevents medication overdoses, but to actually make proactive treatment decisions without human oversight.

True, the parameters for this decision making are remarkably conservative, some argue to the point of uselessness. The software will not deploy under anything short of perfect circumstances, its treatment targets fall short of most clinical targets, let alone best practices, the modeling is not self-correcting, and the software cannot interpret human intervention, which makes it mutually exclusive with aggressive treatment by a human.
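To make those constraints concrete, here is a purely illustrative sketch in Python of the kind of ultra-conservative decision loop being described. Every name, threshold, and unit here is hypothetical; this is a cartoon of the behavior, not the manufacturer's algorithm:

```python
# Illustrative only: a cartoon of a conservative semi-closed-loop controller.
# All names, thresholds, and units are hypothetical, not the real device logic.

def suggest_adjustment(reading: float, trend: float, human_override_active: bool):
    CONSERVATIVE_TARGET = 150      # hypothetical target, laxer than clinical best practice
    SAFE_LOW, SAFE_HIGH = 80, 250  # only acts inside a narrow "perfect circumstances" band

    if human_override_active:
        return None  # defers entirely whenever a human has intervened
    if not (SAFE_LOW <= reading <= SAFE_HIGH):
        return None  # refuses to act outside ideal conditions
    # Fixed, non-self-correcting model: one proportional nudge toward the target,
    # scaled down when the reading is already falling.
    return 0.1 * (reading - CONSERVATIVE_TARGET) * (1.0 if trend >= 0 else 0.5)

print(suggest_adjustment(reading=210, trend=1.0, human_override_active=False))
# prints roughly 6.0, a modest nudge toward the (hypothetical) target
```

The point of the sketch is the shape of the logic, not the numbers: the default answer is "do nothing," and any action it does take is small, fixed, and instantly abandoned the moment a human steps in.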

Crucially, however, it is making decisions instead of a human. We are over the hump on this development. Critiques of its decision-making skill can be addressed down the line, and I expect once the data is in, it will be a far easier approval and rollout process than the initial version. But unless some new hurdle appears, as of now we are on the path towards full automation.

Some Like It Temperate

I want to share something that took me a while to understand, but once I did, it changed my understanding of the world around me. I’m not a scientist, so I’m probably not going to get this exactly perfect, and I’ll defer to professional judgment, but maybe I can help illustrate the underlying concept.

So temperature is not the same thing as hot and cold. In fact, temperature and heat aren’t really bound together inherently. On earth, they’re usually correlated, and as humans, our sensory organs perceive them through the same mechanism in relative terms, which is why we usually think of them together. This sensory shortcut works for most of the human experience, but it can become confusing and counterintuitive when we try to look at systems of physics outside the scope of an everyday life. 

So what is temperature? Well, in the purest sense, temperature is a measure of the average kinetic energy among a group of particles. How fast are they going, how often are they bumping into each other, and how much energy are they giving off when they do? This is how temperature and phase of matter correlate. So liquid water has a higher temperature than ice because its molecules are moving around more, with more energy. Because the molecules are moving around more, they can slide past one another, which is why it’s easier to cut through water than ice. Likewise, it’s easier still to cut through steam than water. Temperature is a measure of molecular energy, not hotness. Got it? Good, because it’s about to get complicated.
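For the curious, the textbook version of this idea (under the simplifying assumption of a monatomic ideal gas, where only straight-line motion counts) ties the average kinetic energy per particle to temperature through the Boltzmann constant:

$$\langle E_k \rangle = \tfrac{3}{2} k_B T$$

where $k_B \approx 1.38 \times 10^{-23}$ joules per kelvin. Hotter just means more average jiggling per particle.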

So something with more energy has a higher temperature. This works for everything we’re used to thinking about as being hot, but it applies in a wider context. Take radioactive material. Or don’t, because it’s dangerous. Radioactivity is dangerous because it has a lot of energy, and is throwing it off in random directions. Something that’s radioactive won’t necessarily feel hot, because the way it gives off radiation isn’t something our sensory organs are calibrated to detect. You can pick up an object with enough radiated energy to shred through the material in your cells and kill you, and have it feel like room temperature. That’s what happened to the firemen at Chernobyl. 

In a technical sense, radioactive materials have a high temperature, since they’re giving off lots of energy. That’s what makes them dangerous. At the same time, though, you could get right up next to highly enriched nuclear materials (and under no circumstances should you ever try this) without feeling warm. You will feel something eventually, as your cells react to being ripped apart by a hail of neutrons and other subatomic particles. You might feel heat as your cells become irradiated and give off their own energy, but not from the nuclear materials themselves. Also, if this happens, it’s too late to get help. So temperature isn’t necessarily what we think it is.

Space is another good example. We call space “cold”, because water freezes when exposed to it. And space will feel cold, since it will immediately suck all the carefully hoarded energy out of any body part exposed to it. But actually, space, at least within the solar system, has a very high temperature wherever it encounters particles, for the same reason as above. The sun is a massive ongoing thermonuclear explosion that makes even our largest atom bombs jealous. There is a great deal of energy flying around the empty space of the solar system at any given moment; it just doesn’t have many particles to give its energy to. This is why the top layer of the atmosphere, the thermosphere, has a very high temperature, despite being totally inhospitable, and why astronauts are at increased cancer risk. 

This confusion is why most scientists who are dealing with fields like chemistry, physics, or astronomy use the Kelvin scale. One degree on the Kelvin scale, or one kelvin, is equivalent to one degree Celsius. However, unlike Celsius, where zero is the freezing point of water, zero kelvins is known as Absolute Zero, a so-far theoretical temperature where there is no movement among the involved particles. This is harder to achieve than it sounds, for a variety of complicated quantum reasons, but consider that body temperature is 310 K, on a scale where one hundred degrees span the entire difference between freezing and boiling. Some of our attempts so far to reach absolute zero have involved slowing down individual particles by suspending them in lasers, which has gotten us close, but those last fractions of a degree are especially tricky. 
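If it helps, the whole relationship fits in a couple of lines of Python; this is just a minimal sketch of the conversion, since the degree size is identical and only the zero point moves:

```python
# Minimal sketch of the Celsius-to-Kelvin relationship described above.
def celsius_to_kelvin(celsius: float) -> float:
    return celsius + 273.15  # same size degree, different zero point

print(celsius_to_kelvin(0))    # freezing point of water: 273.15 K
print(celsius_to_kelvin(37))   # body temperature: ~310 K
print(celsius_to_kelvin(100))  # boiling point of water: 373.15 K
```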

The Kelvin scale hasn’t really caught on in the same way as Celsius, perhaps because it’s an unwieldy three digits for anything in the normal human range. And given that the US is still dragging its feet about Celsius, which goes back to the French Revolution, not a lot of people are willing to die on that hill. But the Kelvin scale does underline an important distinction between temperature as a universal property of physics and the relative, subjective, inconsistent way that we’re used to feeling it in our bodies.

Which is perhaps interesting, but I said this was relevant to looking at the world, so how’s that true? Sure, it might be more scientifically rigorous, but that’s not always essential. If you’re a redneck farm boy about to jump into the crick, Newtonian gravity is enough without getting into quantum theory and spacetime distortion, right?
Well, we’re having a debate on this planet right now about something referred to as “climate change”, a term which has been promoted in place of the previous term “global warming”. Advocates of doing nothing have pointed out that, despite all the graphs, it doesn’t feel noticeably warmer. Certainly, they point out, the weather hasn’t been warmer, at least not consistently, on a human timescale. How can we be worried about increased temperature if it’s not warmer?

And, for as much as I suspect the people presenting these arguments to the public have ulterior motives, whether they are economic or political, it doesn’t feel especially warmer, and it’s hard to dispute that. Scientists, for their part, have pointed out that they’re examining the average temperature over a prolonged period, producing graphs which show the trend. They have gone to great lengths to explain the biggest culprit, the greenhouse effect, which fortunately does click nicely with our intuitive human understanding. Greenhouses make things warmer, neat. But not everyone follows before and after that. 

I think part of what’s missing is that scientists are assuming that everyone is working from the same physics-textbook understanding of temperature and energy. This is a recurring problem for academics and researchers, especially when the 24-hour news cycle (and the academic publicists that feed it) jumps the gun and snatches results from scientific publications without translating the jargon for the layman. If temperature is just how hot it feels, and global warming means it’s going to feel a couple degrees hotter outside, it’s hard to see how that gets to doomsday predictions, or why it requires me to give up plastic bags and straws. 

But as we’ve seen, temperature can be a lot more than just feeling hot and cold. You won’t feel hot if you’re exposed to radiation, and firing a laser at something seems like a bad way to freeze it. We are dealing on a scale that requires a more consistent rule than our normal human shortcuts. Despite being only a couple of degrees of temperature, the amount of energy we’re talking about here is massive. If we say the atmosphere is roughly 5×10^18 kilograms, and the amount of energy it takes to raise a kilogram of air one kelvin is about 1 kJ, then we’re looking at 5,000,000,000,000,000,000 kilojoules. 

That’s a big number; what does it mean? Well, if my math is right, that’s about 1.1 million megatons of TNT. A megaton is a unit used to measure the explosive yield of strategic nuclear weapons. The nuclear bomb dropped on Nagasaki, the bigger of the two, was somewhere in the ballpark of 0.02 megatons. The largest bomb ever detonated, the Tsar Bomba, was 50 megatons. The total energy expenditure of all nuclear testing worldwide is estimated at about 510 megatons, or about 0.05% of the energy we’re introducing with each degree of climate change. 

Humanity’s entire current nuclear arsenal is estimated somewhere in the ballpark of 14,000 bombs. This is very much a ballpark figure, since some countries are almost certainly bluffing about what weapons they do and don’t have, and how many. The majority of these, presumably, are cheaper, lower-yield tactical weapons. Some, on the other hand, will be over-the-top monstrosities like the Tsar Bomba. Let’s generously assume that these highs and lows average out to about one megaton apiece. Suppose we detonated all of those at once. I’m not saying we should do this; in fact, I’m going to go on record as saying we shouldn’t. But let’s suppose we do, releasing 14,000 megatons of raw, unadulterated atom-splitting power in a grand, civilization-ending bonanza. In that instant, we would have unleashed approximately one percent of the energy we are adding with each degree of climate change. 
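If you want to check my math, here is the whole back-of-the-envelope chain in a few lines of Python. The constants are rough, assumed values (the atmosphere's mass, the specific heat of air, the TNT conversion factor, the one-megaton-average arsenal), so the outputs land within rounding distance of the figures above rather than matching them exactly:

```python
# Back-of-the-envelope check of the climate-energy comparison above.
# All constants are approximate, assumed values for illustration only.
ATMOSPHERE_MASS_KG = 5e18         # rough mass of Earth's atmosphere
SPECIFIC_HEAT_KJ_PER_KG_K = 1.0   # ~specific heat of air, in kJ per kg per kelvin
KJ_PER_MEGATON_TNT = 4.184e12     # 1 megaton of TNT = 4.184e15 J = 4.184e12 kJ
ARSENAL_MEGATONS = 14_000         # ~14,000 warheads at an assumed 1 megaton apiece

energy_per_kelvin_kj = ATMOSPHERE_MASS_KG * SPECIFIC_HEAT_KJ_PER_KG_K
megatons_per_kelvin = energy_per_kelvin_kj / KJ_PER_MEGATON_TNT

print(f"Energy to warm the atmosphere by 1 K: {energy_per_kelvin_kj:.1e} kJ")
print(f"Equivalent in megatons of TNT:        {megatons_per_kelvin:,.0f}")
print(f"Entire nuclear arsenal as a share:    {ARSENAL_MEGATONS / megatons_per_kelvin:.1%}")
```

With these rounded constants it comes out to roughly 1.2 million megatons per degree, with the whole arsenal at a little over one percent of that, which is the same ballpark as the figures quoted above.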

This additional energy means more power for every hurricane, wildfire, flood, tornado, drought, blizzard, and weather system everywhere on earth. The additional energy is being absorbed by glaciers, which then have too much energy to remain frozen, and so are melting, raising sea levels. The chain of causation is complicated, and involves understanding of phenomena which are highly specialized and counterintuitive to our experience from most of human existence. Yet when we examine all of the data, this is the pattern that seems to emerge. Whether or not we fully understand the patterns at work, this is the precarious situation in which our species finds itself. 

A Lesson in Credulity

Last week I made a claim that, on review, might be untrue. This was bound to happen sooner or later. I do research these posts, but except for the posts where I actually include a bibliography, I’m not fact checking every statement I make. 


One of the dangers of being smart, of being told that you’re smart, and of repeatedly getting good grades or otherwise being vindicated on matters of intelligence, is that it can lead to a sense of complacency. I’m usually right, I think to myself, and when I think I know a fact, it’s often true, so unless I have some reason to suspect I’m wrong, I don’t generally check. For example, take the statement: more people who voted Republican in the last election live to the south of me than to the north. 

I am almost certain this is true, even without checking. I would probably bet money on it. I live north of New York City, so there aren’t even that many people north of me, let alone Republican voters. It’s objectively possible that I’m wrong. I might be missing some piece of information, like a large population of absentee Republicans in Canada, or the state of Alaska. Or I might simply be mistaken. Maybe the map I’m picturing in my head misrepresents how far north I am compared to other northern border states like North Dakota, Michigan, and Wisconsin. But I’m pretty sure I’m still right here, and until I started second guessing myself for the sake of argument, I would have confidently asserted that statement as fact, and even staked a sizable sum on it. 

Last week I made the following claim: Plenty of studies in the medical field have exalted medical identification as a simple, cost-effective means of promoting patient safety. 

I figured that this had to be true. After all, doctors recommend wearing medical identification almost universally. It’s one of those things, like brushing your teeth, or eating your vegetables, that’s such common advice that we assume it to be proven truth. After all, if there wasn’t some compelling study to show it to be worthwhile, why would doctors continue to breathe down the necks of patients? Why would patients themselves put up with it? Why would insurance companies, which are some of the most ruthlessly skeptical entities in existence, especially when it comes to paying for preventative measures, shell out for medical identification unless it was already demonstrated to be a good deal in the long run?

Turns out I may have overestimated science and economics here. Because in writing my paper, I searched for that definitive overarching study or meta-analysis that conclusively proved that medical identification had a measurable positive impact. I searched broadly on Google, and also through the EBSCO search engine, which my trusty research librarian told me was the best agglomeration of scientific and academic literature tuition can buy. I went through papers from NIH immunohematology researchers to the Army Medical Corps; from clinics in the Canadian high arctic to the developing regions of Southeast Asia. I read through translations of papers originally published in French and Chinese, in the most prestigious journals of their home countries. And I found no conclusive answers.

 There was plenty of circumstantial evidence. Every paper I found supported the use of medical identification. Most papers I found were actually about other issues, and merely alluded to medical identification by describing how they used it in their own protocols. In most clinics, it’s now an automatic part of the checklist to refer newly diagnosed patients to wear medical identification; almost always through the MedicAlert Foundation.

The two papers I found that addressed the issue head on were a Canadian study about children wearing MedicAlert bracelets being bullied, and a paper in an emergency services journal about differing standards in medical identification. Both of these studies, though, seemed to skirt around the quantifiable efficacy of medical identification and were more interested in the tangential effects.

There was a third paper that dealt more directly as well, but there was something fishy about it. The title was “MedicAlert: Speaking for Patients When They Can’t”, and the language and graphics were suspiciously similar to the advertising used by the MedicAlert Foundation website. By the time I had gotten to this point, I was already running late with my paper. EBSCO listed the paper as “peer reviewed”, which my trusty research librarian said meant it was credible (or at least, credible enough), and it basically said exactly the things that I needed a source for, so I included it in my bibliography. But looking back, I’m worried that I’ve fallen into the Citogenesis trap, just this time with a private entity rather than Wikipedia.
The conspiracy theorist in me wants to jump to the conclusion that I’ve uncovered a massive ruse; that the MedicAlert Foundation has created and perpetuated a myth about the efficacy of their services, and the sheeple of the medical-industrial complex are unwitting collaborators. Something something database with our medical records something something hail hydra. This pretty blatantly fails Occam’s Razor, so I’m inclined to write it off. The most likely scenario here is that there is a study lying around that I simply missed in my search, and it’s so old and foundational that later research has just accepted it as common knowledge. Or maybe it was buried deep in the bibliographies of other papers I read, and I just missed it. 

Still, the fact that I didn’t find this study when explicitly looking for it raises questions. Which leads me to the next most likely scenario: I have found a rare spot of massive oversight in the medical scientific community. After all, the idea that wearing medical identification is helpful in an emergency situation is common sense, bordering on self-evident. And there’s no shortage of anecdotes from paramedics and ER doctors that medical identification can help save lives. Even in the literature, while I can’t find an overview, there are several individual case studies. It’s not difficult to imagine that doctors have simply taken medical identification as a logical given, and gone ahead and implemented it into their protocols.

In that case, it would make sense that MedicAlert would jump on the bandwagon. If anything, having a single standard makes the process more rigorous. I’m a little skeptical that insurance companies just went along with it; it’s not like common sense has ever stopped them from penny-pinching before. But who knows, maybe this is the one time they took doctors at their word. Maybe, through some common consensus, this has just become a massive blind spot for research. After all, I only noticed it when I was looking into something tangential to it. 
So where does this leave us? If the data is really out there somewhere, then the only problem is that I need a better search engine. If this is part of a blind spot, if the research has never been done and everyone has just accepted it as common sense, then it needs to be put in the queue for an overarching study. Not that I expect such a study would fail to find a correlation between wearing medical identification and better health outcomes. After all, it’s common sense. But we can do better than just acting on common sense and gut instincts. We have to do better if we want to advance as a species.

The other reason why we need to have hard, verifiable numbers with regard to efficacy, besides the possibility we might discover our assumptions were wrong, is to have a way to justify the trade-off. My whole paper has been about trying to establish the trade-off a person makes when deciding to wear medical identification, in terms of stigma, self-perception, and comfort. We often brush this off as being immaterial. And maybe it is. Maybe, next to an overwhelming consensus of evidence showing a large and measurable positive impact on health outcomes, some minor discomfort wearing a bracelet for life is easily outweighed. 

Then again, what if the positive impact is fairly minor? If the statistical difference amounts only to, let’s say, a few extra hours of life expectancy, is that worth a lifetime of having everyone know that you’re disabled wherever you go? People I know would disagree on this matter. But until we can say definitively what the medical impact is on the one hand, we can’t weigh it against the social impact on the other. We can’t have a real debate based on folk wisdom versus anecdotes. 

On Hippocratic Oaths

I’ve been thinking about the Hippocratic Oath this week. This came up while wandering around campus during downtime, when I encountered a mural showing a group of nurses posing heroically, amid a collage of vaguely related items, between old timey nurse recruitment posters. In the background, the words of the Hippocratic Oath were typed behind the larger than life figures. I imagine they took cues from military posters that occasionally do similar things with oaths of enlistment. 

I took special note of this, because strictly speaking, the Hippocratic Oath isn’t meant for nurses. It could arguably apply to paramedics or EMTs, since, in terms of professional lineage at least, a paramedic is a watered-down doctor, the first ambulances being an extension of the military hospitals and hence under the aegis of surgeons and doctors rather than nurses. But that kind of pedantic argument not only ignores actual modern day training requirements, since in most jurisdictions the requirements for nurses are more stringent than for EMTs and at least as stringent as for paramedics, but shortchanges nurses, a group to whom I owe an enormous debt of gratitude and for whom I hold an immense respect. 

Besides which, whether or not the Hippocratic Oath – or rather its modern successors, since the oath recorded by Hippocrates himself is recognized as being outdated and has been almost universally superseded – is necessarily binding on nurses, it is hard to argue that the basic principles aren’t applicable. Whether or not modern nurses have at their disposal the same curative tools as their doctorate-holding counterparts, they still play an enormous role in patient outcomes. In fact, by some scientific estimates, the quality of nursing staff may actually matter more than the actions undertaken by doctors. 

Moreover, all of the ethical considerations still apply. Perhaps most obviously, respect for patients and patient confidentiality. After all, how politely the doctor treats you in their ten minutes of rounds isn’t going to outweigh how the people directly overseeing your care treat you for the rest of the day. And as far as confidentiality, whom are you more concerned about gossiping: the nerd who reads your charts and writes out your prescription, or the nurse who’s in your room, undressing you to inject the drugs into the subcutaneous tissue where the sun doesn’t shine? 

So I don’t actually mind if nurses are taking the Hippocratic Oath, whether or not it historically applies. But that’s not why it’s been rattling around my mind the last week. 

See, my final paper in sociology is approaching. Actually, it’s been approaching; at this point the paper is waiting impatiently at the door to be let in. My present thinking is that I will follow the suggestion laid down in the syllabus and create a survey for my paper. My current topic regards medical identification. Plenty of studies in the medical field have exalted medical identification as a simple, cost-effective means of promoting patient safety. But compelling people to wear something that identifies them as being part of a historically oppressed minority group has serious implications that I think are being overlooked when we treat people who refuse to wear medical identification in the same group as people who refuse to get vaccinated, or take prescribed medication.

What I want to find out in my survey is why people who don’t wear medical identification choose not to. But to really prove (or disprove, as the case may be, since a proper scientific approach demands that possibility) my point, I need to get at the sensitive matters at the heart of this issue: medical issues and minority status. This involves a lot of sensitive topics, and consequently gathering data on it means collecting potentially sensitive information. 

This leaves me in an interesting position. The fact that I am doing this for a class at an accredited academic institution gives me credibility, if more so with the lay public than among those who know enough about modern science to realize that I have no real earned credentials. But the point remains, if I posted online that I was conducting a survey for my institution, which falls within a stretched interpretation of the truth, I could probably get many people to disclose otherwise confidential information to me. 

I have never taken an oath, and I have essentially no oversight in the execution of this survey, other than the bare minimum privacy safeguards required by the FCC for my use of the internet, which I can satisfy through a simple checkbox in the United States. If I were so inclined, I could take this information entrusted to me, and either sell it, or use it for personal gain. I couldn’t deliberately target individual subjects, more because that would be criminal harassment than because of any breach of trust. But I might be able to get away with posting it online and letting the internet wreak what havoc it will. This would be grossly unethical and bordering on illegal, but I could probably get away with it. 

I would never do that, of course. Besides being wrong on so many different counts, including betraying the trust of my friends, my community, and my university, it would undermine trust in the academic and scientific communities, at a time when they have come under political attack by those who have a vested interest in discrediting truth. And as a person waiting on a breakthrough cure that will allow me to once again be a fully functional human being, I have a vested interest in supporting these institutions. But I could do it, without breaking any laws, or oaths.

Would an oath stop me? If, at the beginning of my sociology class, I had stood alongside my fellow students, with my hand on the Bible I received in scripture class, in which I have sought comfort and wisdom in dark hours, and swore an oath like the Hippocratic one or its modern equivalents to adhere to ethical best practices and keep to my responsibilities as a student and scientist, albeit of sociology rather than one of the more sciency sciences, would that stop me if I had already decided to sell out my friends?

I actually can’t say with confidence. I’m inclined to say it would, but this is coming from the version of me that wouldn’t do that anyway. The version of me that would cross that line is probably closer to my early-teenage self, whom my modern self has come to regard with a mixture of shame and contempt, who essentially believed that promises were made to be broken. I can’t say for sure what this version of myself would have done. He shared a lot of my respect for science and protocol, and there’s a chance he might’ve been really into the whole oath vibe. So it could’ve worked. On the other hand, if he thought he would’ve gained more than he had to lose, I can imagine how he would’ve justified it to himself. 

Of course, the question of the Hippocratic oath isn’t really about the individual that takes it, so much as it is the society around it. It’s not even so much about how the society enforces oaths and punishes oath-breakers. With the exception of perjury, we’ve kind of moved away from Greco-Roman style sacred blood oaths. Adultery and divorce, for instance, are both oath-breaking, but apart from the occasional tut-tut, as a society we’ve more or less just agreed to let it slide. Perhaps as a consequence of longer and more diverse lives, we don’t really care about oaths.

Perjury is another interesting case, though. Because contrary to an occasionally held belief, the crime of perjury isn’t actually affected by whether the lie in question is about some other crime. If you’re on the stand for another charge of which you’re innocent, and your alibi is being at Steak Shack, but you say you were at Veggie Villa, that’s exactly as much perjury as if you had been at the scene of the crime and lied about that. This is because witness testimony is treated legally as fact. The crime of perjury isn’t about trying to get out of being punished. It’s about the integrity of the system. That’s why there’s an oath, and why that oath is taken seriously.

The revival of the Hippocratic Oath as an essential part of the culture of medicine came after World War II, at least partially in response to the conclusion of the Nuremberg Trials and revelations about the Holocaust. Particularly horrifying was how Nazi doctors had been involved in the process, both in the acute terms of unethical human experimentation, and in providing medical expertise to ensure that the apparatus of extermination was as efficient as possible. The Red Cross was particularly alarmed: here were people who had dedicated their lives to an understanding of the human condition, and had either sacrificed all sense of morality in the interest of satiating base curiosity, or had actively taken the tools of human progress to inflict destruction in service of an evil end. 

Doctors were, and are, protected under the Geneva Convention. Despite what Hollywood and video games suggest, shooting a medic wearing a medical symbol, even one coming off a landing craft towards your country, is a war crime. As a society, we give them enormous power, with the expectation that they will use that power and their knowledge and skills to help us. This isn’t just some set of privileges we give doctors because they’re smart, though; that trust is essential to their job. Doctors can’t perform surgery if they aren’t trusted with knives, and we can’t eradicate polio if no one is willing to be inoculated.

The first of the modern wave of revisions of the Hippocratic Oath to make it relevant and appropriate for today started with the Red Cross after World War II. The goal was twofold. First: establish trust in medical professionals by setting down a simple, overriding set of basic ethical principles that can be distilled down to a simple oath, so that it can be understood by everyone. Second: make this oath not only universal within the field, but culturally ubiquitous, so as to make it effectively self-enforcing. 

It’s hard to say whether this gambit has worked. I’m not sure how you’d design a study to test it. But my gut feeling is that most people trust their own doctors, certainly more than, say, pharmacologists, meteorologists, or economists, at least partially because of the idea of the Hippocratic Oath. The general public understands that doctors are bound by an oath of ethical principles, and this creates trust. It also means that stories about individual incidents of malpractice or ethics breaches tend to be attributed to sole bad actors, rather than large scale conspiracies. After all, there was an oath, and they broke it; clearly it’s on that person, not the people that came up with the oath.

Other fields, of course, have their own ethical standards. And since, in most places, funding for experiments is contingent on approval from an ethics board, they’re reasonably well enforced. A rogue astrophysicist, for instance, would find themselves hard pressed to find the cash on their own to unleash their dark matter particle accelerator, or whatever, if they aren’t getting their funding to pay for electricity. This is arguably a more fail-safe model than the medical field, where, with the exception of big, experimental projects, ethical reviews mostly happen after something goes wrong. 

But if you ask people around the world to rate the trustworthiness of both physicians and astrophysicists, I’d wager a decent sum that more people will say they trust the medical doctor more. It’s not because the ethical review infrastructure keeps doctors better in check, it’s not because doctors are any better educated in their field, and it’s certainly not anything about the field itself that makes medicine more consistent or less error prone. It’s because medical doctors have an oath. And whether or not we treat oaths as a big deal these days, they make a clear and understandable line in the sand. 

I don’t know whether other sciences need their own oath. In terms of reducing ethical breaches, I doubt it will have a serious impact. But it might help with the public trust and relatability problems that the scientific community seems to be suffering. If there were an oath that made it apparent how the language of scientists, unlike that of pundits, is seldom speculative but always couched in facts; how scientists almost never defend their work even when they believe in it, preferring to let the data speak for itself; and how the best scientists already hold themselves to an inhumanly rigid standard of ethics and impartiality in their work, I think it could go a ways towards improving appreciation of science, and our discourse as a whole.

Time Flying

No. It is not back to school season. I refuse to accept it. I have just barely begun to enjoy summer in earnest. Don’t tell me it’s already nearly over.

It feels like this summer really flew by. This is always true to an extent, but it feels more pronounced this year, and I’m not really sure how to parse it. I’m used to having time seemingly ambush me when I’m sick, having weeks seem to disappear from my life in a feverish haze, but not when I’m well.

If I have to start working myself back towards productivity, and break my bohemian habit of rising at the crack of noon, then I suppose that summer was worthwhile. I didn’t get nearly as much done as I expected. Near as I can tell, nothing I failed to accomplish was vital in any meaningful respect, but it is somewhat disappointing. I suppose I expected to have more energy to tick things off my list. Then again, the fact that nothing was vital meant that I didn’t really push myself. It wasn’t so much that I tried and failed as I failed to try.

Except I can’t help but think that the reason I didn’t push myself (and still am not pushing myself, despite having a few days left) is, aside from a staunch commitment to avoid overtaxing myself before the school year even begins, a sense that I would have plenty of time later. Indeed, this has been my refrain all season long. And despite this, the weeks and months have sailed by, until, to my alarm and terror, we come upon mid-August, and I’ve barely reached the end of my June checklist.

Some of it is simple procrastination, laziness, and work-shyness, and I’ll own that. I spent a lot of my time this summer downright frivolously, and even in retrospect, I can’t really say I regret it. I enjoyed it, after all, and I can’t really envision a scenario where I would’ve enjoyed it in moderation and been able to get more done without the sort of rigid planned schedules that belie the laid back hakuna matata attitude that, if I have not necessarily successfully adopted, I have at least taken to using as a crutch in the face of the looming terror of starting college classes.

But I’m not just saying “summer flew by” as an idle excuse to cover my apparent lack of progress. I am genuinely concerned that the summer went by faster than some internal sense of temporal perception says it ought to have, like a step that turned out to be off kilter from the preceding stairs, causing me to stumble. And while this won’t get me my time back, and is unlikely to be something I can fix, even if it is an internal mental quirk, should I not at least endeavor to be aware of it, in the interest of learning from past mistakes?

So, what’s the story with my sense of time?

One of the conversations I remember most vividly from my childhood was about how long an hour is. It was a sunny afternoon late in the school year, and my mother was picking my brother and me up from school. A friend of mine invited us over for a play*, but my mother stated that we had other things to do and places to be.

*Lexicographical sidenote: I have been made aware that the turn of phrase, “to have a play” may be unique to Australian vocabulary. Its proper usage is similar to “have a swim” or “have a snack”. It is perhaps most synonymous with a playdate, but is more casual, spontaneous, and carries less of a distinctly juvenile connotation.

I had a watch at this point, and I knew when we had to be elsewhere, and a loose idea of the time it took to get between the various places, and so I made a case that we did in fact have time to go over and have a play, and still get to our other appointments. My mother countered that if we did go, we wouldn’t be able to stay long. I asked how long we would have, and she said only about an hour. I considered this, and then voiced my opinion that an hour is plenty of time; indeed more than enough. After all, an hour was an unbearably long time to wait, and so naturally it should be plenty of time to play.

I would repudiate this point of view several months later, while in the hospital. Lying there in my bed, hooked up to machines, my only entertainment was watching the ticking wall clock, and trying to be quiet enough to hear it reverberate through the room. It should, by all accounts, have been soul-crushingly boring. But the entire time I was dwelling on my dread, because I knew that at the top of every hour, the nurses would come and stab me to draw blood. And even if I made it through this time, I didn’t know how many hours I had left to endure, or indeed, to live.

I remember sitting there thinking about how my mother had in fact been right. An hour isn’t that long. It isn’t enough to make peace, or get over fears, or get affairs in order. It’s not enough to settle down or gear up. This realization struck me like a groundbreaking revelation, and when I look back and try to put a finger on exactly where my childhood ended, that moment stands out as a major point.

That, eleven years ago, was the last major turning point; the last time I remember revising my scale for how long an hour, a day, and so on are in the scheme of things. Slowly, as I’ve gotten older, I’ve become more comfortable with longer time scales, but this hasn’t really had a massive effect on my perception.

Over the past half-decade there have been occasions when, being sick, I have seemed to “lose” time, by not being at full processing capacity as time passes. On other occasions it has been a simple matter of being a homebody, so that the moments when I most recently remember having seen people, which are in reality some time ago, seem more recent than they were, creating a disconnect. But this has always happened as a consequence of being unwell and disconnected from everyday life. In other situations, time has always seemed to match my expectations, and I have been able to use my expectations and perception to have a more intrinsic sense of when I needed to be at certain places.

In the past few months this perception seems to have degraded. Putting my finger on when this started being a noticeable problem is difficult, because much of the past few months has been spent more or less relaxing, which in my case means sleeping in and ignoring the outside world, which as previously noted does tend to affect my perception of how much time has passed. The first time I recall mentioning that time had passed me by was in May, at a conference. I don’t want to give that one data point too much weight, though, because, for one thing, it was a relatively short break in my routine, for another, it was a new conference with nothing to compare it to, and finally, I was jet lagged.

But I definitely do recall mentioning this feeling during the buildup to, and all throughout, our summer travels. This period, unlike previous outings, is definitely long enough that I can say it doesn’t fall into the category of being a homebody. Something has changed in my perception of time, and my sense of how much time I have to work with before scheduled events is degraded.

So what gives? The research into perception of time falls into the overlap between various fields, and is fraught with myths and pseudoscience. For example, it is commonly held and accepted that perception of time becomes faster with age. But this hypothesis dates back to the 1870s, and while there is some evidence to support a correlation, particularly early in life, the correlation is weak, and not linear. Still, this effect is present early in life, and it is plausible that this is part of my problem.

One point that is generally agreed upon in the scientific literature regards the neurochemistry. It seems that the perception of time is mediated by the same mechanisms that regulate our circadian rhythm, specifically dopamine and a handful of other neurotransmitters. Disruptions to these levels cause a corresponding disruption to the sense of time. In particular, it seems that more dopamine causes time to seem to pass faster; hence time seeming to pass faster when one is having fun. This would explain why the passage of time over my vacation has seemed particularly egregious, and also why jet lag seems to have such a profound effect on time perception.

Both of these explanations would go a ways towards explaining the sensorial discrepancy I find. Another explanation would place blame on my glasses, since eye movement seems to also be tied to small-scale passage of time. Perhaps since I have started wearing glasses in the last couple of years, my eyes have been squinting less, and my internal clock has been running subtly slow since, and I am only now starting to notice it.

With the field of time perception research still in relative infancy, the scientific logic behind these explanations is far from ironclad. But then again, it doesn’t need to be ironclad. For our purposes, the neurobiological mechanisms are almost entirely irrelevant. What matters is that the effect is real, that it isn’t just me, nor is it dangerous, and that there’s nothing I can really do about it other than adapt. After all, giving up my glasses and going blind, or giving myself injections of neurotransmitters as a means of deterring procrastination, might be a bit overkill.

What matters is that I can acknowledge this change as an effect that will need to be accounted for going forwards. How I will account for it is outside the scope of this post. Probably I will work to be a bit more organized and sensitive to the clock. But what’s important is that this is a known quantity now, and so hopefully I can avoid being caught so terribly off guard next summer.

Works Consulted
Eagleman, D.M. 2008. Human time perception and its illusions. Current Opinion in Neurobiology 18(2): 131-136.

Friedman, W.J. and S.M.J. Janssen. 2010. Aging and the speed of time. Acta Psychologica 134: 130-141.

Janssen, S.M.J., M. Naka, and W.J. Friedman. 2013. Why does life appear to speed up as people get older? Time & Society 22(2): 274-290.

Wittmann, M. and S. Lehnhoff. 2005. Age effects in perception of time. Psychological Reports 97: 921-935.

The Lego Census

So the other day I was wondering about the demographics of Lego minifigures. I'm sure we're all at least vaguely aware that Lego minifigs tend to be, by default, adult, male, and yellow-skinned. This wasn't terribly worthy of serious thought back when only a handful of different minifigure designs existed. Nowadays, though, Lego has thousands, if not millions, of different minifigure permutations. Moreover, the total number of minifigures in circulation is set to eclipse the number of living humans within a few years.

Obviously, even with a shift towards trying to be more representative, the demographics of Lego minifigures are not an accurate reflection of the demographics of humankind. But just how out of alignment are they? Or, to ask it another way, could the population of a standard Lego city exist in real life without causing an immediate demographic crisis?

This question has bugged me enough that I decided to conduct an informal study based on a portion of my Lego collection, specifically a portion I reckon is large enough to be vaguely representative of a population. I have chosen to conduct my counts based on the central district of the Lego city that exists in our family basement, on the grounds that it includes a sizable population drawn from a variety of different sets.

With that background in mind, I have counted roughly 154 minifigures. The area of survey is the city central district, which for our purposes means the largest tables with the greatest number of buildings and skyscrapers, and so presumably the highest population density.

Because Lego minifigures don’t have numerical ages attached to them, I counted ages by dividing minifigures into four categories: Children, Young Adults, Middle Aged, and Elderly. Obviously these categories are qualitative and subject to some interpretation. Children are easy to spot because they use a different, shorter figure mold. An example of each adult category follows.

The figure on the left would be a young adult. The one in the middle would be classified as middle aged, and the one on the right, elderly.

Breakdown by age

Children (14)
Lego children are the most distinct category because, in addition to childish facial features and clothes, they are given shorter leg pieces. This is the youngest category, as Lego doesn’t include infants in its sets. I would guess that this category covers roughly ages 5-12.

Young Adults (75)
The young adult category encompasses a fairly wide range, from puberty to early middle age. This group is the largest, partially because it includes the large contingent of conscripts serving in the city. An age range would be roughly 12-32.

Middle Aged (52)
Includes visibly older adults who do not meet the criteria for elderly. This group encompasses most of the city’s administration and professionals.

Elderly (13)
The elderly are those who stand out as visibly old, with features such as beards, wrinkled skin, or grey and white hair.

Breakdown by industry

Second is occupations. Since minifigures can’t exactly report their own occupations, and since most jobs happen indoors where I can’t see, I was forced to make some guesses based on outfits and group them into loose categories.

27 Military
15 Government administration
11 Entertainment
9 Law enforcement
9 Transport / Shipping
9 Aerospace industries
8 Heavy industry
6 Retail / services
5 Healthcare
5 Light Industry

An unemployment rate would be hard to gauge, because most of the time the unemployment rate is adjusted to omit those who aren’t actively seeking work, such as students, retired persons, disabled persons, homemakers, and the like. Unfortunately for our purposes, a minifigure who is transitionally unemployed looks pretty much identical to one who has decided to take an early retirement.

What we can take a stab at is a workforce participation rate: the percentage of people eligible to work who are actually doing so. For our purposes, this means tallying the number of minifigures with identifiable jobs and dividing by the number capable of working, which we will assume means everyone except children. That gives a ballpark of about 74%, dropping to about 68% if we exclude the military from both counts to look only at the civilian economy (the arithmetic is sketched below). Either of these numbers would be somewhat high, but not unexplainably so.
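For anyone who wants to check my arithmetic, here is a minimal sketch of the calculation in plain Python, using the counts from the lists above; the variable names are just my own labels for this post, not anything official.

    # Ballpark workforce participation for the surveyed district.
    total_minifigs = 154
    children = 14
    employed = 27 + 15 + 11 + 9 + 9 + 9 + 8 + 6 + 5 + 5  # 104 figures with identifiable jobs
    military = 27

    working_age = total_minifigs - children               # 140 figures old enough to work
    participation = employed / working_age                # roughly 0.74

    civilian_employed = employed - military               # 77
    civilian_working_age = working_age - military         # 113
    civilian_participation = civilian_employed / civilian_working_age  # roughly 0.68

    print(f"Overall participation: {participation:.0%}")              # Overall participation: 74%
    print(f"Civilian participation: {civilian_participation:.0%}")    # Civilian participation: 68%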

Breakdown by sex

With no distinction between the physical forms of Lego bodies, the differences between the sexes in minifigures are based purely on cosmetic details such as hair type, the presence of eyelashes, makeup, or lipstick on a face, and dresses. This is obviously based on stereotypes, and it makes it tricky to tease apart edge cases. Is the figure with poorly-detailed facial features male or female? What about that faceless conscript marching in formation in their helmet and combat armor? Does dwelling on this topic at length make me some kind of weirdo?

The fact that Lego seems to embellish female characters with stereotypical traits suggests that the default is male. Operating on this assumption gives somewhere between 50 and 70 minifigures with at least one distinguishing female trait, depending on how particular you get with freckles and other minute facial details.

That’s a male to female ratio somewhere between 2.08:1 and 1.2:1 (the arithmetic is sketched below). The latter would be barely within the realm of ordinary populations, and even then would be highly suggestive of some kind of artificial pressure: sex selective abortion, infanticide, widespread gender violence, a lower standard of medical care for girls, or some kind of widespread exposure, whether to pathogens or pollutants, that causes a far higher childhood fatality rate for girls than would otherwise be expected. And here you were thinking that a post about Lego minifigures was going to be a light and gentle read.
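For transparency, the ratio arithmetic is equally simple; here is the same kind of sketch, assuming the low and high female counts from the previous paragraph (again, the names are just labels for this post).

    # Male to female ratio under the low and high estimates of female figures.
    total_minifigs = 154
    for females in (50, 70):
        males = total_minifigs - females
        print(f"{females} apparent female figures -> {males / females:.2f}:1 male to female")
    # Prints ratios of 2.08:1 and 1.20:1 respectively.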

The former ratio is completely unnatural, though not unheard of in real life under certain contrived circumstances: some Gulf states have at times had male to female ratios as high as two owing to the presence of large numbers of guest workers, many from South Asia. In such societies, female breadwinners, let alone women traveling alone to foreign countries to send money home, are virtually unknown.

Such an explanation might be conceivable given a look at the lore of the city. The city is indeed a major trade port and center of commerce, with a non-negligible transient population, and it also hosts a sizable military presence. By the same token, I could simply say that there are more people I'm not counting, hiding inside all those skyscrapers, who make everything come out even. Except that this kind of narrative explanation dodges the question.

The straight answer is that, no, Lego cities are not particularly accurate reflections of our real life cities. This lack of absolute realism does not make Lego bad toys. Nor does it detract from their value as an artistic and storytelling medium, nor from their benefits in play therapy for patients with neuro-cognitive symptoms, which was my original reason for starting my Lego collection.

 

Technological Milestones and the Power of Mundanity

When I was fairly little, probably seven or so, I devised a short list of technologies, based on what I had seen on television, that I reckoned were at least plausible, and which I earmarked as milestones of sorts to measure how far human technology would progress during my lifetime. I estimated that if I were lucky, I would be able to get my hands on half of them by the time I retired. Delightfully, almost all of these have in fact already been achieved, less than fifteen years later.

Admittedly, all of the technologies I picked were far closer than I had envisioned at the time. I lived in Australia, which seemed to be the opposite side of the world from where everything happened, and outside of the truly urban areas of Sydney which, as a consequence of international business, were kept up to date. So even though I technically grew up after the turn of the millennium, it often seems that I was raised in a place and culture closer to the 90s.

For example, as late as 2009, even among adults, not everyone I knew had a mobile phone. Text messaging was still “SMS”, and was generally regarded with suspicion and disdain, not least because not all phones were equipped to handle messages, and not all phone plans included provisions for receiving them. “Smart” phones (still two words) did exist on the fringes; I knew exactly one person who owned an iPhone, and two who owned BlackBerries, at that time. But having one was still an oddity. Our public school curriculum was also notably skeptical, bordering on technophobic, about the rapid shift towards broadband and constant connectivity, diverting much class time to decrying the evils of email and chat rooms.

These were the days when it was a moral imperative to turn off your modem at night, lest the hacker-perverts on the godless web wardial a backdoor into your computer, which weighed as much as the desk it was parked on, or your computer overheat from being left on, and catch fire (this happened to a friend of mine). Mice were wired and had little balls inside them that you could remove in order to sabotage them for the next user. Touch screens might have existed on some newer PDA models, and on some gimmicky machines in the inner city, but no one believed that they were going to replace the workstation PC.

I chose my technological milestones based on my experiences in this environment, and on television. Since most of our television consisted of the same shows that played in the United States, only a few months behind their stateside premieres, the shows tended to be more up to date with the actual state of technology, and depictions of the near future that seemed obvious to an American audience seemed terribly optimistic, even outlandish, to me at the time. So, in retrospect, it is not surprising that after I moved back to the US, I saw nearly all of my milestones become commercially available within half a decade.

Tablet Computers
The idea of a single-surface interface for a computer has been in the popular consciousness almost as long as futuristic depictions of technology themselves. It was an obvious technological niche that, despite numerous attempts, some semi-successful, was never truly cracked until the iPad. True, plenty of tablet computers existed before the iPad. But they were either clunky beyond use, so fragile as to be unusable in practical circumstances, or horrifically expensive.

None of them were practical for, say, completing homework for school on, which at seven years old was kind of my litmus test for whether something was useful. I imagined that if I were lucky, I might get to go tablet shopping when it was time for me to enroll my own children. I could not imagine that affordable tablet computers would be widely available in time for me to use them for school myself. I still get a small joy every time I get to pull out my tablet in a productive niche.

Video Calling
Again, this was not a bolt from the blue. Orwell wrote about his telescreens, which amounted to two-way television, in the 1940s. By the 70s, NORAD had developed a fiber-optic based system whereby commanders could conduct video conferences during a crisis. By the time I was growing up, expensive and clunky video teleconferences were possible. But they had to be arranged and planned, and often required special equipment. Even once webcams started to appear, lessening the equipment burden, you were still often better off calling someone.

Skype and FaceTime changed that, spurred on largely by the appearance of smartphones, and later tablets, with front-facing cameras, which were designed largely for this exact purpose. Suddenly, a video call was as easy as a phone call; in some cases easier, because video calls are delivered over the Internet rather than requiring a phone line and number (something which I did not foresee).

Wearable Technology (in particular smartwatches)
This was the one I was most skeptical of, as I got it mostly from The Jetsons, a show which isn’t exactly renowned for realism or accuracy. An argument can be made that this threshold hasn’t been fully crossed yet, since smartwatches are still niche products that haven’t caught on to the same extent as either of the previous items, and insofar as they can be used for communication as in The Jetsons, they rely on a smartphone or other device as a relay. This is a solid point, to which I have two counterarguments.

First, these are self-centered milestones. The test is not whether an average Joe can afford and use the technology, but whether it has an impact on my life. And my smartwatch, which is affordable enough and functional enough for me to use in an everyday role, does indeed have a noticeable positive impact. Second, while smartwatches may not be as ubiquitous as once portrayed, they do exist, and are commonplace enough to be largely unremarkable. The technology exists and is widely available, whether or not consumers choose to use it.

These were my three main pillars of the future. Other things which I marked down include such milestones as:

Commercial Space Travel
Sure, SpaceX and its ilk aren’t exactly the same as having shuttles to the ISS departing regularly from every major airport, with connecting service to the moon. You can’t have a romantic dinner rendezvous in orbit, gazing at the unclouded stars on one side, and the fragile planet earth on the other. But we’re remarkably close. Private sector delivery to orbit is now cheaper and more ubiquitous than public sector delivery (admittedly this has more to do with government austerity than an unexpected boom in the aerospace sector).

Large-Scale Remotely Controlled or Autonomous Vehicles
This one came from Kim Possible, and a particular episode in which our intrepid heroes reached their remote destination in a borrowed military helicopter flown remotely from a home computer. Today, we have remotely piloted military drones and early self-driving vehicles. This one hasn’t been fully met yet, since I’ve never ridden in a self-driving vehicle myself, but it is on the horizon, and I eagerly await it.

Cyborgs
I did guess that we’d have technologically altered humans, both for medical purposes and as part of the road to the enhanced super-humans that rule in movies and television. I never guessed at seven that in less than a decade I would be one of them, relying on networked machines and computer chips to keep my biological self functioning, plugging into the wall to charge my batteries when they run low, and studiously avoiding magnets, EMPs, and water unless I have planned ahead and am wearing the correct configuration and armor.

This last one highlights an important factor. All of these technologies were, or at least, seemed, revolutionary. And yet today they are mundane. My tablet today is only remarkable to me because I once pegged it as a keystone of the future that I hoped would see the eradication of my then-present woes. This turned out to be overly optimistic, for two reasons.

First, it assumed that I would be happy as soon as the things that bothered me then no longer did, which is a fundamental misunderstanding of human nature. Humans do not remain happy the same way that an object in motion remains in motion until acted upon. Or perhaps it is that, as creatures of constant change and recontextualization, we are always undergoing so much change that remaining happy without constant effort is exceedingly rare. Humans always find more problems that need to be solved. On balance, this is a good thing, as it drives innovation and advancement. But it makes living life as a human rather, well, wanting.

Which lays the groundwork nicely for the second reason: novelty is necessarily fleeting. The advanced technology that today marks the boundary of magic will tomorrow be a mere gimmick, and after that, a mere fact of life. Computers hundreds of millions of times more powerful than those used to wage World War II and send men to the moon are so ubiquitous that they are considered a basic necessity of modern life, like clothes, or literacy, both of which have millennia of incremental refinement and scientific striving behind them.

My picture of the glorious shining future assumed that the things which seemed amazing at the time would continue to amaze once they had become commonplace. This isn’t a wholly unreasonable extrapolation from the available data, even if it is childishly optimistic. Yet it is self-contradictory. The only way such technologies could be harnessed to their full capacity would be for them to become so widely available and commonplace that product developers could integrate them into every possible facet of life. This both requires and establishes a certain level of mundanity about the technology that will eventually break the spell of novelty.

In this light, the mundanity of the technological breakthroughs that define my present life, relative to the imagined future of my past self, is not a bad thing. Disappointing, yes; and certainly it is a sobering reflection on the ungrateful character of human nature. But this very mundanity that breaks our predictions of the future (or at least, our optimistic predictions) is an integral part of the process of progress. Not only does this mundanity constantly drive us to reach for ever greater heights by making us utterly irreverent of those we have already achieved, but it allows us to keep evolving our current technologies to new applications.

Take, for example, wireless internet. I remember a time, or at least a place, when wireless internet did not exist for practical purposes. “Wi-Fi” as a term hadn’t caught on yet; in fact, I remember the publicity campaign that was undertaken to educate our technologically backwards selves about what the term meant, about how it wasn’t dangerous, and about how it would make all of our lives better, since we could connect to everything. Of course, at that time I didn’t know anyone outside of my father’s office who owned a device capable of connecting to Wi-Fi. But that was beside the point. It was the new thing. It was a shiny, exciting novelty.

And then, for a while, it was a gimmick. Newer computers began to advertise their Wi-Fi antennas, boasting that a wireless connection was as good as being connected by cable. Hotels and other establishments began to advertise Wi-Fi connectivity. Phones began to connect to Wi-Fi networks, which allowed them to truly connect to the internet even without a data plan.

Soon, Wi-Fi became not just a gimmick, but a standard. First computers, then phones, became obsolete without it. Customers began to expect Wi-Fi as a standard accommodation wherever they went, for free even. Employers, teachers, and organizations began to assume that the people they were dealing with would have Wi-Fi, and therefore that everyone in the house would have internet access. In ten years, the prevailing attitude around me went from “I wouldn’t feel safe having my kid playing in a building with that new Wi-Fi stuff” to “I need to make sure my kid has Wi-Fi so they can do their schoolwork”. Like television, telephones, and electricity, Wi-Fi became just another thing that needed to be had in a modern home. A mundanity.

Now, that very mundanity is driving a second wave of revolution. The “Internet of Things”, as it is being called, is using the Wi-Fi networks already in place in every modern home to add more niche devices and appliances. We are told to expect that soon every major device in our house will be connected to our personal network, controllable either from our mobile devices or by voice, and soon gesture, if not through the devices themselves, then through artificially intelligent home assistants (Amazon Echo, Google Home, and the like).

It is important to realize that this second revolution could not take place while Wi-Fi was still a novelty. No one who wasn’t already sold on Wi-Fi would have bought it just because it could also control the sprinklers, or the washing machine, or what have you. Wi-Fi had to become established as a mundane building block before it could be used as the cornerstone of this latest innovation.

Research and development may be focused on the shiny and novel, but technological progress on a species-wide scale depends just as much on this mundanity. Breakthroughs have to be not only helpful and exciting, but useful in everyday life, and cheap enough to be usable by everyday consumers. It is easy to get swept up in the exuberance of what is new, but the revolutionary changes happen when those new things are allowed to become mundane.

The Moral Hazard of Hope


This post is part of the series The Debriefing.


Suppose that five years from today, you would receive an extremely large windfall. The exact number isn’t important, but let’s just say it’s large enough that you’ll never have to budget again. Not technically infinite, because that would break everything, but for the purposes of one person, basically undepletable. Let’s also assume that this money becomes yours in such a way that it can’t be taxed or swindled away from you. This is also an alternate universe where inheritance and estates don’t exist, so there’s no scheming among family, and no point in considering them in your plans. Just roll with it.

No one else knows about it, so you can’t borrow against it, nor is anyone going to treat you differently until you have the money. You still have to be alive in five years to collect and enjoy your fortune. Freak accidents can still happen, and you can still go bankrupt in the interim, or get thrown in prison, or whatever, but as long as you’re around to cash the check five years from today, you’re in the money.

How would this change your behavior in the interim? How would your priorities change from what they are?

Well, first of all, you’re probably not going to invest in retirement, or long term savings in general. After all, you won’t need to. In fact, further saving would be foolish. You’re not going to need that extra drop in the bucket, which means saving it would be wasting it. You’re legitimately economically better off living the high life and enjoying yourself as much as possible without putting yourself in such severe financial jeopardy that you would be increasing your chances of being unable to collect your money.

If this seems insane, it’s important to remember that your lifestyle and enjoyment are quantifiable economic factors (the keyword is “utility”) that weigh against the (relative and ultimately arbitrary) value of your money. This is the whole reason people buy things they don’t strictly need to survive, and why rich people spend more money than poor people despite not being physiologically different. Because any money you save is basically worthless, and your happiness still has value, buying happiness, expensive and temporary though it may be, is always the economically rational choice.

This is tied to an important economic concept known as moral hazard: a condition where the normal risks and costs involved in a decision fail to apply, encouraging riskier behavior. I’m stretching the idea a little here, since it usually refers to more direct situations. For example, if I have a credit card that my parents pay for, to be used “for emergencies”, and I know I’m never going to see the bill, because my parents care more about our family’s credit score than about most anything I would think to buy, then that’s a moral hazard. I have very little incentive to do the “right” thing, and a lot of incentive to do whatever I please.

There are examples in macroeconomics as well. For instance, many say that large corporations in the United States are caught in a moral hazard problem because they know that they are “too big to fail”, and will be bailed out by the government if they get into serious trouble. As a result, these companies may be encouraged to make riskier decisions, knowing that any profits will be massive, and any losses will be passed along.

In any case, the idea is there. When the consequences of a risky decision become uncoupled from the reward, it can be no surprise when rational actors make riskier decisions (a toy example follows). If you know that in five years you’re going to be basically immune to any hardship, you’re probably not going to prepare for the long term.
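To make the decoupling concrete, here is a tiny sketch with entirely made-up numbers (nothing in the scenario above specifies them); the only point is that once the downside stops landing on the decision maker, the risky option starts to look rational.

    # Toy illustration of moral hazard: hypothetical payoffs, not real data.
    p_bad = 0.2                       # chance the risky choice goes wrong
    reward = 100                      # payoff if it goes well (arbitrary units)
    personal_loss_coupled = 400       # loss you eat when you bear the consequences
    personal_loss_decoupled = 10      # loss you eat when someone else absorbs the damage

    def expected_value(loss):
        # Expected payoff of taking the risk, given the loss you personally face.
        return (1 - p_bad) * reward - p_bad * loss

    print(expected_value(personal_loss_coupled))    # 0.0  -> risk not worth taking
    print(expected_value(personal_loss_decoupled))  # 78.0 -> risk looks like a great deal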

Now let’s take a different example. Suppose you’re rushed to the hospital after a heart attack, and diagnosed with a heart condition. The condition is minor for now, but could get worse without treatment, and will get worse as you age regardless.

The bad news is, in order to avoid having more heart attacks, and possible secondary circulatory and organ problems, you’re going to need to follow a very strict regimen, including a draconian diet, a daily exercise routine, and a series of regular injections and blood tests.

The good news, your doctor informs you, is that the scientists, who have been tucked away in their labs and getting millions in yearly funding, are closing in on a cure. In fact, there’s already a new drug that’s worked really well in mice. A researcher giving a talk at a major conference recently showed a slide of a timeline that estimated FDA approval in no more than five years. Once you’re cured, assuming everything works as advertised, you won’t have to go through the laborious process of treatment.

The cure drug won’t help if you die of a heart attack before then, and it won’t fix any problems with your other organs if your heart gets bad enough that it can’t supply them with blood, but otherwise it will be a complete cure, as though you were never diagnosed in the first place. The nurse discharging you tells you that, since most organ failure doesn’t appear until patients have been sick for at least a decade, you’ll be fine so long as you can avoid dying for half that long.

So, how are you going to treat this new chronic and life threatening disease? Maybe you will be the diligent, model patient, always deferring to the most conservative and risk averse advice in the medical literature, certainly hopeful for a cure, but not willing to bet your life on a grad student’s hypothesis. Or maybe, knowing nothing else on the subject, you will trust what your doctor told you, and your first impression of the disease, getting by with as little invasive treatment as you can get away with while still avoiding death and being called out by your medical team for being “noncompliant” (referred to in chronic illness circles in hushed tones as “the n-word”).

If the cure does come in five years, as happens only in stories and fantasies, then either way, you’ll be set. The second version of you might be a bit happier from having more fully sucked the marrow out of life. It’s also possible that the second version would have had to endure another (probably non-fatal) heart attack or two, and dealt with more day to day symptoms like fatigue, pains, and poor circulation. But you never would have really lost anything for being the n-word.

On the other hand, if by the time five years have elapsed the drug hasn’t gotten approval, or, quite possibly, hasn’t gotten close after the researchers discovered that curing a disease in mice didn’t also cure it in humans, then the differences between the two versions of you are going to start to compound. They may not even be noticeable after five years. But after ten, twenty, thirty years, the second version of you is going to be worse for wear. You might not be dead. But there’s a much higher chance that you will have had several more heart attacks, and possibly other problems as well.

This is a case of moral hazard, plain and simple, and it does appear in the attitudes of patients with chronic conditions that require constant treatment. The fact that, in this case, the perception of a lack of risk and consequences is a complete fantasy is not relevant. All risk analyses depend on the information that is given and available, not on whatever the actual facts may be. We know that the patient’s decision is ultimately misguided because we know the information they are being given is false, or at least, misleading, and because our detached perspective allows us to take a dispassionate view of the situation.

The patient does not have this information or perspective. In all probability, they are starting out scared and confused, and want nothing more than to return to their previous normal life with as few interruptions as possible. The information and advice they were given, from a medical team that they trust, and possibly have no practical way of fact checking, has led them to believe that they do not particularly need to be strict about their new regimen, because there will not be time for long term consequences to catch up.

The medical team may earnestly believe this. It is the same problem one level up; the only difference is, their information comes from pharmaceutical manufacturers, who have a marketing interest in keeping patients and doctors optimistic about upcoming products, and researchers, who may be unfamiliar with the hurdles in getting a breakthrough from the early lab discoveries to a consumer-available product, and whose funding is dependent on drumming up public support through hype.

The patient is also complicit in this system that lies to them. Nobody wants to be told that their condition is incurable, and that they will be chronically sick until they die. No one wants to hear that their new diagnosis means they will either die early or live long enough for their organs to fail, because even the most rigid medical plan, with the tools available, simply cannot completely mimic the human body’s natural functions. Indeed, it can be argued that telling a patient they will still suffer long term complications in ten, twenty, or thirty years, almost regardless of their actions today, will have much the same effect as telling them that they will be healthy regardless.

Given the choice between two extremes, optimism is obviously the better policy. But this policy does have a tradeoff. It creates a moral hazard of hope. Ideally, we would be able to convey an optimistic perspective that also maintains an accurate view of the medical prognosis, and balances the need for bedside manner with incentivizing patients to take the best possible care of themselves. Obviously this is not an easy balance to strike, and the balance will vary from patient to patient. The happy-go-lucky might need to be brought down a peg or two with a reality check, while the nihilistic might need a spoonful of sugar to help the medicine go down. Finding this middle ground is not a task to be accomplished by a practitioner at a single visit, but a process to be achieved over the entire course of treatment, ideally with a diverse and well experienced team including mental health specialists.

In an effort to finish on a positive note, I will point out that this is already happening, or at least, is already starting to happen. As interdisciplinary medicine gains traction, as patient mental health becomes more of a focus, and as patients with chronic conditions begin to live longer, more hospitals and practices are working to ensure that a positive and constructive mindset for self-care is a priority, alongside educating patients on the actual logistics of self-care. Support is easier to find than ever, especially with organized patient conferences and events. This problem, much like the conditions that cause it, is chronic, but it is manageable with effort.