World Health Day

The following message is part of a campaign to raise public awareness and resources in light of the global threat posed by COVID-19 on World Health Day. If you have the resources, please consider contributing in any of the ways listed at the end of this post. Remember to adhere to current local health guidelines wherever you are, which may differ from those referenced in this post. 

Now that the world has woken up to the danger that we face in the COVID-19 pandemic, and world leaders have begun to grapple with the problem in policy terms, many individuals have justifiably wondered how long this crisis will last. The answer is: we don’t know. I’m going to repeat this several times, because it’s important to come to terms with it. For all meaningful purposes, we are living through an event that has never happened before. Yes, there have been pandemics this bad in the distant past, and yes, there have been various outbreaks in recent memory, but there has not been a pandemic this deadly and this contagious, which we have failed to contain so spectacularly, recently enough to use as a clear point of reference. This means that every prediction is not just speculation, but speculation born of an imperfect mosaic.

Nevertheless, it seems clear that unless we are willing to accept tens of millions of deaths in every country, humanity will need to settle in for a long war. To borrow the language of the US President and Queen Elizabeth, the metaphor is apt. Whether “long” means a few months or well into next year will depend on several factors, among them whether a culture which has for many decades been inculcated with the notion of personal whimsy and convenience is able to adapt to collective sacrifice. The longer we take to accept the gravity of the threat, the weaker our response will be, and the longer it will take us to recover. Right now all of humanity faces a collective choice. Either we stubbornly ignore reality, and pay the price with human tragedy of heretofore unimaginable proportions and repercussions for decades to come, or we listen to the experts, hunker down, give support to those who need it, and help each other through the storm.

For those who look upon empty streets and bare shelves and proclaim the apocalypse, I have this to say: it is only the apocalypse if we make it such. Granted, it is conceivable that if we lose sight of our goals and our capabilities, whether through blind panic or stubborn ignorance, we may find the structures of our society overwhelmed, and the world we know may collapse. This is indeed a possibility, but one which it is entirely within our collective capacity to avoid. The data clearly shows that by taking care of ourselves at home, and avoiding contact with other people or surfaces, we can slow the spread of the virus. With the full mobilization of communities, we can starve the infection of new victims entirely. But even a partial slowing of cases buys us time. With that most valuable of currencies, we can expand hospital capacity, retool our production, and focus our tremendous scientific effort towards forging new weapons in this fight.

Under wartime pressure, the global scientific community is making terrific strides. Every day, we are learning more about our enemy, and discovering new ways to give ourselves the advantage. Drugs which prove useful are being deployed as fast as they can be produced. With proper coordination from world leaders, production of these drugs can be expanded to give every person the best fighting chance should they become sick. The great challenges now are staying the course, winning the battle for production, and developing humanity’s super weapon.

Staying the course is fairly simple. For the average individual not working essential jobs, it means staying home, avoiding contact as much as possible, and taking care to stay healthy. For communities and organizations, it means encouraging people to stay at home by making this as easy as possible. Those working essential jobs should be given whatever resources they need to carry on safely. Those staying at home need to have the means to do so, both logistically and psychologically. Logistically, many governments are already instituting emergency financial aid to ensure the many people out of work are able to afford staying home, and many communities have used volunteers or emergency workers such as national guard troops to support deliveries of essentials, in order to keep as many people as possible at home. Psychologically, many groups are offering online activities, and many public figures have taken to providing various forms of entertainment and diversion.

Winning the battle for production is harder, but still within reach. Hospitals are very resource intensive at the best of times. Safety in a healthcare setting means the use of large amounts of single-use disposable materials, not only drugs and delivery mechanisms, but also personal protective equipment such as masks, gowns, and gloves. If COVID-19 is a war, ventilators are akin to tanks, but PPE is akin to ammunition. Just as it is counterproductive and harmful to ration how many bullets or grenades a soldier may use to win a battle, so too is it counterproductive and harmful to insist that our frontline healthcare workers make do with a limited amount of PPE.

The size and scope of the present crisis, taken with the amount of time we have to act, demands a global industrial mobilization unprecedented during peacetime, and unseen in living memory. It demands either that individuals exhibit self-discipline and a regard for the common good, or that central authorities control the distribution of scarce necessities. It demands that we examine new ways of meeting production needs while minimizing the number of people who must be kept out working essential jobs. For the individual, this mobilization may require further sacrifice; during the mobilization of WWII, certain commodities such as automobiles, toys, and textiles were unavailable or out of reach. This was the price we paid to beat back the enemy at the gates, and today we find ourselves in a similar position. All of these measures are more effective if taken calmly in advance by central government, but if they are not, they will undoubtedly be taken desperately by local authorities.

Lastly, there is the challenge of developing a tool which will put an end to the threat of millions of deaths. In terms of research, there are several avenues which may yield fruit. Many hopes are pinned on a vaccine, which would grant immunity to the uninfected, and allow us to contain the spread without mass quarantine. Other researchers are looking for a drug, perhaps an antiviral or immunomodulator, which might make COVID-19 treatable at home with a pill, much like Tamiflu blunted the worst of H1N1. Still others are searching for antibodies which could be synthesized en masse, to be infused into the blood of vulnerable patients. Each of these leads requires a different approach. However, they all face the common challenge of not only proving safety and effectiveness against COVID-19, but also giving us an understandable mechanism of action.

Identifying the “how and why” is not merely of great academic interest, but a pressing medical concern. Coronaviruses are notoriously unstable and prone to mutation; indeed, there are those who speculate that COVID-19 may be more than one strain. Finding a treatment or vaccine without understanding our enemy exposes us to the risk of other strains emerging, undoing our hard work and invalidating our collective sacrifices. Cracking the COVID-19 code is a task of great complexity, requiring a combination of human insight and brilliance, bold experimentation, luck, and enormous computational resources. And as with the Allied efforts against the German Enigma, today’s computer scientists have given us a groundwork to build on.

Unraveling the secrets of COVID-19 requires modeling how viral proteins fold and interact with other molecules and proteins. Although protein folding follows fairly simple rules, the computational power required to actually simulate it is enormous. For this, scientists have developed the Folding@Home distributed computing project. Rather than constructing a new supercomputer which would exceed all past attempts, this project aims to harness the power of unused personal computers in a decentralized network. Since the beginning of March, Folding@Home has focused its priorities on COVID-19 related modeling, and has been inundated with people donating computing power, to the point that it had to get help from other web services companies because simulations were being completed faster than its web servers could assign new ones.

At the beginning of March, the computing power of the entire project clocked in at around 700 petaflops (FLOPS, or floating point operations per second, being a measure of computing power). During the Apollo moonshot, a NASA supercomputer would average somewhere around 100,000 FLOPS. Two weeks ago, the project announced a new record in the history of computing: more than an exaflop, or 10^18 FLOPS, of sustained distributed computing power. With the help of Oracle and Microsoft, by the end of March Folding@Home exceeded 1.5 exaflops. These historic and unprecedented feats are a testament to humanity’s ability to respond to a challenge. Every day this capacity is maintained or exceeded brings us closer to breaking the viral code and ending the scourge.
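To put these orders of magnitude side by side, here is a small sketch using only the figures cited above (the variable names are mine, for illustration):

```python
# Comparing the computing scales mentioned above, in FLOPS
# (floating point operations per second). All figures are the
# ones cited in the text.
apollo_flops = 1e5           # ~100,000 FLOPS, Apollo-era NASA computer
march_start_flops = 700e15   # ~700 petaflops, early March
fah_peak_flops = 1.5e18      # ~1.5 exaflops, end of March

# How many Apollo-era computers would equal the peak network?
print(f"Peak vs Apollo era: {fah_peak_flops / apollo_flops:.1e}x")

# How much did the network grow over March?
print(f"Growth during March: {fah_peak_flops / march_start_flops:.2f}x")
```

Even the roughly twofold growth within a single month is remarkable; the thirteen-orders-of-magnitude gap to the Apollo era shows how far distributed volunteer computing has come.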

Humanity’s great strength has always lain in our ability to learn, and to take collective action based on reason. Beating back COVID-19 will entail a global effort, in which every person has an important role to play. Not all of us can work in a hospital or a ventilator factory, but there’s still a way each of us can help. If you can afford to donate money, the World Health Organization’s Solidarity Fund is coordinating humanity’s response to the pandemic. Folding@Home is using the power of your personal computer to crack the COVID-19 code. And if nothing else, every person who stays healthy by staying home, washing hands, wearing homemade masks and keeping social distance is one less person to treat in the ICU.

Incremental Progress Part 3 – For Science!

Previously, I have talked about some of the ways that patients with chronic health issues and medical disabilities are impacted by the research cycle. Part one of this ongoing series detailed a discussion I participated in at an ad-hoc support group of 18-21 year olds at a major health conference. Part two detailed some of the things I wish I had gotten a chance to add, based on my own experiences and the words of those around me, but never did due to time constraints.

After talking at length about the patient side of things, I’d like to pivot slightly to the clinical side. If we go by what most patients know about the clinical research process, here is a rough picture of how things work:

First, a conclave of elite doctors and professors gathers in secret, presumably in a poorly lit conference room deep beneath the surface of the earth, and holds a brainstorming session of possible questions to study. Illicit substances may or may not be involved in this process, as the creativity required to come up with such obscure and esoteric concerns as “why do certain subspecies of rats have funny looking brains?” and “why do stressful things make people act stressed out?” is immense. At the end of the session, all of the ideas are written down on pieces of parchment, thrown inside a hat, and drawn randomly to decide who will study what.

Second, money is extracted from the public at large by showing people on the street pictures of cute, sad looking children being held at needle-point by an ominously dressed person in a lab coat, with the threat that unless that person hands over all of their disposable income, the child will be forced to receive several injections per day. This process is repeated until a large enough pile of cash is acquired. The cash is then passed through a series of middlemen in dark suits smoking cigars, who all take a small cut for all their hard work of carrying the big pile of cash.

At this point, the cash is loaded onto a private jet and flown out to the remote laboratories hidden deep in the Brazilian rainforests, the barren Australian deserts, the lost islands of the arctic and Antarctic regions, and inside the active volcanoes of the pacific islands. These facilities are pristine, shining snow white and steel grey, outfitted with all the latest technology from a mid-century science fiction film. All of these facilities are outfitted either by national governments, or the rich elite of major multinational corporations, who see to all of the upkeep and grant work, leaving only the truly groundbreaking work to the trained scientists.

And who are the scientists? The scientist is a curious creature. First observed in 1543, the scientist was hypothesized by naturalists to be a former human transmogrified by the devil himself in a Faustian bargain, whereby the subject loses most interpersonal skills and material wealth in exchange for incredible intelligence and a steady, monotonous career playing with glassware and measuring equipment. No one has ever seen a scientist in real life, although much footage exists of the scientist online, usually flaunting its immense funding and wearing its trademark lab coat and glasses. Because of the abundance of such footage, yet lack of real-life interactions, it has been speculated that scientists may possess some manner of cloaking which renders them invisible and inaudible outside of their native habitat.

The scientists spend their time exchanging various colored fluid between Erlenmeyer flasks and test tubes, watching to see which produces the best colors. When the best colors are found, a large brazier is lit with all of the paper currency acquired earlier. The photons from the fire reaction may, if the stars are properly aligned, hit the colored fluid in such a way as to cause the fluid to begin to bubble and change into a different color. If this happens often enough, the experiment is called a success.

The scientists spend the rest of their time meticulously recording the precise color that was achieved, which will provide the necessary data for analyst teams to divine the answers to the questions asked. These records are kept not in English, or any other commonly spoken language, but in Scientific, which is written and understood by only a handful of non-scientists, mainly doctors, teachers, and engineers. The process of translation is arduous, and in order to be fully encrypted requires several teams working in tandem. This process is called peer review, and, at least theoretically, this method makes it far more difficult to publish false information, because the arduousness of the process provides an insurmountable barrier to those motivated by anything other than the purest truth.

Now, obviously all of this is complete fiction. But the fact that I can make all of this up with a straight face speaks volumes, both about the lack of public understanding of how modern clinical research works, and about the lack of transparency of the research itself. For as much as we cheer on the march of scientific advancement and technological development, for as much media attention as is spent on new results hot off the presses, and for as much as the stock image of the bespectacled expert adorned in a lab coat and armed with test tubes resounds in both popular culture and the popular consciousness, the actual details of what research is being done, and how it is being executed, are notably opaque.

Much of this is by design, or is a direct consequence of how research is structured. The scientific method by which we separate fact from fiction demands a level of rigor that is often antithetical to human nature, which requires extreme discipline and restraint. A properly organized double-blind controlled trial, the cornerstone of true scientific research, requires that the participants and even the scientists measuring results be kept in the dark as to what they are looking for, to prevent even the subtlest of unconscious biases from interfering. This approach, while great at testing hypotheses, means that the full story is only known to a handful of supervisors until the results are ready to be published.
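The blinding described above is, at its core, a bookkeeping discipline: the link between a participant and their treatment arm is held only by a coordinator until the data is in. A minimal sketch, assuming a hypothetical two-arm trial (the function and names are mine, purely illustrative, not a real trial protocol):

```python
# Illustrative double-blind assignment: participants and scorers see
# only opaque codes; the key mapping codes to arms is sealed with the
# coordinator until data collection ends.
import random

def blind_assign(participants, arms=("treatment", "placebo"), seed=0):
    rng = random.Random(seed)     # fixed seed for a reproducible sketch
    assignments = {}              # participant -> code (visible to all)
    key = {}                      # code -> arm (coordinator only)
    for i, participant in enumerate(participants):
        code = f"SUBJ-{i:03d}"
        assignments[participant] = code
        key[code] = rng.choice(arms)
    return assignments, key

assignments, key = blind_assign(["alice", "bob", "carol", "dave"])
# Scorers record results against codes like "SUBJ-002"; only at
# unblinding is `key` opened to reveal which code got which arm.
```

The point of the separation is exactly the one in the paragraph above: nobody measuring outcomes can tell, even unconsciously, which results “should” look better.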

The standard of scientific writing is also incredibly rigorous. In professional writing, a scientist is not permitted to make any claims or assumptions unless either they have just proven it themselves, in which case they are expected to provide full details of their data and methodology, or can directly cite a study that did so. For example, a scientist cannot simply say that the sky is blue, no matter how obvious this may seem. Nor even can a scientist refer to some other publication in which the author agreed that the sky is blue, like a journalist might while providing citations for a story. A scientist must find the original data proving that the sky is blue, that it is consistently blue, and so forth, and provide the documentation for others to cross check the claims themselves.

These standards are not only obligatory for those who wish to receive recognition and funding; they are enforced for accreditation and publication in the first place. This mindset has only become more entrenched as economic circumstances have caused funding to become more scarce, and as political and cultural pressure have cast doubts on “mainstream institutions” like academia and major research organizations. Scientists are trained to make only the most defensible claims, in the most impersonal of words, and only in the narrow context which they are responsible for studying. Unfortunately, although this process is unquestionably effective at testing complex hypotheses, it is antithetical to the nature of everyday discourse.

It is not, as my colleague said during our conference session, that “scientists suck at marketing”, but rather that marketing is fundamentally incongruous with the mindset required for scientific research. Scientific literature ideally attempts to lay out the evidence with as little human perspective as possible, and let the facts speak for themselves, while marketing is in many respects the art of conjuring and manipulating human perspective, even where such perspectives may diverge from reality.

Moreover, the consumerist mindset of our capitalist society amplifies this discrepancy. The constant arms race between advertisers, media, and political factions means that we are awash in information. This information is targeted to us, adjusted to our preferences, and continually served up on a silver platter. We are taught that our arbitrary personal views are fundamentally righteous, that we have no need to change our views unless it suits us, and that if there is really something that requires any sort of action or thought on our part, that it will be similarly presented in a pleasant, custom tailored way. In essence, we are taught to ignore things that require intellectual investment, or challenge our worldview.

There is also the nature of funding. Because it is so difficult to ensure that trials are actually controlled, and to write the results in such a counterintuitive way, the costs of good research can be staggering, and finding funding can be a real struggle. Scientists may be forced to work under restrictions, or to tailor their research to only the most profitable applications. Results may not be shared to prevent infringement, or to ensure that everyone citing the results is made to pay a fee first. I could spend pages on different stories of technologies that could have benefited humanity, but were kept under wraps for commercial or political reasons.

But of course, it’s easy to rag on antisocial scientists and pharmaceutical companies. And it doesn’t really get to the heart of the problem. The problem is that, for most patients, especially those who aren’t enrolled in clinical trials, and don’t necessarily have access to the latest devices, the whole world of research is a black hole into which money is poured with no apparent benefit in return. Maybe if they follow the news, or hear about it from excited friends and relations (see previous section), they might be aware of a few very specific discoveries, usually involving curing one or two rats out of a dozen tries.

Perhaps, if they are inclined towards optimism, they will be able to look at the trend over the last several decades towards better technology and better outcomes. But in most cases, the everyday noticeable changes seem to arrive only long after they have become obvious to users. The process from patient complaints about a medical device to a market product is agonizingly slow, especially in non-critical areas like usability and quality of life, which do not carry the same profit incentive for insurers to apply pressure.

Many of these issues aren’t research problems so much as manufacturing and distribution problems. The bottleneck in making most usability tweaks, the ones that patients notice and appreciate, isn’t in research, or even usually in engineering, but in getting a whole new product approved by executives, shareholders, and of course, regulatory bodies. (Again, this is another topic that I could, and probably will at some future date, rant on about for several pages, but suffice it to say that when US companies complain about innovation being held up by the FDA, their complaints are not entirely without merit).

Even after such processes are eventually finished, there is the problem of insurance. Insurance companies are, naturally, incredibly averse to spending money on anything unless and until it has been proven beyond a shadow of a doubt that it is not only safe, but cost effective. Especially for basic, low income plans, change can come at a glacial pace, and for state-funded services, convincing legislators to adjust statutes to permit funding for new innovations can be a major political battle. This doesn’t even begin to take into account the various negotiated deals and alliances between certain providers and manufacturers that make it harder for new breakthroughs to gain traction (Another good topic for a different post).

But these are economic problems, not research problems. For that matter, most of the supposed research problems are really perception problems. So why am I talking about markets and marketing when I said I was going to talk about research?

Because for most people, the notions of “science” and “progress” are synonymous. We are constantly told, by our politicians, by our insurers, by our doctors, and by our professors that not only do we have the very best level of care that has ever been available in human history, but that we also have the most diligent, most efficient, most powerful organizations and institutions working tirelessly on our behalf to constantly push forward the frontier. If we take both of these statements at face value, then it follows that anything that we do not already have is a research problem.

For as much talk as there was during our conference sessions about how difficult life was, how so very badly we all wanted change, and how disappointed and discouraged we have felt over the lack of apparent progress, it might be easy to overlook the fact that far better technologies than are currently used by anyone in that room already exist. At this very moment, there are patients going about their lives using systems that amount to AI-controlled artificial organs. These systems react faster and more accurately than humans could ever hope to, and the clinical results are obvious.

The catch? None of these systems are commercially available. None of them have even been submitted to the FDA. A handful of these systems are open source DIY projects, and so can be cobbled together by interested patients, though in many cases this requires patients to go against medical advice, and take on more engineering and technical responsibility than is considered normal for a patient. Others are in clinical trials, or more often, have successfully completed their trials and are waiting for manufacturers to begin the FDA approval process.

This bottleneck, combined with the requisite rigor of clinical trials themselves, is what has given rise to the stereotype that modern research is primarily chasing after its own tail. This perception makes even realistic progress seem far off, and makes it all the more difficult to appreciate what incremental improvements are released.