Re: John Oliver

So I had a bunch of things to say this week. I was actually planning a gag where I was going to shut down part of the site for “Internet Maintenance Day”. Then stuff happened that I felt I wanted to talk about more urgently. Then more stuff happened, and I had to bump back the queue again. Specifically, with regard to that last one, John Oliver released a new episode that I have to talk about.

If you don’t care to watch, the central thesis of the episode is, in a nutshell, that our medical device regulation system sucks and needs to be more robust. And he’s not wrong. The FDA is overstretched, underfunded, strung up by political diktats written by lobbyists, and above all, beset by brain drain caused by decades of bad faith and political badmouthing. The pharmaceutical and biotech lobby has an outsized influence on the legislation (as well as executive orders and departmental regulations) that are supposed to govern them.

But, and I’m going to repeat this point, the system isn’t broken. Don’t get me wrong, it’s hardly functional either, but these problems are far more often ones of execution than of structure. 

Let’s take the 510(k) pathway that is so maligned in the episode. The way it’s presented makes it seem like such a bad idea that surely this loophole must be closed. And I’ll agree that the way it’s being exploited is patently unsafe and needs to be stopped. But the measure makes sense under far narrower circumstances. To use an example from real life, take insulin pumps. Suppose a pump manufacturer realizes that it’s replacing a high number of devices because of cracked screens and cases occurring in everyday use. It takes the issue to its engineers, who spend a few days in AutoCAD designing a new chassis with reinforced corners and a better screen that’s harder to crack. The guts of the pump, the parts that deliver insulin and treat patients, are unchanged. From a technical perspective, this is the equivalent of switching phone cases.

Now, what kind of vetting process should this device, which is functionally identical to the previous iteration aside from an improved casing, have to go through before the improved model can be shipped out to replace the current flawed devices? Surely it would be enough to show that the changes are purely cosmetic, perhaps with some documentation about the new case and its materials. This is the kind of scenario where a 510(k)-style fast track is good for everyone: it saves time and taxpayer money for regulators, it gets the company’s product out the door faster, and consumers get a sturdier, better device sooner. This is why having that path is a good idea.

Not that the FDA is likely to apply section 510(k) in this scenario. Insulin pumps tick all the boxes to make them some of the most regulated devices in existence, even more so than most surgical implants. Any upgrade to insulin pumps, no matter how inconsequential, or how urgently needed by patients, is subject to months of testing, clinical trials, reviews, and paperwork. The FDA can, and regularly does, send applications back for further testing because they haven’t proven beyond a shadow of a doubt that there is no risk. As a result, improvements to crucial life-support devices are artificially slowed by regulations and by the market’s reaction to those regulations.

Here’s the other thing to remember about medical devices: for as much as we hear about the costs of prematurely releasing devices, there is also a cost to delaying them. And frustratingly, the ones with the greatest cost of delay (devices like insulin pumps, monitors, and other life support) tend to be subject to the greatest scrutiny, and hence the longest delays. While the FDA examines numbers and research data, real patients continue to suffer and die for want of better care. We might prevent harm by slowing down the rollout of new technologies, but we must acknowledge that we are also consigning people to preventable harm by denying them newer devices. Some argue that this is morally preferable. I staunchly disagree. More than just trying to protect people from themselves, we are denying desperate people the hope of a better life. We are stifling innovation and autonomy for the illusion of security. This isn’t just unhelpful and counterproductive; I would argue it’s downright un-American.

Rest assured I’m not about to join the ranks of the anarchists in calling for the abolition of regulatory agencies. The FDA is slow, inefficient, and in places corrupt, but this is due, as much as anything, to cuts in funding, usually made by those who claim to be streamlining innovation; those cuts have limited the agency’s ability to fulfill its mandate and, ironically, made processing applications slower. A lack of respect for the agency, its job, and the rules it follows has emboldened unscrupulous companies to bend rules to their breaking point and commit gross violations of scientific and ethical standards in pursuit of profit. Because of that same lack of resources, and a political climate actively hostile to regulatory action, the FDA and the agencies responsible for enforcement have been left largely unable to enforce their own rules.

Cutting regulations is not the answer. Improving and reforming the FDA is not a bad idea, but the measures supported (and implied to be supported) by John Oliver are more likely to delay progress for those who need it than to solve the issues at hand. A half-informed, politically driven moral panic will only lead to bad regulations, which, aside from causing collateral damage, are likely to be gutted at the next changing of the guard, putting us right back where we started. I like to use the phrase “attacking a fly with a sledgehammer”, but I think this is more a case of “attacking a fly with a rapier”, in that it will cause massive collateral damage and probably still miss the fly in the end.

So, how do we do it right? Well, first of all, better funding for the FDA, with an eye towards attracting more and better candidates to work as regulators. Done right, this will make the review process not only more robust but more efficient, with shorter turnaround times for devices. It might also be a good idea to look into reducing or even abolishing some application fees, especially for applications that follow high standards for clinical trials and have the paper trail to prove their ethics. At present, application fees are kept high to bring in revenue and make up for budget cuts to the agency. Although this arguably does some good by putting the cost of regulation on the industry, and hopefully incentivizing quality applications, it constrains the resources available for investigating applications and gives applying companies undue influence over application studies.

Second, we need to discard this silly notion of a regulatory freeze. Regardless of how one feels about regulations, I would hope we can all agree that they should at least be clear and up to date enough to deal with modern realities. And that means more regulations amending and clarifying old ones, and addressing new realities as they crop up. There should also be greater emphasis on enforcement, particularly during the early application process. The penalties for submissions that intentionally misclassify devices need to be high enough to act as a deterrent. Exceptions like section 510(k) need to be kept as exceptions, for special extenuating circumstances, rather than remaining open loopholes. And violating research standards to produce intentionally misleading data needs to be treated far more seriously, perhaps with criminal penalties. This requires not only regulatory and enforcement powers, which already exist on the books, but the political will to see abusers held to account.

Third, there needs to be a much greater emphasis on post-market surveillance; that is, continued testing, auditing, and review of products after they reach consumers. This seems obvious, and from conversations with the uninitiated, I suspect it’s where most people believe the FDA spends most of its effort. But as the regulations are written, and certainly as they’re enforced in practice, post-market surveillance is almost an afterthought. Most of it is handled by the manufacturers themselves, who have an alarming amount of latitude in their reporting. I would submit that it is this, the current lack of post-market surveillance, rather than administrative classifications, that is the gaping hole in our medical regulatory system.

This is also a much harder sell, politically. Industry hates it, because robust surveillance often prevents manufacturers from quietly cutting manufacturing costs after approval in ways that would degrade product quality, and it means they have to keep on extra QA staff for as long as they remain in business. It’s also expensive for industry, because the current setup puts most of the cost on manufacturers. Plenty of politicians also hate post-market surveillance, since it is a role that is ideally redundant when everyone does their job. When something goes wrong, we say it shouldn’t have been sold in the first place, and when nothing goes wrong, why would we pay people to keep running tests?

Incidentally, from what I have been led to understand, this is a major difference between US and EU regulatory processes. Drugs and devices tend to come out commercially in the EU before the US, because the US puts all of its eggs in the basket of premarket approval (and underfunds that process to boot), while the EU will approve innovations that are “good enough”, with the understanding that if problems show up down the line, the system will swoop in and those at fault will be held accountable. As a result, European consumers have safe and legal access to technologies still restricted as experimental in the US, while also enjoying the confidence that abusers will be prosecuted. Most of those new devices are also paid for by the government healthcare system. Just saying.

The Laptop Manifesto

The following is an open letter to my fellow students of our local public high school, which has just announced, without warning, that all students will henceforth be required to buy Google Chromebooks at their own expense.


I consider myself a good citizen. I obey the traffic laws when I walk into town. I vote on every issue. I turn in my assignments promptly. I raise my hand and wait to be called on. When my classmates come to me at the beginning of class with a sob story about how they lost their last pencil, and the teacher won’t loan them another for the big test, I am sympathetic to their plight. With education budgets as tight as they are, I am willing to share what I have.

Yet something about the rollout of our school’s new laptop policy does not sit well with me. That the school should announce mere weeks before school begins that henceforth all students shall be mandated to have a specific, high-end device strikes me as, at best, rude, and, at worst, an undue burden on students for a service that is legally supposed to be provided by the state at no cost.

Ours is, after all, a public school. Part of being a public school is being accessible to the public. That means all members of the public. Contrary to the apparent belief of the school board and high school administration, the student population does not consist solely of wealthy, economically stable families. Even though our government, at both the local and state level, is apparently content to routinely leave the burden of basic classroom necessities to students and individual teachers, it is still, legally, the responsibility of the school, not the student, to see that the student is equipped to learn.

Now, I am not opposed to technology. On the contrary, I think our school is long overdue for such a 1:1 program. Nor am I particularly opposed to the ongoing effort to make more class materials digitally accessible. Nor do I even object to the school offering its own Chromebooks to students at the students’ expense. However, there is something profoundly wrong about the school making such costs mandatory.

Public school is supposed to be the default, free option for compulsory education. To enforce compulsory education as our state does (to the point of calling child protective services on the parents of students who miss what the administration considers too many days), and then to push the cost of that education onto families, amounts to a kind of double taxation against families that attend public schools. Moreover, this double taxation has a disproportionate impact on those who need public schools the most.

This program as it stands is unfair, unjust, and, as far as I can see, indefensible. I therefore call upon my fellow students to resist this unjust and arguably illegal decree by refusing to comply. I call in particular upon those who are otherwise able to afford such luxuries as Chromebooks to resist the pressure to bow to the system, and to stand up for your fellow students.

Bretton Woods

So I realized earlier this week, while staring at the return address stamped on the sign outside the small post office on the lower level of the resort my grandfather selected for our family trip, that we were in fact staying in the same hotel that hosted the famous Bretton Woods Conference. That conference produced the Bretton Woods System, which governed post-WWII economic rebuilding around the world, laid the groundwork for our modern economic system, and helped cement the idea of currency as we understand it today.

Needless to say, I find this intensely fascinating; both the conference itself as a gathering of some of the most powerful people at one of the major turning points in history, and the system that resulted from it. Since I can’t recall having spent any time on this subject in my high school economics course, I thought I would go over some of the highlights, along with pictures of the resort that I was able to snap.

Pictured: The Room Where It Happened

First, some background on the conference. The Bretton Woods Conference took place in July of 1944, while the Second World War was still in full swing. The Allied landings in Normandy, less than a month earlier, had been successful in establishing isolated beachheads, but Operation Overlord as a whole could still fail if British, Canadian, American, and Free French forces were prevented from linking up and liberating Paris.

On the Eastern European front, the Red Army had just begun Operation Bagration, the long-planned grand offensive to push Nazi forces out of the Soviet Union entirely and begin driving through occupied Eastern Europe and into Germany. Soviet victories would continue to rack up as the conference went on, as the Red Army executed the largest and most successful offensive in its history, escalating political concerns among the Western Allies about the role the Soviet Union and its newly “liberated” territory would play in a postwar world.

In the Pacific, the Battle of Saipan was winding down towards an American victory, radically changing the strategic situation by putting the Japanese homeland in range of American strategic bombing. Even as the battles raged on, more and more leaders on both sides looked increasingly to the possibility of an imminent Allied victory.

As the specter of rebuilding a world ravaged by the most expensive and most devastating conflict in human history (and hopefully ever) began to loom closer, representatives of the Allied nations met at a resort in Bretton Woods, New Hampshire, at the foot of Mount Washington, to discuss the economic future of a postwar world in the United Nations Monetary and Financial Conference, more commonly referred to as the Bretton Woods Conference. The site was chosen because, in addition to being vacant (the war had effectively killed tourism), the isolation of the surrounding mountains made it suitably defensible against any sort of attack. It was hoped that this show of hospitality and safety would reassure delegates coming from war-torn and occupied parts of the world.

After being told that the hotel had only 200-odd rooms for a conference of 700-odd delegates, most delegates, naturally, decided to bring their families, in many cases bringing as many extended relatives as could be admitted on diplomatic credentials. Of course, this was probably as much about escaping the ongoing horrors in Europe and Asia as it was about getting a free resort vacation.

These were just the delegates. Now imagine adding families, attachés, and technical staff.

As such, every bed within a 22-mile radius was occupied. Staff were forced out of their quarters and relocated to the stable barns to make room for delegates. Even then, guests were sleeping in chairs, in bathtubs, even on the floors of the conference rooms themselves.

The conference was attended by such illustrious figures as John Maynard Keynes (yes, that Keynes) and Harry Dexter White (who, in addition to being the lead American delegate, was almost certainly a spy for the Soviet NKVD, the forerunner to the KGB), who clashed over what, fundamentally, the Allies should aim to establish in a postwar economic order.

Spoiler: That guy on the right is going to keep coming up.

Everyone agreed that the protectionist, mercantilist, and “economic nationalist” policies of the interwar period had contributed both to the severity of the Great Depression and to the collapse of European markets, which created the socioeconomic conditions for the rise of fascism. Everyone agreed that the punitive reparations placed on Germany after WWI had set up European governments for a cascade of defaults and collapses when Germany inevitably failed to pay up, and had driven it to play fast and loose with its currency and trade policies to adhere to the letter of the Treaty of Versailles.

It was also agreed that even if reparations were done away with entirely (which would leave Allied nations such as France and the British Commonwealth bankrupt for their noble efforts), the sheer upfront cost of rebuilding would be nigh impossible to meet by normal economic means, and that leaving the task of rebuilding entire continents to individual nations would inevitably lead to the same kind of zero-sum competition and unsound monetary policy that had led to the prewar economic collapse in the first place. It was decided, then, that the only way to ensure economic stability through the period of rebuilding was to enforce universal trade policies, and to institute a number of centralized financial organizations, under the purview of the United Nations, to oversee postwar rebuilding and monetary policy.

It was also, evidently, the beginning of the age of miniaturized flags.

The devil was in the details, however. The United States, having spent the war safe from serious damage to its economic infrastructure, serving as the “arsenal of democracy”, and being essentially the only country with reserves of capital, wanted to use its position of relative economic supremacy to gain permanent leverage. As the host of the conference and the de facto lead for the western Allies, the US held a great deal of negotiating power, and the US delegates fully intended to use it to see that the new world order would be one friendly to American interests.

Moreover, the US, and to a lesser degree the United Kingdom, wanted to do as much as possible to prevent the Soviet Union from coming to dominate the world after it rebuilt itself. As World War II wound down, the Cold War was winding up. In this light, the news of daily Soviet advances, first pushing the Nazis out of Soviet borders, then steamrolling into Poland, Finland, and the Baltics, was troubling. Even more troubling were the rumors of ruthless NKVD suppression of non-communist partisan groups that had resisted Nazi occupation in Eastern Europe, indicating that the Soviets might be looking to establish their own postwar hegemony.

Pictured: The beginning of a remarkable friendship between US and USSR delegates. (Although something tells me this friendship isn’t going to last.)

The first major set piece of the conference agreement was relatively uncontroversial: the International Bank for Reconstruction and Development, drafted by Keynes and his committee, was established to offer grants and loans to countries recovering from the war. As an independent institution, it was hoped, the IBRD would offer rebuilding nations a flexibility that loans from other governments, with their own financial and political obligations and interests, could not. It was also a precursor to, and later the backbone of, the Marshall Plan, in which the US would spend exorbitant amounts on foreign aid to rebuild capitalism in Europe and Asia in order to prevent the rise of communist movements fueled by lack of opportunity.

The second major set piece is where things get really complicated (I’m massively oversimplifying here, but global macroeconomic policy is inevitably complicated in places): a proposed “International Clearing Union”, devised by Keynes back in 1941, which proved far more controversial.

The plan, as best I am able to understand it, called for all international trade to be handled through a single centralized institution, which would measure the value of all goods and currencies relative to a standard unit, tentatively called a “bancor”. The ICU would then offer incentives to maintain trade balances in proportion to the size of a nation’s economy: charging interest on countries with a major trade surplus, and using the proceeds to devalue the exchange rates of countries with trade deficits, making their imports more expensive and their products more desirable to overseas consumers.
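Since the mechanics can be hard to picture from prose alone, here is a tiny toy model of one clearing period, written in Python. To be clear, this is my own illustration, not Keynes’s actual machinery: the threshold, the interest rate, and the devaluation step are all numbers I made up for the example.

# Toy model of one ICU clearing period. All parameter values are
# hypothetical illustrations; Keynes's actual proposal specified
# different (and far more elaborate) machinery.

SURPLUS_THRESHOLD = 0.10  # a "major" imbalance: 10% of economy size (made up)
SURPLUS_INTEREST = 0.01   # interest charged on excess surpluses (made up)
DEVALUATION_STEP = 0.02   # devaluation applied to deficit currencies (made up)

def clearing_period(balances, rates, economy_size):
    """balances: net trade balance per country, in bancor.
    rates: bancor per unit of national currency.
    economy_size: rough size of each economy, in bancor."""
    for country, balance in balances.items():
        limit = SURPLUS_THRESHOLD * economy_size[country]
        if balance > limit:
            # Surplus countries pay interest to the ICU, an incentive to
            # spend their surplus on imports rather than hoarding it.
            balances[country] -= SURPLUS_INTEREST * (balance - limit)
        elif balance < -limit:
            # Deficit countries see their currency devalued, making their
            # exports cheaper abroad and their imports dearer at home.
            rates[country] *= 1 - DEVALUATION_STEP

balances = {"A": 150.0, "B": -150.0}       # A runs a surplus, B a deficit
rates = {"A": 1.0, "B": 1.0}
economy_size = {"A": 1000.0, "B": 1000.0}
clearing_period(balances, rates, economy_size)
print(balances["A"], rates["B"])           # 149.5 0.98

Run over many periods, the pressure works from both ends: surpluses get taxed away while deficit countries become more competitive, nudging everyone back toward balance.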

The Grand Ballroom was thrown into fierce debate, and the local Boy Scouts who had been conscripted to run microphones between delegates (most of the normal staff having either been drafted or been completely overloaded) struggled to keep up with these giants of economics and diplomacy.

Photo of the Grand Ballroom, slightly digitally adjusted to compensate for bad lighting during our tour

Unsurprisingly, the US delegate, White, was absolutely against Keynes’s harebrained scheme. Instead, he proposed a far less ambitious “International Monetary Fund”, which would judge trade balances and prescribe limits for nations seeking aid from the IMF or IBRD, but would otherwise generally avoid intervening. The IMF did keep Keynes’s idea of judging trade against pre-set exchange rates (also obligatory for members), but it avoided handing the IMF the power to unilaterally alter the value of individual currencies, leaving that instead in the hands of national governments, and merely insisting on certain requirements for aid and membership. It also did away with notions of a supranational currency.

Of course, this raised the question of how to judge currency values other than against each other alone (which was still seen as a bridge too far in the eyes of many). The solution, proposed by White, was simple: judge other currencies against the US dollar. After all, the United States already had the largest and most developed economy. And since other countries had spent the duration of the war buying materiel from the US, it also held the world’s largest reserves of almost every currency, as well as of gold, silver, and sovereign debt. The US was the only country to come out of WWII with enough gold in reserve to stay on the gold standard and still finance postwar rebuilding, which made it the perfect candidate for a default currency.

US, Canadian, and Soviet delegates discuss the merits of Free Trade

Now, you can see this move either as a sensible compromise for a world of countries that couldn’t have gone back to their old ways if they tried, or as a masterstroke by the US government to cement its supremacy at the beginning of the Cold War. Either way, it worked, both in the short term and in the long term, striking a balance of stability and flexibility in monetary policy that underwrote a postwar economic boom, not just in the US, but throughout the capitalist world.

The third set piece was a proposed “International Trade Organization”, which was to oversee implementation and enforcement of the sort of universal free trade policies that almost everyone agreed would be most conducive not only to prosperity, but to peace as a whole. Perhaps surprisingly, this wasn’t terribly divisive at the conference.

The final agreement for the ITO, however, was eventually shot down when the US Senate refused to ratify its charter, partly because the final charter, negotiated in Havana, incorporated many of Keynes’s earlier ideas on an International Clearing Union. Much of the ITO’s basic policy, however, influenced the successful General Agreement on Tariffs and Trade, which would later be replaced by the World Trade Organization.

Pictured: The main hallway as seen from the Grand Ballroom. Notice the moose on the right, above the fireplace.

The Bretton Woods agreement was signed by the Allied delegates in the resort’s Gold Room. Not all countries that signed ratified immediately. The Soviet Union, perhaps unsurprisingly, reversed its position on the agreement, calling the new international organizations “a branch of Wall Street”, and went on to found the Council for Mutual Economic Assistance, a forerunner to the Warsaw Pact, within five years. The British Empire, particularly its overseas possessions, also took its time ratifying, owing to the longstanding colonial trade policies that had to be dismantled for free trade requirements to be met.

The consensus of most economists is that Bretton Woods was a success. The system more or less ceased to exist when Nixon, prompted by Cold War drains on US resources and French schemes to exchange all of France’s reserve US dollars for gold, suspended the dollar’s convertibility to gold, effectively ushering in the age of free-floating fiat currencies: money that has value because we all collectively accept that it does, an assumption that underlies most of our modern economic thinking.

There’s a plaque on the door to the room in which the agreement was signed. I’m sure there’s something metaphorical in there.

While it certainly didn’t last forever, the Bretton Woods system did accomplish its primary goal of setting the groundwork for a stable world economy, capable of rebuilding and maintaining the peace. This is a pretty lofty achievement when one considers the background against which the conference took place, the vast differences between the players, and the general uncertainty about the future.

The vision set forth in the Bretton Woods Conference was an incredibly optimistic, even idealistic, one. It’s easy to scoff at the idea of hammering out an entire global economic system, in less than a month, at a backwoods hotel in the White Mountains, but I think it speaks to the intense optimism and hope for the future that is often left out of the narrative of those dark moments. The belief that we can, out of chaos and despair, forge a brighter future not just for ourselves, but for all, is not in itself crazy, and the relative success of the Bretton Woods System, flawed though it certainly was, speaks to that.

A beautiful picture of Mt. Washington at sunset from the hotel’s lounge

Works Consulted

IMF. “60th Anniversary of Bretton Woods: Background Information – What Is the Bretton Woods Conference?” International Monetary Fund, n.d. Web. 10 Aug. 2017. <http://external.worldbankimflib.org/Bwf/whatisbw.htm>.

“Cooperation and Reconstruction (1944-71).” About the IMF: History. International Monetary Fund, n.d. Web. 10 Aug. 2017. <http://www.imf.org/external/about/histcoop.htm>

Extra Credits. YouTube playlist, n.d. Web. 10 Aug. 2017. <http://www.youtube.com/playlist?list=PLhyKYa0YJ_5CL-krstYn532QY1Ayo27s1>.

Burant, Stephen R. East Germany, a country study. Washington, D.C.: The Division, 1988. Library of Congress. Web. 10 Aug. 2017. <https://archive.org/details/eastgermanycount00bura_0>.

US Department of State. “Proceedings and Documents of the United Nations Monetary and Financial Conference, Bretton Woods, New Hampshire, July 1-22, 1944.” FRASER, Federal Reserve Bank of St. Louis, n.d. Web. 10 Aug. 2017. <https://fraser.stlouisfed.org/title/430>.

Additional information provided by resort staff and exhibitions visited in person.

Reflections on International Women’s Day

I stated previously that I intended to take this blog offline once again in solidarity with the Day Without a Woman strike for International Women’s Day on March 8th. Two things have convinced me to alter my plans slightly. First, the strike organizers seem to be calling for only women to actually strike today, and are encouraging men to participate in other ways. This is fair enough. After all, it’s not my voice being put down, and I would have a hard time coming up with a tangible example of a time that gender discrimination has impacted me directly. (It impacts me indirectly all the time, by holding back scientific progress through the selective suppression of certain groups’ advancement, but I digress.)

Second, and arguably more important, is the point that, while striking and industrial action may be effective means of grabbing headlines, the point of these exercises is not to elicit silence, but conversation. Given that people seem to have this notion that I am a moderately talented communicator, and have chosen to listen to me, it stands to reason that a more appropriate response might be to attempt to add to the conversation myself.

It’s easy not to notice something that doesn’t affect oneself directly. Humans, it seems, possess an extraordinary talent for ignoring things that they feel do not concern them, particularly where knowledge of those things would complicate their lives and their understanding of how the world works. This is probably a good thing on the whole, as it allows us to get through the day without having an existential crisis over the impending heat death of the universe, or feeling continually depressed about the state of affairs for our fellow humans in the developing world. On the other hand, it also makes it distressingly easy to overlook challenges to others when they do not have a direct impact on us.

Recently, I was invited to attend an event regarding the ongoing development and implementation of the Women’s Empowerment Principles at the United Nations. Now, as much as I like to believe that I am a progressive person capable of and inclined to provide and advocate for equal opportunity, it is impossible to deny the simple fact that I am male. And while I can name all kinds of discrimination that I have myself encountered, none of them relate to my sexual and gender identity. And so when it comes to suggesting ways to remedy present injustices, I do not really have a solid background to draw from.

I probably could have gotten away with what I already knew. After all, with my limited experience in educating others on specific issues, and with my commitment to the principles of equality in general, surely I had enough context to, if not contribute on my own, then at least pay homage to the general notion of women’s struggles?

Perhaps. But I know enough people whom I respect for whom this is a serious issue, one worthy of dedicating entire careers to. Additionally, I like to make a point of being an informed interlocutor. It is my firm position that all opinions worthy of serious discussion ought to have a firm factual and logical backing. And given that, in this case unlike most others, I do not have personal background experience to draw upon, it seems only correct that I do my due diligence so that I may reach responsible and informed conclusions.

Thus it transpired that I set myself the goal of becoming, if not an expert, then at least competent, in the field of gender relations and sexual inequality around the world, in the space of just over two weeks. A lofty goal, to be sure, but a worthy one. My reading list included an assortment of United Nations, governmental, and NGO reports, various statistical analyses, news stories, and a few proper books. Actually, calling it a reading list is a tad misleading: in order to cram as much information into as short a time as possible, most of the material in question was consumed in audio format, played at double or triple speed. This is a very effective way of gleaning the key facts without wasting time on frivolities like enjoying the plot.

Most of my initial digging started in various UN organizations, chiefly the media center of the World Health Organization. While not always as in-depth as the respective national organizations, the WHO is useful inasmuch as it provides decent cursory summaries from a global perspective. What was most fascinating to me was that there were surprisingly few hard statistics. The biggest problem listed, particularly in the developing world, was not that women necessarily received lower-quality healthcare, but that most did not receive health care at all, and therefore properly compiled statistics on gender discrepancies in health were notoriously hard to come by. The data doesn’t tell a story; it simply does not exist.

In a bitter irony, the more likely data was to exist for a given region, the less likely it was to show significant gender discrepancies, at least in healthcare. That is to say, by the time rigorous evidence could be compiled, the worst elements of inequality had been subdued. This makes a kind of sense: after all, if the problem is that women aren’t being allowed to participate in public, how exactly are you going to survey them? This also hinted at a theme that would continue to crop up: different regions and cultures are tackling gender inequality from radically different starting points, and face accordingly different challenges.

My second major revelation came while listening to I Am Malala. For those who may have been living under a rock during that timeframe, here is the background: in 2012, Malala Yousafzai, a human rights and women’s education activist in rural Pakistan, was shot by the Taliban, sparking international outrage and renewed interest in the plight of women in the region. Malala survived after being airlifted to the United Kingdom, and has since garnered celebrity status, becoming a goodwill ambassador for the United Nations’ women’s empowerment initiatives.

I have still not made up my mind on whether I will go so far as to say that I liked the book. I do not know that it is the sort of book that is meant to be liked. I did, however, find it quite enlightening. The book is a first-person memoir, a kind of story that I have never been quite as interested in as the classic anecdote. If I am completely honest, I found most of the beginning rather dry. The story felt to me as though it had grown rather repetitive: Malala would have some dream or ambition that would seem fairly modest to those of us living in the developed world, which would then be made extremely contentious and difficult because she was a girl living in her particular culture.

It got to the point where I could practically narrate alongside the audiobook. And then, halfway through the twelfth or so incident where Malala came up short owing to her gender and her culture, it hit me: that’s the whole point. Yes, it is tedious, to the point of being frustrating to the narrative. That is the point. No part of this book would have happened if not for the constant, grating frustration of sexist attitudes and policies. The story couldn’t progress because of those obstacles, and every time it seemed like one hurdle had been surmounted, another cropped up. Because that’s what it’s like. And if I, the reader, was frustrated just trying to hear the story, imagine what it would be like to live the real thing.

A second revelation also occurred to me. In trying to tell of my tribulations living with physical disabilities, I have often been accused of overstating their impact, to the point of copping blame for stirring up unnecessary trouble. People believed, or at least suspected, that while life might be more difficult in a few select areas, surely it couldn’t affect absolutely everything in the way that I suggested it did. Perhaps, then, the problem lay not with the actual task at hand, but in the fact that my perception had been tainted. Perhaps I was not truly as disabled as I claimed, but merely suffered from a sort of persecution complex. I realized that I had unintentionally, unconsciously, made the same mistake in my reading of Malala’s story.

This also helped to answer another important question. In the developed world, we often hear bickering over to what degree we still “need” the women’s empowerment movement. After all, we have full suffrage and equality before the law. Discrimination on the basis of sex is illegal, if it can be proven. Given how much better life is for women in the developed world than in the developing, is it reasonable to expect more? Are these western advocates simply suffering from a persecution complex? Certainly there are those whose concerns are more immediately applicable and actionable than others, and certainly there are those who will insist, no matter how much is done, that it isn’t enough. Such is the nature of politics, and on this point the women’s empowerment movement in the developed world is no different from any other political movement. But on the general question of whether genuine, actionable inequities exist, it now seems far less unreasonable to me to accept that there may yet be more work to be done than I might initially have been led to believe.

I expect that even this conclusion will be contentious. I expect that I shall be told in short order that the conclusions I have drawn from the data I have aggregated are faulty, or else that the data itself is biased or misleading. On this point I concede that I am still quite new to in-depth study of this particular field, and, as mentioned previously, far better minds than mine have devoted entire careers to ironing out the finer points. Reasonable minds may, and indeed do, disagree about specifics. However, if there is one thing that my cursory research, and my analysis thereof, have confirmed in my mind, it is that, on matters of general policy, I would rather err on the side of empathy, choosing to be too trusting in the good faith of others rather than to ignore and unintentionally oppress.

It follows, then, that I should find myself wholeheartedly endorsing and supporting the observation and celebration of today, International Women’s Day, and reaffirming my support for continuation and expansion of the UN’s Women’s Empowerment Principles.

You Have The Right To An Education

I am not sold on the going assumption, seemingly embraced by the new US presidential administration, that education is an industry; at least, not in the sense that the United States government has traditionally approached other industries. While I can appreciate that there may be a great deal that market competition can improve in the field, I feel it is dangerous to categorize education as merely an economic service rather than an essential civil service and government duty. Because if it is an industry, then it ceases to be a government duty.

The idea that education is a human right is not new, nor is it particularly contentious as human rights go. Article 26 of the Universal Declaration of Human Rights reads in part as follows:

Everyone has the right to education. Education shall be free […] Technical and professional education shall be made generally available and higher education shall be equally accessible […] Education shall be directed to the full development of the human personality and to the strengthening of respect for human rights and fundamental freedoms. It shall promote understanding, tolerance and friendship among all nations, racial or religious groups, and shall further the activities of the United Nations for the maintenance of peace.

The United States lobbied strongly for the adoption and promotion of this declaration, and for many years touted it as one of the great distinctions which separated the “free world” from the Soviet Union and its allies. Americans were proud that their country could uphold the promise of free education. The United States remains bound to these promises under international law, but more importantly, is bound by the promise to its own citizens.

Of course, there are other, more nationalist grounds for opposing the erosion of the government’s responsibility to its citizens in this regard. Within the United States, it has long been established that, upon arrest, in order for due process to be observed, a certain exchange must take place between the accused and the authorities. This exchange, known as the Miranda Warning, is well documented in American crime shows.

The Miranda Warning is not merely a procedural custom; it is an enforced safeguard designed to protect the constitutional rights of the accused. Established in 1966 in the US Supreme Court case Miranda v. Arizona, the actual wording is less important than the notion that the accused must be made aware of, and must indicate their understanding of, their constitutional rights regarding due process. Failure to do so, even for the most trivial of offenses, is a failure of the government to uphold those rights, and can constitute grounds for a mistrial.

The decision, then, establishes an important premise: citizens who are not educated about their rights cannot reliably exercise them, and this failure of education constitutes legal grounds sufficient to cast reasonable doubt on the execution of justice. It also establishes that this education is the duty of the government, and that a failure here represents an existential failure of that government. It follows, then, that the government, and the government alone, holds a duty to ensure that each citizen is at least so educated as to reasonably ensure that they can reliably exercise their constitutional rights.

What, then, should we make of talk of turning education into a free-for-all “industry”? Can the government still claim that it is fulfilling its constitutional obligations if it is outsourcing them to third parties? Can that government still claim to be of and by the people if its essential functions are being overseen and administered by publicly unaccountable individuals? And what happens when one of these organizations fails to educate its students to a reasonable standard? Can the government be held accountable for the subsequent miscarriage of justice if the measures necessary to prevent it were undertaken in such a convolutedly outsourced manner as to make direct culpability meaningless?

As usual, I don’t know the answer, although I fear at our present rate, we may need to look at a newer, more comprehensive Miranda Warning.

On the Affordable Care Act

“Heal the sick, cleanse the lepers, raise the dead, cast out devils: freely ye have received, freely give.” – Matthew 10:8

“Here then is the origin and rise of government; namely, a mode rendered necessary by the inability of moral virtue to govern the world.” – Thomas Paine, Common Sense

I do not particularly like the Affordable Care Act. It is unwieldy, needlessly complex, and yes, it costs more than it probably needs to. But at the same time, and this is crucial, it is a vast improvement over the previous state of affairs. Not only that: the Affordable Care Act’s continued coverage of our most vulnerable citizens is both a moral necessity and critical to maintaining our democratic way of life.

While there is no law that says a republic must aim to suppress inequality, there is a basic rule in economics and sociology which holds that those who are truly impoverished, that is, those who cannot meet their basic needs, also cannot reasonably participate, in an informed way at least, in a democratic process [6][7]. After all, if one needs to work continuously in order to keep paying for life support, when exactly is one expected to register to vote, research candidates, call representatives, and actually vote?

It follows, then, that if the function and duty of a democratic-republican government is foremost to safeguard our inalienable natural rights against tyranny, as the founding documents and rhetoric of the United States seem to maintain [8], then the same government also has a mandate and a duty to ensure that citizens are at least not so crushed by poverty and circumstance as to effectively infringe upon those rights.

Such is the moral and constitutional basis for the Affordable Care Act. And while it may be argued that the program is not as efficient as it perhaps ought to be, these are problems to be solved with a scalpel rather than a hatchet. The simple fact remains that without some similar protection, millions of Americans afflicted with chronic conditions would not be in a state to exercise their rights to self-determination. Given that all but the most ardent anarchists maintain that it is the duty of the government to defend the rights of its vulnerable citizens, it follows that it is also the responsibility of the government to, if not provide healthcare outright, then at least ensure that it does not become so crushing a burden as to prevent the free exercise of citizens’ rights.

To the patriotic, there is also the matter of showing that the United States is a civilized, developed nation capable of taking care of its citizens. It is no secret that the American healthcare system ranks extremely unfavorably against those of its fellow developed nations, and has often become the butt of jokes in such countries [9]. While the Affordable Care Act will in no way close this gap singlehandedly, it does go some way towards narrowing it.

There are, of course, other benefits to a robust and accessible medical system, ones more enticing to the self-interested. For starters, ensuring widespread, if not universal, coverage will help mitigate the effects of the next major disease outbreak [5]. Given the distinct possibility that the next major outbreak will also be the pandemic that brings human civilization to the brink of collapse, à la the bubonic plague, having a healthcare system that allows for the timely containment and treatment of infected individuals is probably a worthwhile investment [1][5]. Given this, it is not unreasonable to equate the funding of the Affordable Care Act to that of Civil Defense, now under the auspices of Homeland Security. Notably, very few seem eager to defund the DHS.

It is also worth reiterating that the additional government investment in healthcare subsequent to the Affordable Care Act has in fact brought net savings. It is estimated that each dollar invested yields a return of approximately $1.35 [2], whether in direct savings, fewer welfare payments, or increased tax revenues from newly enabled workers. Money spent on preventative care, such as vaccinations and well-visits, which are notably the things least likely to be purchased by those without coverage, yields returns of around $5 for every $1 invested [3]. Spending on care for those with chronic preexisting conditions, who are only covered in the first place because of the Affordable Care Act, yields an ROI of approximately $3 for every $1 invested, not including the additional benefits gained from preventing such conditions in vulnerable populations [1][2][4].
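To see what those ratios imply in aggregate, here is a quick back-of-the-envelope calculation. The spending amounts below are hypothetical numbers I picked purely for illustration; only the per-dollar return figures come from the sources cited above.

# Back-of-the-envelope check of the cited return ratios. The spending
# amounts are hypothetical; only the per-dollar returns come from the text.

spending = {              # hypothetical annual outlays, in dollars
    "general":      100_000_000,  # ~$1.35 back per $1 [2]
    "preventative":  20_000_000,  # ~$5.00 back per $1 [3]
    "chronic":       30_000_000,  # ~$3.00 back per $1 [1][2][4]
}
returns_per_dollar = {"general": 1.35, "preventative": 5.00, "chronic": 3.00}

total_spent = sum(spending.values())
total_returned = sum(spending[k] * returns_per_dollar[k] for k in spending)
print(f"spent ${total_spent:,}, returned ${total_returned:,.0f} "
      f"({total_returned / total_spent:.2f}x blended)")
# With these made-up weights: spent $150,000,000, returned $325,000,000 (2.17x)

The point is not the specific blended figure, which depends entirely on how the money is split, but that every category cited returns more than it costs.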

But all of this pales in comparison to the moral imperative to help one’s neighbor. Fascinatingly, many of the same figures who now exalt the Bible as the ultimate source of governmental direction seem to be selectively ignoring the biblical mandate to help the poor and vulnerable. The Bible, for its part, is quite clear on the responsibility of all Christians, indeed of all moral people, to provide for the humane treatment of the sick.

When I lived in Australia, healthcare was provided by the government as a matter of course. After all, how could a government provide freedom to a citizenry crippled by disease? How could anyone support a government that had the means to save the lives of its citizens, but chose not to for political reasons? How could anyone be proud of, or be expected to serve, that country? Providing healthcare was viewed as part of what it meant to be a functional, first-world government.

As stated previously, I do not particularly like the Affordable Care Act. I think it was a lily-livered compromise. I am in agreement with the Universal Declaration of Human Rights that health, like life, liberty, and the pursuit of happiness, is an inalienable human right, and that anything short of a full guarantee to protect these rights is a failure of our government and society at a fundamental level. However, given the choice between the Affordable Care Act and what existed before it, I feel compelled to defend the ACA. It is a stopgap, to be sure, and an unwieldy one at that, but until such time as a reasonable replacement emerges, it is in the best interests of all involved to ensure that it remains in effect.

Works Cited:

1. “How Americans can get a better return on their health care investments.” Centers for Disease Control and Prevention. Centers for Disease Control and Prevention, n.d. Web. 13 Jan. 2017.

2. Abrams, Melinda, Stuart Guterman, Rachel Nuzum, Jamie Ryan, Mark Zezza, and Jordan Kiszla. “The Affordable Care Act’s Payment and Delivery System Reforms: A Progress Report at Five Years.” (2015): n. pag. Web.

3. Armstrong, Edward P. “Economic Benefits and Costs Associated With Target Vaccinations.” Journal of Managed Care Pharmacy 13.7 Supp B (2007): 12-15. Web.

4. “Sustained Benefit of Continuous Glucose Monitoring on A1C, Glucose Profiles, and Hypoglycemia in Adults With Type 1 Diabetes.” American Diabetes Association. ADA, n.d. Web. 13 Jan. 2017.

5. “Infection prevention and control in health care for preparedness and response to outbreaks.” WHO. World Health Organization, n.d. Web. 13 Jan. 2017.

6. “Poverty Traps.” Research – Knowledge in Development Note: Poverty Traps. World Bank, n.d. Web. 13 Jan. 2017.

7. Whitley, E., D. Gunnell, D. Dorling, and G. D. Smith. “Ecological study of social fragmentation, poverty, and suicide.” BMJ 319.7216 (1999): 1034–1037. Web.

8. United States of America. Continental Congress. The Declaration of Independence. By Thomas Jefferson. Washington, DC: National Archives and Records Administration, 1992. Print.

9. Munro, Dan. “U.S. Healthcare Ranked Dead Last Compared To 10 Other Countries.” Forbes. Forbes Magazine, 03 Feb. 2015. Web. 13 Jan. 2017.