On Hippocratic Oaths

I’ve been thinking about the Hippocratic Oath this week. It came up while I was wandering around campus during some downtime, when I encountered a mural showing a group of nurses posing heroically, amid a collage of vaguely related items, between old-timey nurse recruitment posters. In the background, the words of the Hippocratic Oath were printed behind the larger-than-life figures. I imagine the artists took cues from military posters that occasionally do similar things with oaths of enlistment.

I took special note of this because, strictly speaking, the Hippocratic Oath isn’t meant for nurses. It could arguably apply to paramedics or EMTs, since, historically at least, a paramedic is a watered-down doctor, the first ambulances being an extension of military hospitals and hence under the aegis of surgeons and physicians rather than nurses. But that kind of pedantic argument not only ignores actual modern-day training requirements, since in most jurisdictions the requirements for nurses are more stringent than those for EMTs and at least as stringent as those for paramedics, but also shortchanges nurses, a group to whom I owe enormous gratitude and for whom I hold immense respect.

Besides which, whether or not the Hippocratic Oath – or rather its modern equivalents, since the oath attributed to Hippocrates himself is recognized as outdated and has been almost universally superseded – is necessarily binding on nurses, it is hard to argue that the basic principles aren’t applicable. Whether or not modern nurses have at their disposal the same curative tools as their doctorate-holding counterparts, they still play an enormous role in patient outcomes. In fact, by some estimates, the quality of nursing staff may actually matter more than the actions undertaken by doctors.

Moreover, all of the ethical considerations still apply. Perhaps most obviously, respect for patients and patient confidentiality. After all, how politely the doctor treats you in their ten minutes of rounds isn’t going to outweigh how your direct caregivers treat you for the rest of the day. And as far as confidentiality goes, who are you more worried will gossip: the nerd who reads your charts and writes out your prescription, or the nurse who’s in your room, undressing you to inject the drugs into the subcutaneous tissue where the sun doesn’t shine?

So I don’t actually mind if nurses are taking the Hippocratic Oath, whether or not it historically applies. But that’s not why it’s been rattling around my mind the last week. 

See, my final paper in sociology is approaching. Actually, it’s been approaching; at this point the paper is waiting impatiently at the door to be let in. My present thinking is that I will follow the suggestion laid down in the syllabus and create a survey for my paper. My current topic concerns medical identification. Plenty of studies in the medical field have exalted medical identification as a simple, cost-effective means of promoting patient safety. But compelling people to wear something that identifies them as part of a historically oppressed minority group has serious implications, implications I think we overlook when we lump people who refuse to wear medical identification into the same group as people who refuse to get vaccinated or take prescribed medication.

What I want to find out in my survey is why people who don’t wear medical identification choose not to. But to really prove (or disprove, as the case may be, since a proper scientific approach demands that possibility) my point, I need to get at the matters at the heart of this issue: medical conditions and minority status. These are sensitive topics, and gathering data on them means collecting potentially compromising personal information.

This leaves me in an interesting position. The fact that I am doing this for a class at an accredited academic institution gives me credibility, if more so with the lay public than among those who know enough about modern science to realize that I have no real earned credentials. But the point remains: if I posted online that I was conducting a survey for my institution, which falls within a stretched interpretation of the truth, I could probably get many people to disclose otherwise confidential information to me.

I have never taken an oath, and I have essentially no oversight in the execution of this survey, other than the bare minimum privacy safeguards required by the FCC in my use of the internet, which I can satisfy through a simple checkbox in the United States. If I were so inclined, I could take the information entrusted to me and either sell it or use it for personal gain. I couldn’t deliberately target individual subjects, more because that would be criminal harassment than because of any breach of trust. But I might be able to get away with posting it online and letting the internet wreak what havoc it will. This would be grossly unethical and bordering on illegal, but I could probably get away with it.

I would never do that, of course. Besides being wrong on so many different counts, including betraying the trust of my friends, my community, and my university, it would undermine trust in the academic and scientific communities at a time when they have come under political attack by those who have a vested interest in discrediting truth. And as a person waiting on a breakthrough cure that will allow me to once again be a fully functional human being, I have a vested interest in supporting these institutions. But I could do it, without breaking any laws or oaths.

Would an oath stop me? If, at the beginning of my sociology class, I had stood alongside my fellow students, with my hand on the Bible I received in scripture class, in which I have sought comfort and wisdom in dark hours, and sworn an oath like the Hippocratic one or its modern equivalents to adhere to ethical best practices and keep to my responsibilities as a student and scientist, albeit of sociology rather than one of the more sciency sciences, would that stop me if I had already decided to sell out my friends?

I actually can’t say with confidence. I’m inclined to say it would, but this is coming from the version of me that wouldn’t do that anyway. The version of me that would cross that line is probably closer to my early-teenage self, whom my modern self has come to regard with a mixture of shame and contempt, and who essentially believed that promises were made to be broken. I can’t say for sure what this version of myself would have done. He shared a lot of my respect for science and protocol, and there’s a chance he might’ve been really into the whole oath vibe. So it could’ve worked. On the other hand, if he thought he would’ve gained more than he had to lose, I can imagine how he would’ve justified it to himself.

Of course, the question of the Hippocratic Oath isn’t really about the individual who takes it, so much as it is about the society around it. It’s not even so much about how that society enforces oaths and punishes oath-breakers. With the exception of perjury, we’ve kind of moved away from Greco-Roman style sacred blood oaths. Adultery and divorce, for instance, are both oath-breaking, but apart from the occasional tut-tut, as a society we’ve more or less just agreed to let them slide. Perhaps as a consequence of longer and more diverse lives, we don’t really care about oaths.

Perjury is another interesting case, though. Because contrary to an occasionally held belief, the crime of perjury isn’t actually affected by whether the lie in question is about some other crime. If you’re on the stand for a charge of which you’re innocent, and your alibi is being at Steak Shack, but you say you were at Veggie Villa, that’s exactly as much perjury as if you had been at the scene of the crime and lied about that. This is because witness testimony is treated legally as fact. The crime of perjury isn’t about trying to get out of being punished. It’s about the integrity of the system. That’s why there’s an oath, and why that oath is taken seriously.

The revival of the Hippocratic Oath as an essential part of the culture of medicine came after World War II, at least partially in response to the conclusion of the Nuremberg Trials and revelations about the Holocaust. Particularly horrifying was how Nazi doctors had been involved in the process, both in the immediate sense of unethical human experimentation, and in providing medical expertise to ensure that the apparatus of extermination was as efficient as possible. The Red Cross was particularly alarmed: here were people who had dedicated their lives to an understanding of the human condition, and who had either sacrificed all sense of morality in the interest of satiating base curiosity, or had actively taken the tools of human progress and used them to inflict destruction in service of an evil end.

Doctors were, and are, protected under the Geneva Conventions. Despite what Hollywood and video games suggest, shooting a medic wearing the medical symbol, even one coming off a landing craft towards your country, is a war crime. As a society, we give doctors enormous power, with the expectation that they will use that power, and their knowledge and skills, to help us. This isn’t just some set of privileges we grant doctors because they’re smart, though; that trust is essential to their job. Doctors can’t perform surgery if they aren’t trusted with knives, and we can’t eradicate polio if no one is willing to be inoculated.

The first of the modern wave of revisions of the Hippocratic Oath to make it relevant and appropriate for today started with the Red Cross after World War II. The goal was twofold. First: establish trust in medical professionals by setting down a simple, overriding set of basic ethical principles that can be distilled down to a simple oath, so that it can be understood by everyone. Second: make this oath not only universal within the field, but culturally ubiquitous, so as to make it effectively self-enforcing. 

It’s hard to say whether this gambit has worked. I’m not sure how you’d design a study to test it. But my gut feeling is that most people trust their own doctors, certainly more than, say, pharmacologists, meteorologists, or economists, at least partially because of the idea of the Hippocratic Oath. The general public understands that doctors are bound by an oath of ethical principles, and this creates trust. It also means that stories about individual incidents of malpractice or ethics breaches tend to be attributed to sole bad actors, rather than large-scale conspiracies. After all, there was an oath, and they broke it; clearly it’s on that person, not the people who came up with the oath.

Other fields, of course, have their own ethical standards. And since, in most places, funding for experiments is contingent on approval from an ethics board, those standards are reasonably well enforced. A rogue astrophysicist, for instance, would be hard pressed to find the cash on their own to unleash their dark matter particle accelerator, or whatever, if no one is funding their electricity bill. This is arguably a more fail-safe model than the medical field, where, with the exception of big experimental projects, ethical reviews mostly happen after something goes wrong.

But if you ask people around the world to rate the trustworthiness of both physicians and astrophysicists, I’d wager a decent sum that more people will say they trust the medical doctor more. It’s not because the ethical review infrastructure keeps doctors better in check, it’s not because doctors are any better educated in their field, and it’s certainly not anything about the field itself that makes medicine more consistent or less error prone. It’s because medical doctors have an oath. And whether or not we treat oaths as a big deal these days, they draw a clear and understandable line in the sand.

I don’t know whether other sciences need their own oath. In terms of reducing ethical breaches, I doubt it would have a serious impact. But it might help with the public trust and relatability problems that the scientific community seems to be suffering. If there were an oath that made it apparent how the language of scientists, unlike that of pundits, is seldom speculative but always couched in facts; how scientists almost never defend their work even when they believe in it, preferring to let the data speak for itself; and how the best scientists already hold themselves to an inhumanly rigid standard of ethics and impartiality in their work, I think it could go a ways towards improving appreciation of science, and our discourse as a whole.

Unreachable

I suspect that my friends think I lie to them about being unreachable as an excuse to simply ignore them. In the modern world there are only a small handful of situations in which a person genuinely can’t be expected to be connected and accessible.

Hospitals, which used to be communications dead zones on account of no-cell-phone policies, have largely been assimilated into the civilized world with the introduction of guest WiFi networks. Airplanes are going the same way, although as of yet WiFi is still a paid commodity, and one sufficiently expensive as to remain a reasonable excuse.

International travel used to be a good excuse, but nowadays even countries that don’t offer affordable and consistent cellular data have WiFi hotspots at cafes and hotels. The only travel destinations that are real getaways in this sense, the kind that let you get away from modern life by disconnecting you from the outside world, are developing countries without infrastructure, and the high seas. This is the best and worst part of cruise ships, which charge truly extortionate rates for slow, limited internet access.

The best bet for those who truly don’t want to be reached is still probably the unspoilt wilderness. Any sufficiently rural area will have poor cell reception, but areas which are undeveloped now are still vulnerable to future development. After all, much of the rural farming areas of the Midwest are flat and open. It only takes one cell tower to get decent, if not necessarily fast, service over most of the area.

Contrast this to the geography of the Appalachian or Rocky Mountains, which block even nearby towers from reaching too far, and in many cases are protected by regulations. Better yet, the geography of Alaska combines several of these approaches, being sufficiently distant from the American heartland that many phone companies consider it foreign territory, as well as being physically huge, challenging to develop, and covered in mountains and fjords that block signals.

I enjoy cruises, and my grandparents enjoy inviting us youngsters up into the mountains of the northeast, and so I spend what is probably, for someone of my generation, a disproportionate amount of time disconnected from digital life. For most of my life, this was an annoyance, but not a problem, mostly because my parents handled anything important enough to have serious consequences, but partially because, if not before social media, then at least before smartphones, being unreachable was a perfectly acceptable and even expected response to attempts at contact.

Much as I still loathe the idea of a phone call, and will in all cases prefer to text someone, the phone call, even unanswered, did provide a level of closure that an unanswered text message simply doesn’t. Even if you got the answering machine, it was clear that you had done your part, and you could rest easy knowing that they would call you back at their leisure; or, if it was urgent, you kept calling until you got them, or it became apparent that they were truly unreachable. There was no ambiguity about whether you had talked to them or not; whether your message had really reached them and they were acting on it, or you had only spoken to a machine.

Okay, sure, there was some ambiguity. Humans have a way of creating ambiguity and drama through whatever medium we use. But these were edge cases, rather than what seems like a design feature of text messages. And I think this paradigm shift is about more than just the technology. Even among asynchronous means, we have seen a shift in expectations.

Take the humble letter, the format to which we analogize our modern instant messages (and more directly, e-mail) most frequently and easily. Back in the day when writing letters was a default means of communication, writing a letter was an action undertaken on the part of the sender, and a thing that happened to the receiver. Responding to a letter by mail was polite where appropriate, but not compulsory. This much the format shares with our modern messages.

But unlike our modern systems, with a letter it was understood that when it arrived, it would be received, opened, read, and replied to all in due course, in the fullness of time, when it was practical for the recipient, and not a moment sooner. To expect a recipient to find a letter, tear it open then and there, and drop everything to write out a full reply at that moment, before rushing it off to the post office was outright silly. If a recipient had company, it would be likely that they would not even open the letter until after their business was concluded, unlike today, where text messages are read and replied to even in the middle of conversation.

Furthermore, it was accepted that a reply, even to a letter of some priority, might take several days to compose, redraft, and send, and it was considered normal to wait until one had a moment to sit down and write out a proper letter, for which one was always sure to have something meaningful to say. Part of this is an artifact of classic retrospect, thinking that in the olden days people knew the art of conversation better, and much of what isn’t is a consequence of economics. Letters cost postage, while today text messaging is often included in phone plans, and in any case social media offers suitable replacements for free.

Except that, for a while at least, the convention held in online spaces too. Back in the early days of email, back when it was E-mail (note the capitalization and hyphenation), and considered a digital facsimile of postage rather than a slightly more formal text message, the accepted convention was that you would sit down to your email, read it thoroughly, and compose your response carefully and in due course, just as you would on hard-copy stationery. Indeed, in our online etiquette classes*, we were told as much. Our instructors made clear that it was better to take time in responding to queries with a proper reply than to get back with a mere one or two sentences.

*Yes, my primary school had online etiquette classes, officially described as “netiquette courses”, but no one used that term except ironically. The courses were instituted after a scandal in parliament, first about students’ education being outmoded in the 21st century, and second about innocent children being unprepared for the dangers of the web, where, as we all know, ruffians and thugs lurk behind every URL. The curriculum was outdated the moment it was made, and it was discontinued only a few years after we finished the program, but aside from that, and a level of internet paranoia that made Club Penguin look laissez-faire, it was helpful and accurately described how things worked.

In retrospect, I think this training helps explain a lot of the anxieties I face with modern social media, and the troubles I have with text messages and email. I am acclaimed by others as an excellent writer and speaker, but brevity is not my strong suit. I can cut a swathe through paragraphs and pages, but I stumble over sentences. When I sit down to write an email, and I do, without fail, actually sit down to do so, I approach the matter with as much gravity as though I were writing with quill and parchment, with all the careful and time-consuming redrafting, and categorical verbosity that the format entails.

But email and especially text messages are not the modern reincarnation of the bygone letter, nor even the postcard, with its shorter format and reduced formality. Aside from a short length matched in history perhaps only by the telegram, the modern text message has forgone not only the trappings of all previous formats but, seemingly, the trappings of form altogether.

Text messages seem to have become accepted not as a form of correspondence so much as an avenue of ordinary conversation. Except this is a modern romanticization of text messages. Because while text messages might well be the closest textual approximation of a face-to-face conversation that doesn’t involve people actually speaking, texting is still not a synchronous conversation.

More important than the associated pleasantries of the genre is that text messages work on an entirely different timescale than letters. Where once, with a letter, it might be entirely reasonable for a reply to take a fortnight, nowadays a delay of more than a single day in responding to a text message between friends is a cause for concern and anxiety.

And if it were really a conversation, if two people were conversing in person, or even over the phone, and one person without apparent reason failed to respond to the other’s prompts for a prolonged period, this would indeed be cause for alarm. But even ignoring the obvious worry that I would feel if my friend walking alongside me in the street suddenly stopped answering me, in an ordinary conversation, the tempo is an important, if underrated, form of communication.

To take an extreme example, suppose one person asks another to marry them. What does it say if the other person pauses? If they wait before answering? How is the first person supposed to feel, compared to how they would feel after an immediate and enthusiastic response? We play this game all the time in spoken conversation, drawing out words or spacing out sentences, punctuating paragraphs to illustrate our point in ways that are not easily translated to text, at least not without the advantage of being able to space out one’s entire narrative in a longform monologue.

We treat text messages less like correspondence, and more like conversation, but have failed to account for the effects of asynchronicity on tempo. It is too easy to infer something that was not meant by gaps in messages; to interpret a failure to respond as a deliberate act, to mistake slow typing for an intentional dramatic pause, and so forth.

I am in the woods this week, which means I am effectively cut off from communication with the outside world. For older forms of communication, this is not very concerning. My mail will still be there when I return, and any calls to the home phone will be logged and recorded to be returned at my leisure. Those who sent letters, or reached an answering machine know, or else can guess, that I am away from home, and can rest easy knowing that their missives will be visible when I return.

My text messages and email inbox, on the other hand, concern me, because of the very real possibility that someone will contact me thinking I am reading messages immediately, since my habit of keeping my phone within arm’s reach at all times is well known, and will interpret my failure to respond as a deliberate snub, when in reality I am out of cell service. Smartphones and text messages have become so ubiquitous and accepted that we seem to have silently arrived at the convention that shooting off a text message to someone is as good as calling them, either on the phone or even in person. Indeed, we say it is better, because text messages give the recipient the option of postponing a reply, even though we all quietly judge those people who take time to respond to messages, and read all the social signals of a sudden conversational pause into the interim, while decrying those who use text messages to write monologues.

I’ll say it again, because it bears repeating after all the complaints I’ve given: I like text messages, and I even prefer them as a communication format. I even like, or at least tolerate, social media messaging platforms, despite having lost my appreciation for social media as a whole. But I am concerned that we, as a society, and as the first generation to really build the digital world into the foundations of our lives, are setting ourselves up for failure in our collective treatment of our means of communication.

When we fail to appreciate the limits of our technological means, and as a result fail to create social conventions that are realistic and constructive, we create needless ambiguity and distress. When we assign social signals to pauses in communication that as often as not have more to do with the manner of communication than with the participants or their intentions, we do a disservice to ourselves and others. We may not mention it aloud, we may not even consciously consider it, but it lingers in our attitudes and impressions. And I would wager that soon enough we will see a general rise in anxiety and ill will towards others.

Boring

I had an earth-shattering revelation the other day: I am a fundamentally boring person.

I’ve known, or at least suspected, this much deep in my heart for some time. The evidence has been mounting for a while. For one, despite enjoying varied cuisine, my stomach cannot tolerate even moderately spicy food, and I generally avoid it. I do not enjoy roller coasters, nor horror movies, as I do not find joy in tricking my brain into believing that I am in danger from something that poses no threat. I prefer books at home to crowded parties. I take walks to get exercise, but do not work out, and would never run a marathon. I immensely enjoy vanilla ice cream.
For a while I justified these eccentricities in various ways. I would cite instructions from my gastroenterologist to avoid foods that aggravate my symptoms, and claim that this dictate must automatically override my preferences. I would say that roller coasters exacerbated my vertigo and dizziness, which is true inasmuch as any movement exacerbates them, and that the signs say people with disabilities ought not to ride. I would argue that my preference for simple and stereotypically bookish behaviors reflected an intellectual or even spiritual superiority which is only perceptible to others of a similar stature.
The far simpler explanation is that I am simply boring. My life may be interesting; I may go interesting places, meet cool people, have grand adventures and experiences, and through this amalgam I may, in conjunction with a decent intelligence and competence in storytelling, be able to weave yarns and tall tales that portray myself as a rugged and fascinating protagonist, but stripped of extenuating circumstances, it seems like I am mostly just a boring guy.
My moment of revelation happened the other night. It was at a party, organized by the conference I was attending. Obviously, professionally arranged conferences aren’t prone to generating the most spectacular ragers, but these are about as close as I get to the stereotypical saturnalia. Going in, I was handed a card valid for one free drink. I turned it over in my hand, considering the possibilities. Given the ingredients behind the bar, there were hundreds of possible permutations to try, even forgoing alcohol because of its interactions with my seizure medication. Plenty of different options to explore.
I considered the value of the card, both compared to the dollar cost of drinks, and in terms of drinks as a social currency. Buying someone else a drink is, after all, as relevant to the social dynamic as having one for oneself. I looked around the room, noting the personalities. If I were a smooth operator, I could buy a drink for one of the ladies as an icebreaker. If I sought adventure and entertainment, I could use my free drink to entice one of the more interesting personalities, perhaps the professional actor, or the climber who summited Everest for the Discovery Channel, to tell me a story. Or, if I wanted to work on networking, I could approach one of the movers and shakers in the room, buying myself a good first impression.
Instead, I took the boring option. I slid the card into my pocket, walked over to the forlorn cart by the edge of the room, and poured myself a glass of Crystal Light red punch, which, just to hammer the point home, I watered down by about half. As I walked over towards an empty standing table, the revelation hit me, and the evidence that had been building up calcified into a pattern. I was boring.
Now, as character traits go, being boring is far from the worst. In many respects, it is an unsung positive. As my pediatrician once explained to me during week three of hospital quarantine, as my eighth group of residents came through my room to see if anyone could provide new guesses about my condition and prognosis: abnormal or fascinating is, from a medical perspective, bad news. Boring people don’t get arrested, hunted by security services, or disappeared in the night. Boring people get through alright.
Indeed, Catherine the Great, Empress of Russia, once described her second great lover (and the first to avoid being disgraced from court and/or assassinated after the fact), Alexander Vasilchikov, as “the most boring man in all of Russia”. Contemporary accounts generally concur with this assessment, describing him as polite, intellectual, and effete. When he was eventually replaced at court by the war hero Potemkin, he received what amounted to bribes to leave court quietly: a lavish Moscow estate, a pile of rubles, and a generous pension. He married some years later, and by historical accounts, lived happily ever after.
Now, sure, Vasilchikov isn’t as well remembered as Potemkin or Orlov, but he wound up alright. Orlov was stripped of his rank unceremoniously by an enraged Empress, and Potemkin’s historical legacy is forever marred by rumors, created by the Ottomans and his opponents at court, about his deep insecurities, culminating in the story that he invented Potemkin villages to impress the Empress on her inspection tours of conquered territory.
So being boring isn’t inherently bad. And should I choose to own it, it means I can opt to focus on the things that I do enjoy, rather than compelling myself to attend parties where I feel totally alienated.

Bretton Woods

So I realized earlier this week, while staring at the return address stamped on the sign outside the small post office on the lower level of the resort my grandfather selected for our family trip, that we were in fact staying in the same hotel that hosted the famous Bretton Woods Conference: the conference that produced the Bretton Woods System, which governed post-WWII economic rebuilding around the world, laid the groundwork for our modern economic system, and helped cement the idea of currency as we consider it today.

Needless to say, I find this intensely fascinating: both the conference itself, as a gathering of some of the most powerful people at one of the major turning points in history, and the system that resulted from it. Since I can’t recall having spent any time on this subject in my high school economics course, I thought I would go over some of the highlights, along with pictures of the resort that I was able to snap.

Pictured: The Room Where It Happened

First, some background on the conference. The Bretton Woods conference took place in July of 1944, while the Second World War was still in full swing. The allied landings in Normandy, less than a month earlier, had been successful in establishing isolated beachheads, but Operation Overlord as a whole could still fail if British, Canadian, American, and Free French forces were prevented from linking up and liberating Paris.

On the Eastern European front, the Red Army had just begun Operation Bagration, the long planned grand offensive to push Nazi forces out of the Soviet Union entirely, and begin pushing offensively through occupied Eastern Europe and into Germany. Soviet victories would continue to rack up as the conference went on, as the Red Army executed the largest and most successful offensive in its history, escalating political concerns among the western allies about the role the Soviet Union and its newly “liberated” territory could play in a postwar world.

In the Pacific, the Battle of Saipan was winding down towards an American victory, radically changing the strategic situation by putting the Japanese homeland in range of American strategic bombing. Even as the battles raged on, more and more leaders on both sides looked increasingly to the possibility of an imminent Allied victory.

As the specter of rebuilding a world ravaged by the most expensive and most devastating conflict in human history (and hopefully ever) began to seem closer, representatives of all nations in the allied powers met in a resort in Bretton Woods, New Hampshire, at the foot of Mount Washington, to discuss the economic future of a postwar world in the United Nations Monetary and Financial Conference, more commonly referred to as the Bretton Woods Conference. The site was chosen because, in addition to being vacant (since the war had effectively killed tourism), the isolation of the surrounding mountains made the site suitably defensible against any sort of attack. It was hoped that this show of hospitality and safety would assuage delegates coming from war torn and occupied parts of the world.

After being told that the hotel had only 200-odd rooms for a conference of 700-odd delegates, most delegates, naturally, decided to bring their families, in many cases bringing as many extended relatives as could be admitted on diplomatic credentials. Of course, this was probably as much about escaping the ongoing horrors in Europe and Asia as it was about getting a free resort vacation.

These were just the delegates. Now imagine adding families, attachés, and technical staff.

As such, every bed within a 22-mile radius was occupied. Staff were forced out of their quarters and relocated to the stable barns to make room for delegates. Even then, guests were sleeping in chairs, bathtubs, even on the floors of the conference rooms themselves.

The conference was attended by such illustrious figures as John Maynard Keynes (yes, that Keynes) and Harry Dexter White (who, in addition to being the lead American delegate, was also almost certainly a spy for the Soviet NKVD, the forerunner to the KGB), who clashed on what, fundamentally, should be the aim of the allies to establish in a postwar economic order.

Spoiler: That guy on the right is going to keep coming up.

Everyone agreed that the protectionist, mercantilist, and “economic nationalist” policies of the interwar period had contributed both to the utter collapse that was the Great Depression and to the collapse of European markets, which created the socioeconomic conditions for the rise of fascism. Everyone agreed that the punitive reparations placed on Germany after WWI had set up European governments for a cascade of defaults and collapses when Germany inevitably failed to pay up, and turned to playing fast and loose with its currency and trade policies to adhere to the letter of the Treaty of Versailles.

It was also agreed that even if reparations were done away with entirely, which would leave Allied nations such as France and the British Commonwealth bankrupt for their noble efforts, the sheer upfront cost of rebuilding would be nigh impossible to meet by normal economic means, and that leaving entire continents to rebuild on their own would inevitably lead to the same kind of zero-sum competition and unsound monetary policy that had led to the prewar economic collapse in the first place. It was decided, then, that the only way to ensure economic stability through the period of rebuilding was to enforce universal trade policies, and to institute a number of centralized financial organizations under the purview of the United Nations to oversee postwar rebuilding and monetary policy.

It was also, evidently, the beginning of the age of miniaturized flags.

The devil was in the details, however. The United States, having spent the war safe from serious economic infrastructure damage, serving as the “arsenal of democracy”, and generally being the only country that had reserves of capital, wanted to use its position of relative economic supremacy to gain permanent leverage. As the host of the conference and the de-facto lead for the western allies, the US held a great deal of negotiating power, and the US delegates fully intended to use it to see that the new world order would be one friendly to American interests.

Moreover, the US, and to a lesser degree, the United Kingdom, wanted to do as much as possible to prevent the Soviet Union from coming to dominate the world after it rebuilt itself. As World War II was beginning to wind down, the Cold War was beginning to wind up. To this end, the news of daily Soviet advances, first pushing the Nazis out of its borders, and then steamrolling into Poland, Finland, and the Baltics was troubling. Even more troubling were the rumors of the ruthless NKVD suppression of non-communist partisan groups that had resisted Nazi occupation in Eastern Europe, indicating that the Soviets might be looking to establish their own postwar hegemony.

Pictured: The beginning of a remarkable friendship between US and USSR delegates (although something tells me this friendship isn't going to last)

The first major set piece of the conference agreement was relatively uncontroversial: the International Bank for Reconstruction and Development, drafted by Keynes and his committee, was established to offer grants and loans to countries recovering from the war. As an independent institution, it was hoped that the IBRD would offer flexibility to rebuilding nations that loans from other governments, with their own financial and political obligations and interests, could not. This was also a precursor to, and later backbone of, the Marshall Plan, in which the US would spend exorbitant amounts on foreign aid to rebuild capitalism in Europe and Asia in order to prevent the rise of communist movements fueled by lack of opportunity.

The second major set piece is where things get really complicated. (I’m massively oversimplifying here, but global macroeconomic policy is inevitably complicated in places.) This set piece, a proposed “International Clearing Union” devised by Keynes back in 1941, was far more controversial.

The plan, as best I am able to understand it, called for all international trade to be handled through a single centralized institution, which would measure the value of all goods and currencies relative to a standard unit, tentatively called a “bancor”. The ICU would then offer incentives to maintain trade balances relative to the size of a nation’s economy, by charging interest on countries running major trade surpluses and using the proceeds to devalue the exchange rates of countries running deficits, making their imports more expensive and their products more desirable to overseas consumers.
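To get the mechanism straight in my own head, I sketched it out as a toy calculation. To be clear, this is only my illustration of the incentive structure described above, not Keynes’s actual proposal: the function, the levy rate, and the devaluation step are all invented for the sake of the example.

```python
# Toy sketch of one clearing-union settlement round (illustrative only).
# Hypothetical parameters: levy_rate and devaluation_step are made up,
# not figures from Keynes's 1941 plan.

def clearing_union_round(balances, levy_rate=0.01, devaluation_step=0.02):
    """balances: country -> trade balance in bancor (positive = surplus).

    Returns the interest charged on surplus countries, the pooled proceeds,
    and a notional devaluation applied to each deficit country's exchange rate.
    """
    # Charge interest on countries running a trade surplus.
    levies = {c: b * levy_rate for c, b in balances.items() if b > 0}
    pool = sum(levies.values())

    # Spread a devaluation signal across deficit countries,
    # proportional to the size of each shortfall.
    deficits = {c: -b for c, b in balances.items() if b < 0}
    total_deficit = sum(deficits.values()) or 1.0
    devaluations = {c: devaluation_step * (d / total_deficit) for c, d in deficits.items()}

    return levies, pool, devaluations


# Hypothetical balances, in bancor, for three made-up countries.
levies, pool, devaluations = clearing_union_round({"A": 500.0, "B": -300.0, "C": -200.0})
print(levies)        # {'A': 5.0}
print(pool)          # 5.0
print(devaluations)  # roughly {'B': 0.012, 'C': 0.008}
```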

The Grand Ballroom was thrown into fierce debate, and the local Boy Scouts that had been conscripted to run microphones between delegates (most of the normal staff either having been drafted, or completely overloaded) struggled to keep up with these giants of economics and diplomacy.

Photo of the Grand Ballroom, slightly digitally adjusted to compensate for bad lighting during our tour

Unsurprisingly, the US delegate, White, was absolutely against Keynes’s harebrained scheme. Instead, he proposed a far less ambitious “International Monetary Fund”, which would judge trade balances, and prescribe limits for nations seeking aid from the IMF or IBRD, but otherwise would generally avoid intervening. The IMF did keep Keynes’s idea of judging trade based on a pre-set exchange rate (also obligatory for members), but avoided handing over the power to unilaterally affect the value of individual currencies to the IMF, instead leaving it in the hands of national governments, and merely insisting on certain requirements for aid and membership. It also did away with notions of a supranational currency.

Of course, this raised the question of how to judge currency values other than against each other alone (which was still seen as a bridge too far in the eyes of many). The solution, proposed by White, was simple: judge other currencies against the US dollar. After all, the United States was already the largest and most developed economy. And since other countries had spent the duration of the war buying materiel from the US, it also held the world’s largest reserves of almost every currency, including gold and silver, and sovereign debt. The US was the only country to come out of WWII with enough gold in reserve to stay on the gold standard and also finance postwar rebuilding, which made it a perfect candidate as a default currency.

US, Canadian, and Soviet delegates discuss the merits of Free Trade

Now, you can see this move either as a sensible compromise for a world of countries that couldn’t have gone back to their old ways if they tried, or as a master stroke attempt by the US government to cement its supremacy at the beginning of the Cold War. Either way, it worked as a solution, both in the short term, and in the long term, creating a perfect balance of stability and flexibility in monetary policy for a postwar economic boom, not just in the US, but throughout the capitalist world.

The third set piece was a proposed “International Trade Organization”, which was to oversee implementation and enforcement of the sort of universal free trade policies that almost everyone agreed would be most conducive not only to prosperity, but to peace as a whole. Perhaps surprisingly, this wasn’t terribly divisive at the conference.

The final agreement for the ITO, however, was eventually shot down when the US Senate refused to ratify its charter, partly because the final charter, negotiated at a later conference in Havana, incorporated many of Keynes’s earlier ideas on an International Clearing Union. Much of the basic policy of the ITO, however, influenced the successful General Agreement on Tariffs and Trade, which would later be replaced by the World Trade Organization.

Pictured: The main hallway as seen from the Grand Ballroom. Notice the moose on the right, above the fireplace.

The Bretton Woods agreement was signed by the allied delegates in the resort’s Gold Room. Not all countries that signed immediately ratified. The Soviet Union, perhaps unsurprisingly, reversed its position on the agreement, calling the new international organizations “a branch of Wall Street”, going on to found the Council for Mutual Economic Assistance, a forerunner to the Warsaw Pact, within five years. The British Empire, particularly its overseas possessions, also took time in ratifying, owing to the longstanding colonial trade policies that had to be dismantled in order for free trade requirements to be met.

The consensus of most economists is that Bretton Woods was a success. The system more or less ceased to exist when Nixon, prompted by Cold War drains on US resources, and by French schemes to exchange France’s reserves of US dollars for gold, suspended the gold standard for the US dollar, effectively ushering in the age of free-floating fiat currencies; that is, money that has value because we all collectively accept that it does; an assumption that underlies most of our modern economic thinking.

There’s a plaque on the door to the room in which the agreement was signed. I’m sure there’s something metaphorical in there.

While it certainly didn’t last forever, the Bretton Woods system did accomplish its primary goal of setting the groundwork for a stable world economy, capable of rebuilding and maintaining the peace. This is a pretty lofty achievement when one considers the background against which the conference took place, the vast differences between the players, and the general uncertainty about the future.

The vision set forth in the Bretton Woods Conference was an incredibly optimistic, even idealistic, one. It’s easy to scoff at the idea of hammering out an entire global economic system, in less than a month, at a backwoods hotel in the White Mountains, but I think it speaks to the intense optimism and hope for the future that is often left out of the narrative of those dark moments. The belief that we can, out of chaos and despair, forge a brighter future not just for ourselves, but for all, is not in itself crazy, and the relative success of the Bretton Woods System, flawed though it certainly was, speaks to that.

A beautiful picture of Mt. Washington at sunset from the hotel’s lounge

Works Consulted

IMF. “60th Anniversary of Bretton Woods.” 60th Anniversary – Background Information, what is the Bretton Woods Conference. International Monetary Fund, n.d. Web. 10 Aug. 2017. <http://external.worldbankimflib.org/Bwf/whatisbw.htm>.

“Cooperation and Reconstruction (1944-71).” About the IMF: History. International Monetary Fund, n.d. Web. 10 Aug. 2017. <http://www.imf.org/external/about/histcoop.htm>

YouTube. Extra Credits, n.d. Web. 10 Aug. 2017. <http://www.youtube.com/playlist?list=PLhyKYa0YJ_5CL-krstYn532QY1Ayo27s1>.

Burant, Stephen R. East Germany, a country study. Washington, D.C.: The Division, 1988. Library of Congress. Web. 10 Aug. 2017. <https://archive.org/details/eastgermanycount00bura_0>.

US Department of State. “Proceedings and Documents of the United Nations Monetary and Financial Conference, Bretton Woods, New Hampshire, July 1-22, 1944.” Proceedings and Documents of the United Nations Monetary and Financial Conference, Bretton Woods, New Hampshire, July 1-22, 1944 – FRASER – St. Louis Fed. N.p., n.d. Web. 10 Aug. 2017. <https://fraser.stlouisfed.org/title/430>.

Additional information provided by resort staff and exhibitions visited in person.

History Has its Eyes on You

In case it isn’t obvious from some of my recent writings, I’ve been thinking a lot about history. This has been mostly the fault of John Green, who decided, in a recent step of his ongoing scavenger hunt, to pitch the age-old question: “[I]s it enough to behold the universe of which we are part, or must we leave a footprint in the moondust for it all to have been worthwhile?” It’s a question that I have personally struggled with a great deal, more so recently as my health and circumstances have made it clear that trying to follow the usual school > college > career > marriage > 2.5 children > retirement and in that order thank you very much life path is a losing proposition.

The current political climate also has me thinking about the larger historical context of the present moment. Most people, regardless of their political affiliation, agree that our present drama is unprecedented, and the manner in which it plays out will certainly be significant to future generations. There seems to be a feeling in the air, a zeitgeist, if you will, that we are living in a critical time.

I recognize that this kind of talk isn’t new. Nearly a millennium ago, the participants in the First Crusade, on both sides, believed they were living in the end times. The fall of Rome was acknowledged by most contemporary European scholars to be the end of history. Both world wars were regarded as the war to end all wars, and for many, including the famed George Orwell, the postwar destruction was regarded as the insurmountable beginning of the end for human progress and civilization. Every generation has believed that its problems were of such magnitude that they would irreparably change the course of the species.

Yet for every one of these times when a group has mistakenly believed that radical change is imminent, there has been another revolution that has arrived virtually unannounced because people assumed that life would always go on as it always had gone on. Until the 20th century, imperial rule was the way of the world, and European empires were expected to last for hundreds or even thousands of years. In the space of a single century, Marxism-Leninism went from being viewed as a fringe phenomenon, to a global threat expected to last well into the time when mankind was colonizing other worlds, to a discredited historical footnote. Computers could never replace humans in thinking jobs, until they suddenly began to do so in large numbers.

It is easy to look at history with perfect hindsight and be led to believe that this is the way things would always have gone regardless. This is especially true for anyone born in the past twenty-five years, in an age after superpowers, where the biggest threat to the current world order has always been fringe radicals living in caves. I mean, really, am I just supposed to believe that there were two Germanies that both hated each other, and that everyone thought this was perfectly normal and would go on forever? Sure, there are still two Koreas, but no one really takes that division all that seriously anymore, except maybe the Koreans.

I’ve never been quite sure where I personally fit into history, and I’m sure a large part of that is because nothing of real capital-H Historical Importance has happened close to me in my lifetime. With the exception of the September 11th attacks, which happened so early in my life, and while I was living overseas, that they may as well have happened a decade earlier during the Cold War, and the rise of smartphones and social media, which happened only just as I turned old enough to never have known an adolescence without Facebook, the historical setting has, for the most part, stayed the same for my whole life.

The old people in my life have told me about watching or hearing about the moon landing, or the fall of the Berlin Wall, and about how it was a special moment because everyone knew that this was history unfolding in front of them. Until quite recently, the closest experiences I had in that vein were New Year’s celebrations, which always carry with them a certain air of historicity, and getting to stay up late (in Australian time) to watch a shuttle launch on television. Lately, though, this has changed, and I feel more and more that the news I am seeing today may well turn out to be a turning point in the historical narrative that I will tell my children and grandchildren.

Moreover, I increasingly feel a sensation that I can only describe as historical pressure; the feeling that this turmoil and chaos may well be the moment that leaves my footprint in the moondust, depending on how I act. The feeling that the world is in crisis, and it is up to me to cast my lot in with one cause or another.

One of my friends encapsulated this feeling with a quote, often attributed to Vladimir Lenin, but which it appears is quite likely from some later scholar or translator.
“There are decades where nothing happens; and there are weeks where decades happen.”
Although I’m not sure I entirely agree with this sentiment (I can’t, to my mind, think of a single decade where absolutely nothing happened), I think this illustrates the point that I am trying to make quite well. We seem to be living in a time where change is moving quickly, in many cases too quickly to properly contextualize and adjust, and we are being asked to pick a position and hold it. There is no time for rational middle ground because there is no time for rational contemplation.

Or, to put it another way: It is the best of times, it is the worst of times, it is the age of wisdom, it is the age of foolishness, it is the epoch of belief, it is the epoch of incredulity, it is the season of Light, it is the season of Darkness, it is the spring of hope, it is the winter of despair, we have everything before us, we have nothing before us, we are all going direct to Heaven, we are all going direct the other way – in short, the period is so far like the present period, that some of its noisiest authorities insist on its being received, for good or for evil, in the superlative degree of comparison only.

How, then, will this period be remembered? How will my actions, and the actions of my peers, go down in the larger historical story? Perhaps in future media, the year 2017 will be thought of as “just before that terrible thing happened, when everyone knew something bad was happening but none yet had the courage to face it”, the way we think of the early 1930s. Or will 2017 be remembered like the 1950s, as the beginning of a brave new era which saw humanity in general and the west in particular reach new heights?

It seems to be a recurring theme in these sorts of posts that I finish with something to the effect of “I don’t know, but maybe I’m fine not knowing in this instance”. This remains true, but I also certainly wish to avoid encouraging complacency. Not knowing the answers is okay, it’s human, even. But not continuing to question in the first place is how we wind up with a far worse future.

Keep Calm and Carry On

Today, we know that phrase as, well, a poster of quintessential Britishness. It is simply another of our twenty-first-century truisms, not unlike checking oneself before wrecking oneself. Yet the phrase has a far darker history.

In 1940, war hysteria in the British Isles was at its zenith. To the surprise of everyone, Nazi forces had bypassed the Maginot Line and steamrolled into Paris. British expeditionary forces at Dunkirk had taken heavy casualties, and been forced to abandon most of their equipment during the hastily organized evacuation. In Great Britain itself, the Home Guard had been activated, and overeager ministers began arming them with pikes and other medieval weapons [10]. For many, a German invasion of the home isles seemed imminent.

Impelled by public fear and worried politicians, the British government began drawing up its contingency plans for a last stand on the British Isles. Few military strategists honestly believed that the German invasion would materialize. Allied intelligence made it clear that the Germans did not possess an invasion fleet, nor the necessary manpower, support aircraft, and logistical capacity to sustain more than a few minor probing raids [5]. Then again, few had expected France to fall so quickly, and given the Nazis’ track record so far, no one was willing to take chances [3].

Signposts were removed across the country to confuse invading forces. Evacuation plans for key government officials and the royal family were drawn up. Potential landing sites for a seaborne invasion were identified, and marked for saturation with every chemical weapon in the British stockpile. Up to that point, the threat of mutually assured destruction had prevented the large-scale use of chemical weapons as seen in WWI; if an invasion of the homelands began, however, all bets would be off. Anti-invasion plans called for the massive use of chemical weapons against invading forces, and for both chemical and biological weapons against German cities, intended to depopulate and render much of Europe uninhabitable [4][7][8].

Strategists studying prior German attacks, in particular the combined-arms shock tactics which had allowed Nazi forces to overcome superior numbers and fortifications, became convinced that the successful defence of the realm depended on avoiding the confusion and stampedes of refugees from the civilian population that had been seen in France and the Low Countries. To this end, the Ministry of Information was tasked with suppressing panic and ensuring that civilians complied with government and military instructions. Official pamphlets reiterated that citizens must not evacuate unless and until instructed to do so.

IF THE GERMANS COME […] YOU MUST REMAIN WHERE YOU ARE. THE ORDER IS “STAY PUT”. […] BE READY TO HELP THE MILITARY IN ANY WAY. […] THINK BEFORE YOU ACT. BUT THINK ALWAYS OF YOUR COUNTRY BEFORE YOU THINK OF YOURSELF. [9]

Yet some remained worried that this message would get lost in the confusion on invasion day. People would be scared, and would perhaps need to be reminded. “[T]he British public were suspicious of lofty sentiment and reasoned argument. […] Of necessity, the wording and design had to be simple, for prompt reproduction and quick absorption.”[1] So plans were made to ensure that the message would be unmistakable and omnipresent: instead of a long, logical pamphlet, a simple, clear message presented in a visually distinctive manner. The message, a mere five words, captured the entire spirit of the British home front in a single poster.

KEEP CALM AND CARRY ON

The poster was never widely distributed during World War II. The Luftwaffe, believing that it was not making enough progress towards the total air supremacy deemed crucial for any serious invasion, switched its strategy from targeting RAF assets to terror bombing campaigns against British cities. Luckily for the British, who by their own assessment were two or three weeks of losses away from ceding air superiority [5], this strategy, though it inflicted more civilian casualties, eased pressure on the RAF and military infrastructure enough to allow them to recover. Moreover, as the British people began to adapt to “the Blitz”, Allied resolve strengthened rather than shattered.

The German invasion never materialized. And as air raids became more a fact of life, and hence less terrifying and disorienting to civilians, the need for a propaganda offensive to quell panic and confusion subsided. As the RAF recovered, and particularly as German offensive forces began to shift to the new Soviet front, fears of a British collapse faded. Most of the prepared “Keep Calm” posters were gradually recycled as part of the paper shortage.

With perfect historical retrospect, it is easy to recognize that a large-scale German invasion and occupation of the British Isles would have been exceedingly unlikely, and that victory against an entrenched and organized British resistance would have been nigh impossible. The British government was on point when it stated that the key to victory against an invasion was level-headedness. Given popular reaction to the rediscovered copies of the “Keep Calm” design, it also seems that they were on the mark there.

The poster and the phrase it immortalized have long since become decoupled from their historical context. Yet not, interestingly, from the essence they sought to convey. It is telling that many of the new appropriations of the phrase, as seen in a targeted image search, have to do with zombies, or other staples of the post-apocalyptic genre. In its original design, the poster adorns places where anxiety is commonplace, such as workplaces and dorm rooms, and has become go-to advice for those in stressful situations.

This last week in particular has been something of a roller coaster for me. I feel characteristically anxious about the future, and yet at the same time lack sufficient information to make a workable action plan to see me through these troubling times. At a doctor’s appointment, I was asked what my plan was for the near future. With no other option, I picked a response which has served both myself and my forebears well during dark hours: Keep Calm and Carry On.

Works Consulted

1) “Undergraduate Dissertation – WWII Poster Designs, 1997.” Drbexl.co.uk. N.p., 23 Jan. 2016. Web. 11 May 2017. <http://drbexl.co.uk/1997/07/11/undergraduate-dissertation-1997/>.

2) “Dunkirk rescue is over – Churchill defiant.” BBC News. British Broadcasting Corporation, 04 June 1940. Web. 11 May 2017. <http://news.bbc.co.uk/onthisday/hi/dates/stories/june/4/newsid_3500000/3500865.stm>.

3) Inman, Richard. “Fighting for Britain.” Wolverhampton History – Wolverhampton History. Wolverhampton City Council, 13 Dec. 2005. Web. 11 May 2017. <http://www.wolverhamptonhistory.org.uk/people/at_war/ww2/fighting3>.

4) Bellamy, Christopher. “Sixty secret mustard gas sites uncovered.” The Independent. Independent Digital News and Media, 03 June 1996. Web. 11 May 2017. <http://www.independent.co.uk/news/sixty-secret-mustard-gas-sites-uncovered-1335343.html>.

5) “Invasion Imminent.” Invasion Imminent – Suffolk Anti-invasion defences. N.p., n.d. Web. 11 May 2017. <http://pillboxes-suffolk.webeden.co.uk/invasion-imminent/4553642028>.

6) “Large bomb found at ex-Navy base.” BBC News. British Broadcasting Corporation, 22 Apr. 2006. Web. 11 May 2017. <http://news.bbc.co.uk/2/hi/uk_news/england/hampshire/4934102.stm>.

7) Ministry of Information. CIVIL DEFENCE – BRITAIN’S WARTIME DEFENCES, 1940. Digital image. Imperial War Museums. n.d. Web. 11 May 2017. <http://www.iwm.org.uk/collections/item/object/205019014>.

8) “Living with anthrax island.” BBC News. British Broadcasting Corporation, 08 Nov. 2001. Web. 11 May 2017. <http://news.bbc.co.uk/2/hi/uk_news/1643031.stm>.

9) Ministry of Information. If the Invader Comes. 1940. Print.

10) Ramsey, Syed. Tools of War: History of Weapons in Medieval Times. N.p.: Alpha Editions, n.d. Print.