Some Like It Temperate

I want to share something that took me a while to understand, but once I did, it changed my understanding of the world around me. I’m not a scientist, so I’m probably not going to get this exactly perfect, and I’ll defer to professional judgment, but maybe I can help illustrate the underlying concept.

So temperature is not the same thing as hot and cold. In fact, temperature and heat aren’t really bound together inherently. On Earth they’re usually correlated, and as humans, our sensory organs perceive them through the same mechanism in relative terms, which is why we usually think of them together. This sensory shortcut works for most of the human experience, but it can become confusing and counterintuitive when we try to look at physical systems outside the scope of everyday life.

So what is temperature? Well, in the purest sense, temperature is a measure of the average kinetic energy among a group of particles. How fast are they going, how often are they bumping into each other, and how much energy do they transfer when they do? This is how temperature and phase of matter correlate. Liquid water has a higher temperature than ice because its molecules are moving around more, with more energy. Because the molecules are moving around more, they aren’t locked into a rigid structure, which is why it’s easier to cut through water than ice. Likewise, it’s easier still to cut through steam than water. Temperature is a measure of molecular energy, not hotness. Got it? Good, because it’s about to get complicated.
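If it helps to see what “average kinetic energy” means in practice, here is a rough sketch in Python of the textbook ideal-gas relationship; the nitrogen-molecule mass is an approximation, and real air is messier than this, so treat it as an illustration rather than a calculation to rely on.

```python
import math

# Rough illustration: for an ideal gas, the average kinetic energy per molecule
# is (3/2) * k_B * T, so "higher temperature" literally means "faster molecules".
BOLTZMANN = 1.380649e-23   # J/K
N2_MASS = 4.65e-26         # kg, approximate mass of one nitrogen molecule

def avg_kinetic_energy(temp_kelvin):
    """Average kinetic energy per molecule at a given temperature (ideal gas)."""
    return 1.5 * BOLTZMANN * temp_kelvin

def rms_speed(temp_kelvin, molecule_mass=N2_MASS):
    """Typical (root-mean-square) molecular speed at that temperature."""
    return math.sqrt(3 * BOLTZMANN * temp_kelvin / molecule_mass)

print(avg_kinetic_energy(300))  # ~6.2e-21 J per molecule at room temperature
print(rms_speed(273))           # ice-cold air: ~490 m/s
print(rms_speed(300))           # a warm room:  ~520 m/s
```

The only point of the numbers is that a warmer gas is a gas whose molecules are, on average, moving faster; nothing in there says anything about how it feels.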

So something with more energy has a higher temperature. This works for everything we’re used to thinking about as being hot, but it applies in a wider context. Take radioactive materials. Or don’t, because they’re dangerous. Radioactivity is dangerous because it has a lot of energy, and it throws that energy off in random directions. Something that’s radioactive won’t necessarily feel hot, because the way it gives off energy isn’t something our sensory organs are calibrated to detect. You can pick up an object with enough radiated energy to shred through the material in your cells and kill you, and have it feel like room temperature. That’s what happened to the firemen at Chernobyl.

In this loose, energy-centric sense, radioactive materials have a high temperature, since they’re giving off lots of energy. That’s what makes them dangerous. At the same time, though, you could get right up next to highly enriched nuclear materials (and under no circumstances should you ever try this) without feeling warm. You will feel something eventually, as your cells react to being ripped apart by a hail of neutrons and other subatomic particles. You might feel heat as your cells become irradiated and give off their own energy, but not from the nuclear materials themselves. Also, if this happens, it’s too late to get help. So temperature isn’t necessarily what we think it is.

Space is another good example. We call space “cold” because water freezes when exposed to it, and because exposed skin would steadily lose its carefully hoarded energy with nothing around to give any back. But actually, space, at least within the solar system, has a very high temperature wherever it encounters particles, for the same reason as above. The sun is a massive ongoing thermonuclear explosion that makes even our largest atom bombs jealous. There is a great deal of energy flying around the empty space of the solar system at any given moment; it just doesn’t have many particles to give that energy to. This is why the top layer of the atmosphere, the thermosphere, has a very high temperature despite being totally inhospitable, and why astronauts are at increased cancer risk.

This confusion is why most scientists dealing with fields like chemistry, physics, or astronomy use the Kelvin scale. One increment on the Kelvin scale, one kelvin, is the same size as one degree Celsius. However, unlike Celsius, where zero is the freezing point of water, zero kelvins is known as Absolute Zero, a so-far theoretical temperature where there is no movement among the involved particles. This is harder to achieve than it sounds, for a variety of complicated quantum reasons, but consider that body temperature is 310 K, on a scale where one hundred is the entire difference between freezing and boiling. Some of our attempts so far to reach absolute zero have involved slowing down individual particles by suspending them in lasers, which has gotten us close, but those last tiny fractions of a degree are especially tricky.
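If it helps to see the relationship spelled out, here is a tiny sketch, assuming nothing beyond the standard 273.15-degree offset between the two scales:

```python
# Kelvin and Celsius use the same size increments; Kelvin just starts at absolute zero.
def celsius_to_kelvin(c):
    return c + 273.15

def kelvin_to_celsius(k):
    return k - 273.15

print(celsius_to_kelvin(0))    # water freezes:    273.15 K
print(celsius_to_kelvin(100))  # water boils:      373.15 K
print(celsius_to_kelvin(37))   # body temperature: ~310 K
```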

The Kelvin scale hasn’t really caught on in the same way as Celsius, perhaps because it’s an unwieldy three digits for anything in the normal human range. And given that the US is still dragging its feet about Celsius, which goes back to the French Revolution, not a lot of people are willing to die on that hill. But the Kelvin scale does underline an important distinction between temperature as a universal property of physics and the relative, subjective, inconsistent way that we’re used to feeling it in our bodies.

Which is perhaps interesting, but I said this was relevant to looking at the world, so how’s that true? Sure, it might be more scientifically rigorous, but that’s not always essential. If you’re a redneck farm boy about to jump into the crick, Newtonian gravity is enough without getting into quantum theory and spacetime distortion, right?
Well, we’re having a debate on this planet right now about something referred to as “climate change”, a term which has come to be favored over the previous term “global warming”. Advocates of doing nothing have pointed out that, despite all the graphs, it doesn’t feel noticeably warmer. Certainly, they argue, the weather hasn’t been warmer, at least not consistently, on a human timescale. How can we be worried about increased temperature if it’s not warmer?

And, as much as I suspect the people presenting these arguments to the public have ulterior motives, whether economic or political, it doesn’t feel especially warmer, and it’s hard to dispute that. Scientists, for their part, have pointed out that they’re examining the average temperature over a prolonged period, producing graphs which show the trend. They have gone to great lengths to explain the biggest culprit, the greenhouse effect, which fortunately does click nicely with our intuitive human understanding. Greenhouses make things warmer, neat. But not everyone follows what comes before and after that.

I think part of what’s missing is that scientists are assuming that everyone is working from the same physics-textbook understanding of temperature and energy. This is a recurring problem for academics and researchers, especially when the 24-hour news cycle (and the academic publicists that feed it) jumps the gun and snatches results from scientific publications without translating the jargon for the layman. If temperature is just how hot it feels, and global warming means it’s going to feel a couple degrees hotter outside, it’s hard to see how that gets us to doomsday predictions, or why it requires me to give up plastic bags and straws.

But as we’ve seen, temperature can be a lot more than just feeling hot and cold. You won’t feel hot if you’re exposed to radiation, and firing a laser at something seems like a bad way to freeze it. We are dealing with a scale that requires a more consistent rule than our normal human shortcuts. Despite being only a couple of degrees of temperature, the amount of energy we’re talking about here is massive. If we say the atmosphere is roughly 5×10^18 kilograms, and the amount of energy it takes to raise a kilogram of air one kelvin is about 1 kJ, then we’re looking at roughly 5,000,000,000,000,000,000 kilojoules per degree.

That’s a big number; what does it mean? Well, if my math is right, that’s roughly 1.2 million megatons of TNT. A megaton is a unit used to measure the explosive yield of strategic nuclear weapons. The nuclear bomb dropped on Nagasaki, the bigger of the two, was somewhere in the ballpark of 0.02 megatons. The largest bomb ever detonated, the Tsar Bomba, was 50 megatons. The total energy expenditure of all nuclear testing worldwide is estimated at about 510 megatons, or roughly 0.04% of the energy we’re introducing with each degree of climate change.

Humanity’s entire current nuclear arsenal is estimated somewhere in the ballpark of 14,000 bombs. This is very much a ballpark figure, since some countries are almost certainly bluffing about what weapons they do and don’t have, and how many. The majority of these, presumably, are cheaper, lower-yield tactical weapons. Some, on the other hand, will be over-the-top monstrosities like the Tsar Bomba. Let’s generously assume that these highs and lows average out to about one megaton apiece. Suppose we detonated all of those at once. I’m not saying we should do this; in fact, I’m going to go on record as saying we shouldn’t. But let’s suppose we do, releasing 14,000 megatons of raw, unadulterated atom-splitting power in a grand, civilization-ending bonanza. In that instant, we would have unleashed approximately one percent of the energy we are adding with each degree of climate change.
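For anyone who wants to check the arithmetic, here is the back-of-envelope version, using the round figures above (about 5×10^18 kg of atmosphere, roughly 1 kJ to warm a kilogram of air by one kelvin) plus the standard conversion of 4.184×10^15 joules per megaton of TNT. Everything here is a rough estimate, not a climate model.

```python
# Back-of-envelope comparison; all inputs are rough, round figures.
ATMOSPHERE_MASS_KG = 5e18        # approximate mass of Earth's atmosphere
SPECIFIC_HEAT_KJ_PER_KG_K = 1.0  # ~1 kJ to warm 1 kg of air by 1 kelvin
JOULES_PER_MEGATON = 4.184e15    # standard TNT-equivalent conversion

energy_per_kelvin_joules = ATMOSPHERE_MASS_KG * SPECIFIC_HEAT_KJ_PER_KG_K * 1e3
megatons_per_kelvin = energy_per_kelvin_joules / JOULES_PER_MEGATON

print(f"Warming the atmosphere by 1 K takes about {energy_per_kelvin_joules:.1e} J")
print(f"That is roughly {megatons_per_kelvin:,.0f} megatons of TNT")             # ~1.2 million
print(f"All nuclear testing, ~510 Mt: {510 / megatons_per_kelvin:.2%} of that")  # ~0.04%
print(f"A ~14,000 Mt arsenal: {14000 / megatons_per_kelvin:.1%} of that")        # ~1.2%
```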

This additional energy means more power for every hurricane, wildfire, flood, tornado, drought, blizzard, and weather system everywhere on earth. The additional energy is being absorbed by glaciers, which then have too much energy to remain frozen, and so are melting, raising sea levels. The chain of causation is complicated, and involves understanding phenomena which are highly specialized and counterintuitive to our experience from most of human existence. Yet when we examine all of the data, this is the pattern that seems to emerge. Whether or not we fully understand the patterns at work, this is the precarious situation in which our species finds itself.

College Tidbits

After returning from the wild woods of upstate, I find my house caught in the scramble of preparing for college classes. On the whole, I think I am in decent shape. But since it has been the only thing on my mind, here are some assorted pieces of advice which new college students may find useful; tidbits I wish I had known, or, in the cases where I did know them, wish I had been able to get through my thick skull earlier.

Get an umbrella
Sure, there are more important things to make sure you have before going to college. But most of those things are obvious: backpacks, laptops, writing instruments, and so on. No one talks about back to school umbrellas, though. Of the items I have added to my school bag, my collapsible umbrella is the most useful, least obvious. To explain its great use, I will appropriate a quote from one of my favorite pieces of literature:

Partly it has great practical value. You can open it up to scare off birds and small children; you can wield it like a nightstick in hand to hand combat; use it as a prop in a comedy sketch for an unannounced improv event on the quad; turn it inside out as an improvised parabolic dish to repair a satellite antenna; use it as an excuse to snuggle up next to a crush as you walk them through the rain to their next class; you can wave your umbrella in emergencies as a distress signal, and of course, keep yourself dry with it if it doesn’t seem too worn out.

More importantly, an umbrella has immense psychological value. For some reason, if a Prof discovers that a student has their umbrella with them, they will automatically assume that they are also in possession of a notebook, pencil, pen, tin of biscuits, water bottle, phone charger, map, ball of string, gnat spray, wet weather gear, homework assignment etc., etc. Furthermore, the Prof will then happily lend the student any of these or a dozen other items that the student might accidentally have “lost.” What the Prof will think is that any student who can walk the length and breadth of the campus, rough it, slum it, struggle against terrible odds, win through, and still know where their umbrella is, is clearly a force to be reckoned with.

Find out what programs your school uses, and get acquainted with them
The appropriate time to learn about the format your school requires for assignments is not the night before your essay is due. The time for that is now, before classes, or at least before you get bogged down in work. Figure out your school email account, and whether that comes with some kind of subscription to Microsoft or Google or whatever; if so, those are the programs you’ll be expected to use. Learn how to use them, in accordance with whatever style guide (probably MLA or APA) your school and departments prefer.

You can, of course, keep using a private email or service for non-school stuff. In fact, I recommend it, because sometimes school networks go down, and it can be difficult to figure out what’s happening if your only mode of communication is down. But don’t risk violating handbook or technology policies by using your personal accounts for what’s supposed to be school business. And if you’re in a group project, don’t be that one guy who insists on being contacted only through their personal favorite format despite everyone else using the official channels.

Try not to get swept up in future problems
Going into college, you are an adult now. You may still have the training wheels on, but the controls are in your hands. If you’re like me, this is exhilarating, but also immensely terrifying, because you’ve been under the impression this whole time that adults were supposed to know all the answers intuitively, and be put together, and you don’t feel like you meet those criteria. You’re suddenly in the driver’s seat, and you’re worried that you never got a license and don’t even know how not to crash. If this is you, I want you to take a deep breath. Then another. Get a cup of tea, treat yourself to a nice cookie. You can do that, after all, being an adult. True, it might be nutritionally inadvisable to have, say, a dozen cookies, but if that’s what you need, go ahead. You need only your own permission. Take a moment.

Despite the ease of analogies, adulthood isn’t like driving, at least not how I think of driving. There aren’t traffic laws, or cops to pull you over and take away your license. I mean, there are both of those things in the world at large, but bear with me. Adulthood isn’t about being responsible to others, though that’s certainly a feature. Adulthood is about being responsible as a whole, first and foremost to yourself. In college, you will be responsible for many things, from the trivial to the life altering. Your actions will have consequences. But with a few exceptions, you get to decide how those consequences affect you.

My college, at least, tried to impress upon freshmen the, let’s say, extreme advisability of following its plans by emphasizing the consequences of not doing so. But to me, it was the opposite of helpful, since hearing an outside voice tell me I need to be worried about something immediately plants the seeds of failure and doubt in my head. Instead, what helped me stay sane was realizing that I could walk away if I wanted. Sure, failing my classes would carry a price I would have to work out later. But it was my decision whether that price was worth it.

Talk to Your Professors
The other thing worth mentioning here is that you may find, once you prove your good faith and awesome potential, that many items you were led to believe were immutable pillars of the adult world… aren’t so immutable. Assignment requirements can be bent to accommodate a clever take. Grades on a test can be rounded up for a student that makes a good showing. Bureaucracy can, on occasion, be circumvented through a chat with the right person. Not always, but often enough that it’s worth making a good impression with staff and faculty. 

This is actually a good piece of life advice in general. I’ve heard from people in the working world that no one notices you’re coming in late if you come in bearing donuts, and I have every reason to believe this is true. I’ve brought cookies in to all of my classes and professors before exams, and so far, I’ve done quite well on all of them.

Year One Done

This week has been quite the ride for me. Even though I’m the best student I know, finals are stressful. Perhaps I am just particularly susceptible to pressure in this vein, but it seems like the mere institution of final exams is profoundly anxiety-inducing. Add in some medical issues and a healthy sprinkling of social drama, and what was supposed to be a slam dunk became a chaotic tumble across the finish line.

Thus ends my first full year of college. There are many lessons to unpack, and I expect I shall spend the next few weeks doing so. In the meantime, however, I have blocked off some much needed time for glorious nothingness, followed by the launch of my summer projects. 

In other news, I am proud of my brother. This week was his eighteenth birthday. By lucky coincidence, his birthday was also the day of a local budget referendum.

In the scheme of things, a plebiscite to ratify the issuing of bonds to finance a new air conditioning unit for the recreation center is far from the most important vote. The kind of people who wind up voting in such things tend to be either people obsessed with local politics or people who have committed to always voting, no matter the issue.

My brother and I voted. I did so because I see it as my civic and patriotic duty to cast my ballot for what I believe to be the greater good. And I am proud of him because he joined me in casting a ballot as a matter of pride, even though he didn’t have to, and even though it was a perfectly nice day out and he had other things to do.

Those are the headlines for this week, or at least the ones that I can get through at the moment. Focusing everything on getting out the other end of the semester has set back my writing and drained my mind of topics for a time. 

The Project Problem

Do you ever find yourself starting something on a lark, and then the more you work on it, the bigger it gets, until suddenly it’s this whole big thing that you don’t really know how to work with? And then you’re left with the choice of either taking a hatchet to your work to bring it down to a manageable size, suturing up the wounds to make a finished but far less grand final product, or letting it keep growing until eventually it becomes totally unsustainable. I don’t know whether this happens to other people, but it happens to me constantly. Most of my projects die this way, either unable to survive the hatcheting process, or with me not having the heart to put them out of their misery.

This includes everything from weekend activities to final class projects. Reining in this tendency to overcomplicate has been a serious challenge for me academically. For instance, I will get an idea for a research paper topic, dive into the literature, and come back with a twenty page essay and four pages of citations, when the assignment calls for seven pages maximum and five cited sources. Or I will be assigned to write something in a foreign language for a class presentation, and will end up writing something which, while perfectly correct, uses vocabulary several semesters beyond the rest of the class.

Arguably this single-mindedness and overachievement is a strength. After all, I’ve never known someone to fail an assignment because they overdid their project. By contrast, I know plenty of people who have failed assignments that weren’t long enough, or where it was clear the student didn’t care. On the other hand, a seeming inability to do the easy thing and go from point A to point B on projects sounds like the kind of flaw whose lesson eventually has to be learned through hard failure and bitter tears. Overdoing is not always beneficial, and it is certainly not always efficient.

In any case, I seem to possess, if nothing else, a striking ability to make more work for myself. This is what has prevented me from posting over the past weeks- the projects which I began with good intentions and high ambitions are coming due, and it is crunch time to finish the necessary legwork to meet initial promises. Every moment of available time from now until the end of finals must be put towards these pursuits if I am to clinch the A that I know I deserve. My entire media consumption is being geared towards research and study; each ounce of my wordsmithing retooled towards finishing and refining papers and presentations. 

To be fair, I did plan all of this, more or less. I mean, I didn’t plan to put myself up against the wall. I never do. But I did choose ambitious topics. I knew I was signing myself up to do more work than was probably required, because in addition to getting an A, I wanted, and still want, to be working on something that I care about, rather than hammering away at busywork. After the dumpster fire that was my high school experience, I decided I would rather be proud and excited about something than get full marks. But contrary to the popular myth, loving your work does not obviate the work itself. Which leaves me where I am now, frantically scrambling to make good on my projects. 

So that’s what’s happened, and why I haven’t posted. I started working on my final projects more than a month ago, and the work got away from me and ate up my time. I’d love to say that I’ll be getting back to posting immediately, but until finals are over and I catch up on rest I’ve been putting off, I’m not going to make any promises. I will be trying to post, though. And I expect that once I am no longer directing every waking moment towards study, that I shall have more to say. 

Fully Automated Luxury Disk Jockeys

Here’s an interesting observation with regard to automation- with the exception of purely atmospheric concerns, we have basically automated the disk jockey, that is, the DJ, out of a job. Pandora’s Music Genome Project, Google’s music bots, Apple’s Genius playlists, and whatever system Spotify uses are close enough for most everyday purposes.

Case in point- my university got it into its head that students were becoming overwhelmed with finals. This is obviously not a major revelation, but student stress has become something of a moral and political hot issue of late. The purported reason for this is the alarmingly high rate of suicides among college students- something which other universities have started to act on. Universities in Canada, for instance, have added more school breaks throughout the year during the weeks when suicide rates peak. Other schools have taken more pragmatic measures, like installing suicide nets under tall bridges.

Of course, the unspoken reason for this sudden focus on mental health is that the national political administration has framed school shootings as a matter of mental health, rather than, say, making guns harder to get. My university, like my hometown, lies in the shadow of Newtown, and plenty of students here lost people firsthand. Most of us went to schools that went on lockdown that day. The university itself has had two false alarms, and police teams clad in armor and carrying machine guns already patrol the campus regularly. So naturally, the university is throwing money at mental health initiatives.

Rather than do something novel, like staggering exam schedules, conducting routine audits to prevent teachers from creating too much work for students, or even abolishing final exams altogether, as has been occasionally proposed, the powers that be settled on the “Stress Free Finals Week” Initiative, whereby the school adds more events to the exam week schedule. I’m not sure how adding more social events to a time when students are already pressed to cram is supposed to help, but it’s what they did.

As a commuter and part time student, I see most of this only on the periphery. But when, through a series of events, I wound up on campus without anything to do or a ride home, I decided I may as well drop by. After all, they were allegedly offering free snacks if I could find the location, and being a college student, even though I had just gorged myself on holiday cookies and donuts at the Social Sciences Department Holiday Party, I was already slightly peckish. They were also advertising music.

I got there to find the mid-afternoon equivalent of a continental breakfast- chips, popcorn, donuts, and cookies. Perhaps I had expected, what with all the research on the gut-brain connection, that an event purporting to focus on mental health would have a better selection. But no matter. There was a place to sit away from the sleet outside, and free snacks. The advertised DJ was up front blasting music of questionable taste at a borderline objectionable volume, which is to say, normal for a college campus.

Except the DJ wasn’t actually playing the music. He didn’t interact with any of the equipment on the table during the time I watched, and on several occasions he surrendered any pretense of actually working by leaving the table to raid the snacks and then sitting down at another table to eat while the music handled itself. No one else seemed to think this strange, but it struck me that, for all I know, he might have just set up a YouTube Music playlist, let the thing run, and gotten paid for it. Heck, he wouldn’t even have to select the playlist manually- bots can do that part too.

There are two takeaway observations here. The first is that computer automation is happening here and now; the first wave is already hitting. I feel it worth noting that this isn’t just manual labor being made obsolete by mechanized muscle. While it might not exactly be white collar, DJing is a thinking job. Sure, it doesn’t take a genius to hit shuffle on iTunes, but actually selecting songs that match a mood and atmosphere for a given event, following up with an appropriate song, and knowing to match the differing volumes on recordings with the desired speaker volume takes at least some level of heuristic thinking. We’ve made it fairly low-hanging fruit for bots in the way we already label songs by genre, but the fact that we’re already here should be sobering for people concerned about automation-driven mass unemployment.

The second takeaway is a sort of caveat to the first, namely: even if this guy’s job was automated, he still got paid. An argument can be made that this is a function of bureaucratic inefficiency and an enterprising fellow playing the system in order to get paid to be lazy. And while this would be a fair observation, there’s another interpretation that I think is yet more interesting. Because the way I see it, it’s not like the university misunderstood what they were buying. They advertised having a DJ. They could have found any idiot to hook up an iPhone speaker and press shuffle, but instead they hired someone.

There was a cartoon a while back that supposed that in the future, rather than automation causing a total revolution in man’s relationship with work, we would simply start to put more value on more obscure and esoteric commodities. The example provided was a computer running on “artisanal bits” – that is, a Turing-complete setup of humans holding up signs for ones and zeroes. The implication is that increasing wealth inequality will drive more artificial distinctions in patterns of consumption. Rich people become pickier the richer they get, and since they’re rich, they can drive demand for more niche products to provide work for the masses.

This hypothesis would fit with current observations. Not only would it explain why institutions are still willing to hire human DJs in the age of music bots, but it would explain why trends like organic foods, fair trade textiles, and so on seem to be gaining economic ground. It’s an interesting counterargument to the notion that we’re shaping up for mass unemployment.

I still think this is a horribly optimistic outlook. After all, if the owning minority can’t be bothered to pay living wages in the process of making their wealth in the first place, why would they feel a need to employ a substantial number of people after the fact? There’s also a fundamental limit on how much a single person can consume*, and the number of people who can be gainfully employed in service of a single person’s whims has a functional limit, after which employment stops being accurately described as work, and is more like private welfare. Which makes this not so much a repudiation of the problem of automation-induced mass unemployment as another possible solution. Still, it’s a thing to keep in mind, and for me, a good reminder to pay attention to what’s actually happening around me as well as what models and experts say should happen.

*Technically, this isn’t true. A person with infinite money could easily spend infinite money by purchasing items for which the prices are artificially inflated through imposed scarcity, speculative bubbles, and other economic buzzwords. But these reflect peculiarities in the market, and not the number of people involved in their production, or the quality of their work. 

Operation Treetopper

The last few weeks have been dominated by Operation Treetopper, my efforts to ensure that I close the year out successfully. Operation Treetopper started basically the moment that Operation Marketplace, my plan to avoid mid-semester complacency and ensure I was squared away when picking classes for next semester, ended. Operation Marketplace, in turn, followed Operation Overture, which covered the first few weeks of college classes. The point is that Treetopper has been the culmination of slightly more than seven months of effort and planning.

The two primary objectives of Operation Treetopper were my German final exam and my Sociology final paper. The German exam was fairly straightforward. I do my best to avoid serious studying for tests, because at least in my experience, it tends to do me more harm than good. Instead, I focus on learning any material that I haven’t already learned, because even though I miraculously avoided missing class, there were a handful of occasions when circumstances conspired to prevent me from learning the material, and I had to bluff my way through assignments.

Once that’s done, I make sure I know my German by putting on the top hits list for German-speaking countries, and reading Der Spiegel, which translates as The Mirror, and is a major newspaper akin to the Times. I know that I’m making progress when I find the music distracting me from reading. I only ever find the music distracting when the words I’m hearing cross wires with words I’m reading or writing, so when the foreign songs cross that threshold, I know my brain has absorbed enough German that it considers them to be part of a language rather than just words. I don’t know if everyone’s brain works that way, or if it’s just mine, but this is how I pull off learning new languages without studying.

My sociology paper was another story entirely.

In retrospect, I can safely say that I overdid it for my sociology final paper. I mean that in both a positive and negative way. I picked a topic that I was really passionate about, which proved fertile for both research and commentary, and in the process created far more work for myself than was necessary or even prudent.

With the benefit of hindsight, and having gotten a few glimpses of what other classmates eventually settled on, there are things I would’ve done differently. Off the bat, I would’ve started earlier. I wouldn’t say I procrastinated, because I gave myself more time than I thought I’d need. Except I severely underestimated the amount of time and effort this project would require, and the number of documents, pages, spreadsheets, tables, and graphs it would generate for me to juggle and cull down into a final paper.

Case in point: the assignment was to write four or five pages, and I assumed that I would have to fall back on my old authorial filibustering knack to fill space. In reality, I wound up barely managing to butcher the final product down to five pages of writing, plus an additional eighteen pages of tables, graphs, data, and references that had to be put in an annex because I couldn’t fit them in the main paper, but needed them to make my point.

Also with the full benefit of hindsight, I would have been well served to take the professor up on his offer to meet and discuss refining and operationalizing topics for my survey. I didn’t do this because I thought my topic was already sufficiently niche that it didn’t need refining. I was wrong about this, partly because, as previously mentioned, I vastly underestimated the space it would take to tackle my topic, but also partly because I expected that I would be working on well-trodden ground, scientifically, when in fact, near as I could tell from the literature review, my investigation proved to be fairly novel.

The other reason I didn’t want to refine my topic was because I was excited about it. I knew that I wanted to tackle medical identification and the factors going into adherence essentially since the moment I read the syllabus, and saw that we could do a survey for our paper. I knew it was topical to the course, and I knew that my background with the topic would enable me to write a paper that would earn me a good grade pretty much regardless of the details, but more than that, I wanted to tackle the topic. I was excited to use my newfound skills to tackle a real problem from my life that profoundly affects people I care about. 

This is, at least for me, the point of education. It’s why I go to class, even when I don’t feel well, or have better things to do. I want to learn, so that I can fix the world. And in my experience, when an exciting topic like this appears fully-formed, those are usually the best projects. Because whether or not they wind up getting the best score, they make the work of an assignment fun, and they’re good opportunities to learn something.

Moreover, this enthusiasm shows in the end product. There’s sometimes a trade off if following the thing you’re interested in means bending the criteria of the assignment (see again: twenty-three pages instead of five), but I’ve always found this to be a worthwhile trade off if it means I can give my best effort on something I care about. And I think most teachers feel the same way.

I don’t regret my choice of topic. And understanding that I made them based on what I knew at the time, I don’t regret the decisions I made about my paper. My biggest, and really only, complaint with the end product is that I overestimated how long five pages was, and had to cut down my writing when I would’ve quite enjoyed expounding further on the results.

Operation Treetopper has accomplished something that I wasn’t sure was possible- I have been able to declare victory for the end of the semester without any lingering make-up assignments, or uncertainty about whether I’m really done. It’s anticlimactic, and a little unreal to me. But it’s the best Christmas gift I could have wished for, and it gives me hope that next year can be even better.

A Lesson in Credulity

Last week I made a claim that, on review, might be untrue. This was bound to happen sooner or later. I do research these posts, but except for the posts where I actually include a bibliography, I’m not fact checking every statement I make. 


One of the dangers of being smart, of being told that you’re smart, and of repeatedly getting good grades or otherwise being vindicated on matters of intelligence, is that it can lead to a sense of complacency. I’m usually right, I think to myself, and when I think I know a fact, it’s often true, so unless I have some reason to suspect I’m wrong, I don’t generally check. For example, take the statement: there are more people who voted for Republicans in the last election living to the south of me than to the north.

I am almost certain this is true, even without checking. I would probably bet money on it. I live north of New York City, so there aren’t even that many people north of me, let alone Republican voters. It’s objectively possible that I’m wrong. I might be missing some piece of information, like a large population of absentee Republicans in Canada, or the state of Alaska. Or I might simply be mistaken. Maybe the map I’m picturing in my head misrepresents how far north I am compared to other northern border states like North Dakota, Michigan, and Wisconsin. But I’m pretty sure I’m still right here, and until I started second guessing myself for the sake of argument, I would have confidently asserted that statement as fact, and even staked a sizable sum on it.

Last week I made the following claim: Plenty of studies in the medical field have exalted medical identification as a simple, cost-effective means of promoting patient safety. 

I figured that this had to be true. After all, doctors recommend wearing medical identification almost universally. It’s one of those things, like brushing your teeth or eating your vegetables, that’s such common advice that we assume it to be proven truth. After all, if there wasn’t some compelling study to show it to be worthwhile, why would doctors continue to breathe down the necks of patients? Why would patients themselves put up with it? Why would insurance companies, which are some of the most ruthlessly skeptical entities in existence, especially when it comes to paying for preventative measures, shell out for medical identification unless it was already demonstrated to be a good deal in the long run?

Turns out I may have overestimated science and economics here. Because in writing my paper, I searched for that definitive overarching study or meta-analysis that conclusively proved that medical identification had a measurable positive impact. I searched broadly on Google, and also through the EBSCO search engine, which my trusty research librarian told me was the best agglomeration of scientific and academic literature tuition can buy. I went through papers from NIH immunohematology researchers to the Army Medical Corps; from clinics in the Canadian high arctic to the developing regions of Southeast Asia. I read through translations of papers originally published in French and Chinese, in the most prestigious journals of their home countries. And I found no conclusive answers.

There was plenty of circumstantial evidence. Every paper I found supported the use of medical identification. Most papers I found were actually about other issues, and merely alluded to medical identification by describing how they used it in their own protocols. In most clinics, it’s now an automatic part of the checklist to refer newly diagnosed patients to wear medical identification, almost always through the MedicAlert Foundation.

The two papers I found that addressed the issue head on were a Canadian study about children wearing MedicAlert bracelets being bullied, and a paper in an emergency services journal about differing standards in medical identification. Both of these studies, though, seemed to skirt around the quantifiable efficacy of medical identification and were more interested in the tangential effects.

There was a third paper that dealt with the issue more directly as well, but there was something fishy about it. The title was “MedicAlert: Speaking for Patients When They Can’t”, and the language and graphics were suspiciously similar to the advertising used by the MedicAlert Foundation website. By the time I had gotten to this point, I was already running late with my paper. EBSCO listed the paper as “peer reviewed”, which my trusty research librarian said meant it was credible (or at least, credible enough), and it basically said exactly the things that I needed a source for, so I included it in my bibliography. But looking back, I’m worried that I’ve fallen into the Citogenesis trap, just this time with a private entity rather than Wikipedia.

The conspiracy theorist in me wants to jump to the conclusion that I’ve uncovered a massive ruse; that the MedicAlert Foundation has created and perpetuated a myth about the efficacy of their services, and the sheeple of the medical-industrial complex are unwitting collaborators. Something something database with our medical records something something hail hydra. This pretty blatantly fails Occam’s Razor, so I’m inclined to write it off. The most likely scenario here is that there is a study lying around that I simply missed in my search, and it’s so old and foundational that later research has just accepted it as common knowledge. Or maybe it was buried deep in the bibliographies of other papers I read, and I just missed it.

Still, the fact that I didn’t find this study when explicitly looking for it raises questions. Which leads me to the next most likely scenario: I have found a rare spot of massive oversight in the medical scientific community. After all, the idea that wearing medical identification is helpful in an emergency situation is common sense, bordering on self-evident. And there’s no shortage of anecdotes from paramedics and ER doctors that medical identification can help save lives. Even in the literature, while I can’t find an overview, there are several individual case studies. It’s not difficult to imagine that doctors have simply taken medical identification as a logical given, and gone ahead and implemented it into their protocols.

In that case, it would make sense that MedicAlert would jump on the bandwagon. If anything, having a single standard makes the process more rigorous. I’m a little skeptical that insurance companies just went along with it; it’s not like common sense has ever stopped them from penny-pinching before. But who knows, maybe this is the one time they took doctors at their word. Maybe, through some common consensus, this has just become a massive blind spot for research. After all, I only noticed it when I was looking into something tangential to it. 

So where does this leave us? If the data is really out there somewhere, then the only problem is that I need a better search engine. If this is part of a blind spot, if the research has never been done and everyone has just accepted it as common sense, then it needs to be put in the queue for an overarching study. Not that I expect such a study to find no correlation between wearing medical identification and better health outcomes. After all, it’s common sense. But we can do better than just acting on common sense and gut instincts. We have to do better if we want to advance as a species.

The other reason we need hard, verifiable numbers on efficacy, besides the possibility that we might discover our assumptions were wrong, is to have a way to justify the trade off. My whole paper has been about trying to pin down the trade off a person makes when deciding to wear medical identification, in terms of stigma, self perception, and comfort. We often brush this off as being immaterial. And maybe it is. Maybe, next to an overwhelming consensus of evidence showing a large and measurable positive impact on health outcomes, some minor discomfort wearing a bracelet for life is easily outweighed.

Then again, what if the positive impact is fairly minor? If the statistical difference amounts only to, let’s say, a few extra hours of life expectancy, is that worth a lifetime of having everyone know that you’re disabled wherever you go? People I know would disagree on this matter. But until we can say definitively what the medical impact is on the one hand, we can’t weigh it against the social impact on the other. We can’t have a real debate based on folk wisdom versus anecdotes.

On Hippocratic Oaths

I’ve been thinking about the Hippocratic Oath this week. This came up while wandering around campus during downtime, when I encountered a mural showing a group of nurses posing heroically, amid a collage of vaguely related items, between old-timey nurse recruitment posters. In the background, the words of the Hippocratic Oath were printed behind the larger-than-life figures. I imagine they took cues from military posters that occasionally do similar things with oaths of enlistment.

I took special note of this, because strictly speaking, the Hippocratic Oath isn’t meant for nurses. It could arguably apply to paramedics or EMTs, since, historically at least, a paramedic is a watered down doctor, the first ambulances being an extension of military hospitals and hence under the aegis of surgeons and doctors rather than nurses. But that kind of pedantic argument not only ignores actual modern day training requirements, since in most jurisdictions the requirements for nurses are more stringent than those for EMTs and at least as stringent as those for paramedics, but shortchanges nurses, a group to whom I owe enormous gratitude and for whom I hold immense respect.

Besides which, whether or not the Hippocratic Oath – or rather its modern successors, since the oath recorded by Hippocrates himself is recognized as outdated and has been almost universally superseded – is necessarily binding on nurses, it is hard to argue that the basic principles aren’t applicable. Whether or not modern nurses have at their disposal the same curative tools as their doctorate-holding counterparts, they still play an enormous role in patient outcomes. In fact, by some scientific estimates, the quality of nursing staff may actually matter more than the actions undertaken by doctors.

Moreover, all of the ethical considerations still apply. Perhaps most obviously, respect for patients and patient confidentiality. After all, how politely the doctor treats you in their ten minutes of rounds isn’t going to outweigh how your direct overseers treat you for the rest of the day. And as far as confidentiality goes, who are you more worried will gossip: the nerd who reads your charts and writes out your prescription, or the nurse who’s in your room, undressing you to inject the drugs into the subcutaneous tissue where the sun doesn’t shine?

So I don’t actually mind if nurses are taking the Hippocratic Oath, whether or not it historically applies. But that’s not why it’s been rattling around my mind the last week. 

See, my final paper in sociology is approaching. Actually, it’s been approaching; at this point the paper is waiting impatiently at the door to be let in. My present thinking is that I will follow the suggestion laid down in the syllabus and create a survey for my paper. My current topic regards medical identification. Plenty of studies in the medical field have exalted medical identification as a simple, cost-effective means of promoting patient safety. But compelling people to wear something that identifies them as being part of a historically oppressed minority group has serious implications that I think are being overlooked when we lump people who refuse to wear medical identification into the same group as people who refuse to get vaccinated, or take prescribed medication.

What I want to find out in my survey is why people who don’t wear medical identification choose not to. But to really prove (or disprove, as the case may be, since a proper scientific approach demands that possibility) my point, I need to get at the matters at the heart of this issue: medical conditions and minority status. These are sensitive topics, and consequently gathering data on them means collecting potentially sensitive information.

This leaves me in an interesting position. The fact that I am doing this for a class at an accredited academic institution gives me credibility, if more so with the lay public than among those who know enough about modern science to realize that I have no real earned credentials. But the point remains that if I posted online that I was conducting a survey for my institution, which falls within a stretched interpretation of the truth, I could probably get many people to disclose otherwise confidential information to me.

I have never taken an oath, and I have essentially no oversight in the execution of this survey, other than the bare minimum privacy safeguards required by the FCC in my use of the internet, which I can satisfy through a simple checkbox in the United States. If I were so inclined, I could take this information entrusted to me and either sell it or use it for personal gain. I couldn’t deliberately target individual subjects, more because that would be criminal harassment than because of any breach of trust. But I might be able to get away with posting it online and letting the internet wreak what havoc it will. This would be grossly unethical and bordering on illegal, but I could probably get away with it.

I would never do that, of course. Besides being wrong on so many different counts, including betraying the trust of my friends, my community, and my university, it would undermine trust in the academic and scientific communities at a time when they have come under political attack by those who have a vested interest in discrediting truth. And as a person waiting on a breakthrough cure that will allow me to once again be a fully functional human being, I have a vested interest in supporting these institutions. But I could do it, without breaking any laws, or oaths.

Would an oath stop me? If, at the beginning of my sociology class, I had stood alongside my fellow students, with my hand on the Bible I received in scripture class, in which I have sought comfort and wisdom in dark hours, and sworn an oath like the Hippocratic one or its modern equivalents to adhere to ethical best practices and keep to my responsibilities as a student and scientist, albeit of sociology rather than one of the more sciency sciences, would that stop me if I had already decided to sell out my friends?

I actually can’t say with confidence. I’m inclined to say it would, but this is coming from the version of me that wouldn’t do that anyway. The version of me that would cross that line is probably closer to my early-teenage self, whom my modern self has come to regard with a mixture of shame and contempt, who essentially believed that promises were made to be broken. I can’t say for sure what this version of myself would have done. He shared a lot of my respect for science and protocol, and there’s a chance he might’ve been really into the whole oath vibe. So it could’ve worked. On the other hand, if he thought he would’ve gained more than he had to lose, I can imagine how he would’ve justified it to himself.

Of course, the question of the Hippocratic oath isn’t really about the individual that takes it, so much as it is about the society around it. It’s not even so much about how the society enforces oaths and punishes oath-breakers. With the exception of perjury, we’ve kind of moved away from Greco-Roman style sacred blood oaths. Adultery and divorce, for instance, are both oath-breaking, but apart from the occasional tut-tut, as a society we’ve more or less just agreed to let it slide. Perhaps as a consequence of longer and more diverse lives, we don’t really care about oaths.

Perjury is another interesting case, though. Because contrary to the occasionally held belief, the crime of perjury isn’t actually affected by whether the lie in question is about some other crime. If you’re on the stand for another charge of which you’re innocent, and your alibi is being at Steak Shack, but you say you were at Veggie Villa, that’s exactly as much perjury as if you had been at the scene of the crime and lied about that. This is because sworn testimony is treated as evidence the court has to be able to rely on. The crime of perjury isn’t about trying to get out of being punished. It’s about the integrity of the system. That’s why there’s an oath, and why that oath is taken seriously.

The revival of the Hippocratic Oath as an essential part of the culture of medicine came after World War II, at least partially in response to the conclusion of the Nuremberg Trials and revelations about the Holocaust. Particularly horrifying was how Nazi doctors had been involved in the process, both directly, in the form of unethical human experimentation, and in providing medical expertise to ensure that the apparatus of extermination was as efficient as possible. The Red Cross was particularly alarmed- here were people who had dedicated their lives to an understanding of the human condition, and had either sacrificed all sense of morality in the interest of satiating base curiosity, or had actively taken the tools of human progress to inflict destruction in service of an evil end.

Doctors were, and are, protected under the Geneva Convention. Despite what Hollywood and video games suggest, shooting a medic wearing a medical symbol, even if they are coming off a landing craft towards your country, is a war crime. As a society, we give them enormous power, with the expectation that they will use that power and their knowledge and skills to help us. This isn’t just some set of privileges we give doctors because they’re smart, though; that trust is essential to their job. Doctors can’t perform surgery if they aren’t trusted with knives, and we can’t eradicate polio if no one is willing to be inoculated.

The first of the modern wave of revisions of the Hippocratic Oath to make it relevant and appropriate for today came shortly after World War II, with the World Medical Association’s Declaration of Geneva. The goal was twofold. First: establish trust in medical professionals by setting down a clear, overriding set of basic ethical principles that can be distilled into a simple oath, so that it can be understood by everyone. Second: make this oath not only universal within the field, but culturally ubiquitous, so as to make it effectively self-enforcing.

It’s hard to say whether this gambit has worked. I’m not sure how you’d design a study to test it. But my gut feeling is that most people trust their own doctors, certainly more than, say, pharmacologists, meteorologists, or economists, at least partially because of the idea of the Hippocratic Oath. The general public understands that doctors are bound by an oath of ethical principles, and this creates trust. It also means that stories about individual incidents of malpractice or ethics breaches tend to be attributed to sole bad actors, rather than large scale conspiracies. After all, there was an oath, and they broke it; clearly it’s on that person, not the people that came up with the oath.

Other fields, of course, have their own ethical standards. And since, in most places, funding for experiments is contingent on approval from an ethics board, they’re reasonably well enforced. A rogue astrophysicist, for instance, would find themselves hard pressed to find the cash on their own to unleash their dark matter particle accelerator, or whatever, if they aren’t getting funding to pay for the electricity. This is arguably a more fail-safe model than the medical field, where, with the exception of big experimental projects, ethical reviews mostly happen after something goes wrong.

But if you ask people around the world to rate the trustworthiness of both physicians and astrophysicists, I’d wager a decent sum that more people will say they trust the medical doctor more. It’s not because the ethical review infrastructure keeps doctors better in check; it’s not because doctors are any better educated in their field; and it’s certainly not anything about the field itself that makes medicine more consistent or less error prone. It’s because medical doctors have an oath. And whether or not we treat oaths as a big deal these days, they make a clear and understandable line in the sand.

I don’t know whether other sciences need their own oath. In terms of reducing ethical breaches, I doubt it would have a serious impact. But it might help with the public trust and relatability problems that the scientific community seems to be suffering. If there were an oath that made it apparent how the language of scientists, unlike that of pundits, is seldom speculative, but always couched in facts; how scientists almost never defend their work even when they believe in it, preferring to let the data speak for itself; and how the best scientists already hold themselves to an inhumanly rigid standard of ethics and impartiality in their work, I think it could go a ways towards improving appreciation of science, and our discourse as a whole.

Learning Abilities

If I have a special talent, it is that I am very good at learning a lot of things quickly. This isn’t the same thing as being a fast learner; I’m not a fast learner. If something doesn’t click the first time I’m exposed to it, there’s a very good chance it’s going to take me a long time to wrap my head around it. I suppose that makes me lucky, then, that most things tend to click. My real talent is being able to work with lots of information in a setting where everything is new, rapidly putting together connected pieces in order to deduce the underlying patterns.

I realized I had this talent in high school, where it served the purpose of helping me bluff my way through classes in which I had no business participating. The most egregious case was English class, where class participation counted for a disproportionate percentage of my grade, and my chronic illnesses meant I frequently arrived back just as the class had finished reading a book of which I had never received a copy. On many occasions, I earned points by building off or reflecting upon points raised by other students. On two occasions, I wrote essays about books I had never held, much less read. I got A’s on both essays, and never scored below an 87% in English as a whole (and only that low because the teacher counted two missed exams as zeros rather than allowing me to retake them).

Some friends of mine have called this cheating. I disagree. I never claimed to have read the books in question. On the contrary, on the occasions when I mentioned to my teachers that I had never received a copy, I was told simply to try my best to keep up with the class while they tracked down an extra one. So the teachers were aware, or should have been aware, that I was talking off the cuff. And I never consulted some other source, like SparkNotes, that wretched hive of plagiarist scoundrels and academic villainy.

In any case, I have found this talent to be most useful when diving into a new area. I may not be able to become an expert faster than anyone else, but I can usually string enough information together to sound like I know that of which I speak, and to ensure that my questions are insightful and topical, befitting an enlightened discussion, rather than shallow and obvious ones betraying a fresh initiate to the field. This means that I am, perhaps ironically, best in my element when I am furthest behind. I learn faster by throwing myself into the deep end of something I know nothing about than by reviewing material I mostly know.

Secretly, I suspect this is not actually a unique talent. I think most, or at least many, people learn effectively this way. But whether through a school system designed on a model intended more to promote martial regimentation than intellectual striving, or a culture that punishes failure far more sharply than it incentivizes the entrepreneurial experimentation necessary for personal academic success, we have taught ourselves to avoid this kind of behavior. Either way, whether this talent is mine alone or I have merely been the first to recognize that the emperor has, in fact, no clothes, it places me in an unusual situation.

The problem comes when I am called upon to follow up on initial successes. Usually this is a moot point in practice, because this is precisely where I get sick, miss class, and wind up behind again, where I can capitalize on my skill set and come rocketing back in the nick of time. But this year, with a few exceptions, I have been healthy, or at least healthy enough to keep up. It turns out that when you follow a course at the intended pace of one week per week, instead of missing months in a febrile delirium and tearing through the textbook in the space of a frantic fortnight, things are, for the most part, manageable.

This is a novel, if not inherently difficult, problem for me: learning at an ordinary pace instead of in a crash course. It’s the informational difference between a week-long car trip and an overnight flight. You’d think that learning in such an environment, with one new thing among eleven things I already know, would be easier than taking in twelve new things. But I find that this isn’t necessarily true. I’m good at taking in information, but rubbish at prioritizing it.

My Time Management Problem

I have issues with time management. That sentence is ambiguous, so let me clarify: my issue isn’t with the management of my own time. Sure, I have plenty of flaws in that field, but I think I make it work most of the time, and am reasonably happy with my situation in that respect. I mean to say that I take issue with the field of time management; with the idea that through a combination of log keeping, filling in schedules, rigid prioritization, and large volumes of willpower, it is possible to reclaim every moment of one’s existence.

The problems with this line of thinking should be readily apparent. Humans control only a fraction of the circumstances that affect their time, and individuals control even less. Traffic, weather, infrastructure failure, logistical issues, and even acts of God can throw the best-laid plans into chaos. In most cases, it is not even possible to know which factors will present a challenge. For example, even if I have an inkling that traffic will be a problem, I cannot know with certainty what the weather conditions will be. A plan that does not factor in weather risks being unraveled by a snowstorm, while a plan that does so needlessly is inefficient and, hence, redundant.

But plenty of time-management moderates acknowledge this, and so I’m willing to let it slide for the sake of argument. My problem with these people is that they tend to assume everyone has a routine, or at least, that their needs and tasks are predictable and consistent. It is also assumed, usually not even aloud, but by implication, that one’s abilities are predictable and consistent. This gets my goat, because it’s not true, certainly not in my case.

My insistence on avoiding schedules wherever possible, and on resorting to prioritized checklists rather than set times when tasks do need to get done, is not a decision made for my own satisfaction, but an acknowledgement of a reality over which I have no control. The reality is that my medical condition changes on a minute-to-minute basis, such that the most advanced predictive algorithms can only make guesses ranging half an hour or so into the future, and only with flawless biometric data coming in live. This is a very recent improvement over the previous status quo, whereby people with similar conditions were known, from time to time, to quite simply drop dead without any warning whatsoever.

I understand that this is a fairly difficult concept to internalize, so let me provide a slightly more tangible example. Suppose that every fifteen minutes you exist, whatever you’re doing, whether awake or asleep, a twenty-sided die is rolled. You can see the result of this die so long as you remember to look at it, but it won’t do anything to inform you of its result. If, at any time, the result of the roll is a one, you have to, let’s say, do jumping jacks for ten minutes while singing the alphabet backwards. If at any point you fail to do so, then after ten minutes your vision will get progressively blurrier until you become legally blind. Some time after that, let’s say thirty minutes after the initial roll, if you haven’t finished your jumping jack alphabet routine, your heart will stop.

Now, one in twenty isn’t a lot. At any given moment, you’re more likely than not to get away with doing nothing. But this happens every fifteen minutes of every day. That’s ninety-six times a day. If the one in twenty holds true, odds are you’ll spend somewhere in the ballpark of fifty minutes of each day dealing with this issue. You won’t know when. You might have to wake up in the middle of the night to do jumping jacks, or make a fool of yourself in front of friends or colleagues to prevent your heart from stopping. You don’t even know with any certainty whether you’ll have to spend fifty minutes, several hours, or no time at all on a given day.
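
For the numerically inclined, here’s where that ballpark comes from. The little Python sketch below is only an illustration of the hypothetical above, not anything real: a twenty-sided die rolled every fifteen minutes, with a roll of one costing ten minutes of jumping jacks. Every name in it is made up for this example.

    import random

    # Numbers taken straight from the hypothetical: a twenty-sided die is
    # rolled every 15 minutes, and a roll of 1 costs 10 minutes of jumping jacks.
    ROLLS_PER_DAY = 24 * 60 // 15      # 96 rolls a day
    MINUTES_PER_EVENT = 10

    def minutes_lost_in_a_day() -> int:
        """Simulate one day of rolls and return the minutes spent on the routine."""
        bad_rolls = sum(1 for _ in range(ROLLS_PER_DAY) if random.randint(1, 20) == 1)
        return bad_rolls * MINUTES_PER_EVENT

    days = 10_000
    simulated = sum(minutes_lost_in_a_day() for _ in range(days)) / days
    expected = ROLLS_PER_DAY * (1 / 20) * MINUTES_PER_EVENT
    print(f"expected:  {expected:.0f} minutes per day")    # 48 minutes
    print(f"simulated: {simulated:.1f} minutes per day")   # hovers around 48

The expected value works out to 96 × (1/20) × 10 = 48 minutes, which is where the fifty-minute ballpark comes from; any individual day, of course, can land anywhere from zero minutes to far more than that.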

Now try to imagine what this does to a schedule. Obviously, it rules out a very tight regimen that makes use of every minute, because you need to keep time available in case you wind up doing jumping jacks for hours. But more than that, it makes even light schedules difficult to follow. Even if you have only one thing on your agenda, if that one thing happens to fall on the moment you need to do jumping jacks, and that thing is something big, like an appointment with a busy person or a flight, chances are your plans won’t work out.

This is bad enough that you’re probably going to be a bit skeptical of major time-management programs. But there’s another part of the equation that’s important to consider. Because yes, there are people who have variable time commitments. New parents, for example, can’t very well pick when their children cry and need to be fed and changed. Most of these people will agree that rigid schedules under such circumstances are for the birds. But some people, a small subset of seemingly superhuman go-getters, are able to make the Herculean sacrifices necessary to live according to a timetable despite such handicaps.

The missing piece here is variability in ability as well as in tasks. There are plenty of things about my medical issues that won’t directly threaten my life, but will make actual productivity difficult. So going back to the earlier hypothetical, let’s suppose that in addition to having to do jumping jacks on a roll of one, any roll of three or lower gives you a headache for fifteen minutes.

A three gives you an annoying, albeit mostly manageable headache: a four or five on the standard 1-10 pain scale. Working through such pain is possible with some added concentration, but you’re a little slower on the uptake, it takes you longer to do things, and you probably won’t have any million-dollar ideas. It’s definitely a handicap, but the sort of thing you can usually tough out quietly. If you roll a three while asleep, you won’t stir, but you may not feel as rested as normal.

Rolling a two is more serious: a five or even a six on the 1-10 pain scale. The kind of painkillers it takes to make the pain truly go away are the sort of meds that don’t let you operate machinery. Keeping your focus on anything for too long is difficult, and your ability to complete anything more cognitively taxing than an online personality quiz is badly impacted. You can slog through rote work, and with great effort you can keep working on something you’re writing, provided you’ve already started it and are just following up rather than trying to make new points. In either case, though, it’s not your best work, it will take you far longer than usual to finish, and if what you’re doing is remotely important, you’ll want to check it again when you’re feeling better to make sure it’s up to snuff.

This obviously isn’t a realistic scenario. Real-life medical issues don’t obey strict rules or even consistent probabilities. It’s difficult to explain that the reason I will never have control of my time is that I don’t have control of me; my needs, and my ability to meet those needs, change minute by minute. Well, it’s easy to explain, but difficult to appreciate.