College Tidbits

After returning from the wild woods of upstate, my house is currently caught in the scramble of preparing for college classes. On the whole, I think I am in decent shape. But since it has been the only thing on my mind, here are some assorted pieces of advice which new college students may find useful: tidbits I wish I had known, or, in cases where I did know them, wish I had been able to get through my thick skull earlier.

Get an umbrella
Sure, there are more important things to make sure you have before going to college. But most of those things are obvious: backpacks, laptops, writing instruments, and so on. No one talks about back-to-school umbrellas, though. Of the items I have added to my school bag, my collapsible umbrella is the most useful and least obvious. To explain its great use, I will appropriate a quote from one of my favorite pieces of literature:

Partly it has great practical value. You can open it up to scare off birds and small children; you can wield it like a nightstick in hand to hand combat; use it as a prop in a comedy sketch for an unannounced improv event on the quad; turn it inside out as an improvised parabolic dish to repair a satellite antenna; use it as an excuse to snuggle up next to a crush as you walk them through the rain to their next class; you can wave your umbrella in emergencies as a distress signal, and of course, keep yourself dry with it if it doesn’t seem too worn out.

More importantly, an umbrella has immense psychological value. For some reason, if a Prof discovers that a student has their umbrella with them, they will automatically assume that they are also in possession of a notebook, pencil, pen, tin of biscuits, water bottle, phone charger, map, ball of string, gnat spray, wet weather gear, homework assignment etc., etc. Furthermore, the Prof will then happily lend the student any of these or a dozen other items that the student might accidentally have “lost.” What the Prof will think is that any student who can walk the length and breadth of the campus, rough it, slum it, struggle against terrible odds, win through, and still know where their umbrella is, is clearly a force to be reckoned with.

Find out what programs your school uses, and get acquainted with them
The appropriate time to learn about the format your school requires for assignments is not the night before your essay is due. The time for that is now, before classes, or at least before you get bogged down in work. Figure out your school email account, and whether it comes with some kind of subscription to Microsoft or Google or whatever; if so, those are the programs you’ll be expected to use. Learn how to use them, in accordance with whatever style guide (probably MLA or APA) your school and departments prefer.

You can, of course, keep using a private email or service for non-school stuff. In fact, I recommend it, because sometimes school networks go down, and it can be difficult to figure out what’s happening if your only mode of communication is down. But don’t risk violating handbook or technology policies by using your personal accounts for what’s supposed to be school business. And if you’re in a group project, don’t be that one guy who insists on being contacted only through their personal favorite format despite everyone else using the official channels.

Try not to get swept up in future problems
Going into college, you are an adult now. You may still have the training wheels on, but the controls are in your hands. If you’re like me, this is exhilarating, but also immensely terrifying, because you’ve been under the impression this whole time that adults were supposed to know all the answers intuitively, and be put together, and you don’t feel like you meet those criteria. You’re suddenly in the driver’s seat, and you’re worried that you never got a license, or even learned how not to crash. If this is you, I want you to take a deep breath. Then another. Get a cup of tea, treat yourself to a nice cookie. You can do that, after all, being an adult. True, it might be nutritionally inadvisable to have, say, a dozen cookies, but if that’s what you need, go ahead. You need only your own permission. Take a moment.

Despite the ease of analogies, adulthood isn’t like driving, at least not how I think of driving. There aren’t traffic laws, or cops to pull you over and take away your license. I mean, there are both of those things in the world at large, but bear with me. Adulthood isn’t about being responsible to others, though that’s certainly a feature. Adulthood is about being responsible as a whole, first and foremost to yourself. In college, you will be responsible for many things, from the trivial to the life altering. Your actions will have consequences. But with a few exceptions, you get to decide how those consequences affect you.

My college, at least, tried to impress upon freshmen the, let’s say, extreme advisability of following its plans by emphasizing the consequences otherwise. But to me, this was the opposite of helpful, since hearing an outside voice tell me I need to be worried about something immediately plants the seeds of failure and doubt in my head. Instead, what helped me stay sane was realizing that I could walk away if I wanted. Sure, failing my classes would carry a price I would have to work out later. But it was my decision whether that price was worth it.

Talk to Your Professors
The other thing worth mentioning here is that you may find, once you prove your good faith and awesome potential, that many items you were led to believe were immutable pillars of the adult world… aren’t so immutable. Assignment requirements can be bent to accommodate a clever take. Grades on a test can be rounded up for a student that makes a good showing. Bureaucracy can, on occasion, be circumvented through a chat with the right person. Not always, but often enough that it’s worth making a good impression with staff and faculty. 

This is actually a good piece of life advice in general. I’ve heard from people who work that no one notices that you’re coming in late if you come in bearing donuts, and I have every reason to believe this is true. I’ve brought cookies in to all of my classes and professors before exams, and so far, I’ve done quite well on all of them. 

Looking Smart

How do you appear smart? I get this question, in some form or another, often enough. I try very hard not to brag about my abilities, for a variety of reasons, but most sources agree that I’m smarter than the average cyborg. Being the smart guy comes with quite a few perks, and people want to know what my secret is. Why do professors wait to call on me until after other people have struck out, and offer to give me prerequisite overrides to get into accelerated courses? What gives me the uncanny ability to pull up bits of trivia about anything? How can I just sit down and write a fully formed essay without any draft process?
Well, to be honest, I don’t know. I’ve tried to distill various answers over the years, but haven’t got anything that anyone can consciously put into action. Given the shifting nature of how we define intelligence, there may never be an answer. Shortest post ever, right? Except I don’t want to leave it at that. That’s a cop out. People want advice on how to improve themselves, to reach the same privilege that I’ve been granted by chance. The least I can do is delve into it a bit.
Sadly, I can’t tell you why I’m able to pull vocabulary and facts out of my brain. I’ve spent more than two decades with it, and it still mystifies me with how it will latch onto things like soldiers’ nicknames for WWI artillery pieces (Miss Minnie Waffer was a popular moniker given by American doughboys to German mortars, a corruption of the German term “Minenwerfer”, or mine-thrower), but drop names and faces into the void (My language professor, for instance, whom I’ve had for nearly a year, is still nameless unless I consult my syllabus). Why does it do this? I don’t know. Is it because I’m brain damaged? Yeah, actually, that would make a lot of sense.
The reason I’m good at writing, for instance, is that most of the time, the words just kind of… come together. In my brain, they have a certain feel to them, like a mental texture. They have a certain, I’m going to say, pull, in one or several directions, depending on context, connotations, mood, and so forth. A word can be heavier or lighter, brighter or darker, pulling the sentence in one direction or another, slowing the sequence of thoughts down or accelerating them. As I reach for them and feel them in my brain, they can bring up other words along with them, like pieces of candy stuck together coming out of a jar. This can continue for entire paragraphs of perfectly formed language, and oftentimes if I allow myself, I wind up writing something entirely different than I had intended when I first went looking. This is actually how most of my posts get written.
I used to think that everyone had this sense about language. I’ve met a few people who I am definitely sure have it. But I’ve also been told that this kind of thinking is actually limited to people with irregular brain patterns. So when people ask me how I write and speak so well, I have to answer that, honestly, I just do. I get an idea of what I want to express, or the impression I want to give, and I find the words that match that description, and see what friends they bring along with them. This ability to write full sentences competently, wedded to a smidgeon of poise and a dash of self confidence, is in my experience all that it takes to write essays, or for that matter, give speeches. 
If there’s a downside to this, it’s that by this point I’m totally dependent on this sense, which can desert me when I start to feel under the weather. This sense tends to be impacted before any other part of my health, and without it I can become quickly helpless, unable to string more than the most basic sentences together, and totally unable to apply any degree of intellectual effort to anything. In extreme cases, I will begin a sentence assured that the words will come to me, and halfway through begin to sputter and stare into space, as in my mind I try to reach for a word or concept that just isn’t quite there.
This sense works best for words, but it can work with ideas too. Ideas, especially things like historical facts, or principles of physics, have a similar shape and pull. Like an online encyclopedia riddled with hyperlinks on every page, one idea or fact connects to another, which connects to another, and so forth, making a giant web of things in my brain. I can learn new facts easily by connecting them to existing branches, and sometimes, I can fill in details based on the gaps. All brains do this, constantly. This is why you can “see” in parts of your vision where you’re not directly looking, such as the gap where your nose should be. Except I can feel my brain doing it with concepts, helping me learn things by building connections and filling in gaps, allowing me to absorb lessons, at least those that stick, much more easily.
But there’s more to it than that. Because plenty of people are good at building connections and learning things quickly. So what makes me good at using it? Is there a key difference in my approach that someone else might be able to replicate? 
Let’s ask the same question a different way. What’s the difference between someone who knows a lot of trivia, and someone who’s smart? Or possibly intelligent? There’s not a clear semantic line here, unless we want to try and stick to clinical measurements like IQ, which all come with their own baggage. The assumption here, which I hope you agree with, is that there’s something fundamentally different between having a great number of facts memorized, and being a smart person; the difference between, for instance, being able to solve a mathematical equation on a test, and being able to crunch numbers for a real world problem.
There’s a now famous part in Hitchhiker’s Guide to the Galaxy, wherein (spoilers follow) mice attempt to build a supercomputer to find the answer to life, the universe, and everything. The answer? 42. Yes, that’s definitely the answer. The problem is, it’s not useful or meaningful, because it’s for the wrong question. See, “life, the universe, and everything” isn’t really a good question itself. An answer can’t make sense without the right question, so the fact that 42 is the answer doesn’t help anyone. 
So, what is the difference between being knowledgeable and being smart? Asking good questions. Being knowledgeable merely requires being able to parrot back key points, but asking good questions requires understanding, insight, inquiry, and communication. It also shows other people that you are paying attention to them, care enough to ask them, and are interested in learning. And most of the time, if you start asking questions that are novel and on-point, people will just assume that you have a complete background in the area, making you seem like an expert.
Unlike natural talent, this is a skill that can be honed. Asking really good questions often relies on having some background information about the topic, but not as much as one might think. You don’t have to memorize a collection of trivia to seem intelligent, just demonstrate an ability to handle and respond to new information in an intelligent way. 

Change Your Passwords

In accordance with our privacy policy, and relevant international legislation, this website is obligated to disclose any security breach as soon as we can do so without worsening the problem. At present we have no reason to believe that any breach has occurred. But the fact that another account of mine was recently caught up in a security breach on an entirely different service raises the possibility, albeit remote, that a bad actor may have had the opportunity to access normally secure areas of this site. The probability of such an attack is, in my opinion, acceptably minute, and honestly I consider it much more likely that there’s something else going on that I haven’t noticed at all than that this particular vulnerability was exploited.

So you don’t need to worry about locking down your accounts, at least not from this site, at the moment. Except, maybe you should. Because the truth is, by the time you’re being notified about a data breach, it’s often too late. To be clear, if a security expert tells you to change your passwords because of something that happened, you should do that. But you should also be doing that anyways. You don’t buy a fire extinguisher after your house is on fire, so don’t wait to change your passwords until after a breach.

Also, use better passwords. Computer security and counter-surveillance experts advise using passphrases rather than ordinary words, or even short random strings. Something like Myfavoritebookis#1984byGeorgeOrwell or mypasswordis110%secureagainsthackers. These kinds of passphrases are often just as easy to memorize as a short random string, and stand up better to standard dictionary attacks by adding entropy through length. Sure, if a shady government comes after you, it won’t hold up against the resources they have. But then again, if it’s governments or big time professional hackers coming after you… well, nice knowing you.
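To put rough numbers on that entropy-through-length argument, here is a back-of-the-envelope sketch. The 95-character pool and the 7,776-word list are illustrative assumptions (95 is the count of printable ASCII characters; 7,776 is the size of a Diceware-style word list), not a security recommendation:

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Entropy in bits of `length` independent random choices from a pool,
    assuming the attacker knows the scheme (the conservative assumption)."""
    return length * math.log2(pool_size)

# An 8-character string drawn from ~95 printable ASCII characters.
short_random = entropy_bits(95, 8)

# A passphrase of 6 words drawn from a 7,776-word Diceware-style list.
passphrase = entropy_bits(7776, 6)

print(f"8-char random string: {short_random:.1f} bits")  # ~52.6 bits
print(f"6-word passphrase:    {passphrase:.1f} bits")    # ~77.5 bits
```

Each extra word multiplies the search space by thousands, which is why a phrase you can actually remember can still beat a short string you can’t.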

In the 21st century, protecting against digital crime is arguably more urgent than protecting against in-person crime. Sure, a mugger might steal your wallet, but a hacker could drain your bank account, max out your credit and ruin your credit score, expose your private information to anyone and everyone who might have an axe to grind, use your identity to commit crimes, and frame you for illegal or indecent acts, essentially instantaneously, perhaps even automatically, without ever setting foot in the same jurisdiction as you. As technology becomes more omnipresent and integrated into our lives, this threat will only grow worse.

Millennial Nostalgia

Evidently the major revelation of 2019 is that I am getting old. 

When they started having 1990s music as a distinct category, and “90s nostalgia” became an unironic trend in the same vein as people dressing up in the styles of the roaring 20s, or whatever the 50s were, I was able to brush it aside. After all, most of the 90s were safely before I was born. Besides, I told myself, there is a clear cultural and historical delineation between the 90s they hearken back to and my own era. After all, the 90s began with the Soviet Union still in existence, and much of their cadence was defined by the vacuum created immediately thereafter.

If this seems like an odd thing to latch onto, perhaps it’s worth spelling out that for me, growing up, the Soviet Union became a sort of benchmark for whether something is better considered news or history. The fall of the Soviet Union was the last thing mentioned on the last page of the first history textbooks I received, and so in my head, if something was older than that, it was history rather than just a thing that happened. 

Anyways, I reasoned, the 90s were history. The fact that I recognized most of the songs from my childhood I was able to safely reason away as a consequence of the trans-Pacific culture delay of living in Australia. Since time zones make live broadcasts from the US impractical, and VHS, CDs, and DVDs take time to ship across an ocean, Australia has always been at least a few months behind major cultural shifts. The internet age changed this, but not fundamentally, since media companies have a potential financial benefit if they are able to stagger release dates around the world to spread out hype and profits. So of course I would recognize some of the songs, even perhaps identify with some of them from childhood, that were listed as being from an age I considered mentally closer to antiquity than modernity. 

I can recall references to “2000s” culture as early as 2012, but most of these struck me as tongue-in-cheek: witty commentary on our culture’s tendency to group trends into decades, attribute an overriding zeitgeist upon which we can gaze through rose-tinted retrospect, and draw caricatural outfits for themed parties. I chuckled along and brushed aside the mild disconcertion. Those few that weren’t obviously tongue-in-cheek were purely for categorization; grouping songs by year of release, rather than attempting to bundle together the products of my childhood, put them on the shelf next to every other decade in history, and treat them with about the same regard.

A few stray references to “new millennium” or “millennial” culture I was able to dismiss, either on the grounds that they were relying on the labels provided by generational theory, or because they were referring not to the decade from 2000-2010, but to that peculiar moment right around January 1st, 2000, or Y2K if you prefer, between when the euphoria of the end of the Cold War made many proclaim that we had reached the end of history, and when the events of September 11th, 2001 made it painfully clear that, no, we hadn’t.

This didn’t bother me, even if the references and music increasingly struck home. It was just the cultural delay, I reasoned. The year 2000 was, in my mind, really just an epilogue to the 1990s, rather than a new chapter. Besides that, I couldn’t remember the year 2000. I mean, I’m sure things that I remember happened in that year, but there aren’t any memories tied to a particular date before 2001. 
Unfortunately for me and my pleasant self-delusions, we’ve reached a tipping point. Collections of “2000s songs” are now being manually pulled together by connoisseurs and dilettantes with the intent of capturing a historical moment now passed, without the slightest wink or trace of irony. There are suggestions of how to throw a millennial party in the same way as one might a 20s gala, without any distinction between the two.

Moreover, and most alarming to my pride, there are people reading, commenting, and sharing these playlists and articles saying they weren’t born yet to hear the music when it came out, but wish they had been.
While I’m a bit skeptical that the people leaving these comments are actually so young (I suspect they were already born, but just weren’t old enough to remember or be listening to music), it’s not impossible. For some of the songs, I remember watching the premiere of the music video with friends; a person born that year would now be old enough that in many states they could drive themselves to their 2000s themed party. In parts of Europe, they’d be old enough to drink at the party.
We’ve now reached a point where my entire life can no longer be said to have happened recently, in the same historical era. Much of the music and culture I recall being new, cutting edge, and relevant is not only no longer hip and happening, but has come out the other end, and is now vintage and historical. In a single sentence: I am no longer young, or at least not as young as I would like to think myself.

In a sense, I knew this was coming. But having it illustrated is still a gut punch. It’s not that being young and with-it is a part of my identity, and that this shift has shaken it. I know I’m not the live fast, die young party animal our culture likes to applaud and poke fun at. I never have been, and probably never will be. That ship hasn’t so much sailed as suffered failure on launch, with the champagne bottle at the ceremony causing a valve to come loose in the reactor room.

I might have held out hope that it could someday be salvaged; that a few years from now, when my life support technology is more autonomous, I would have the opportunity to go to parties and get blackout drunk without having to worry that, between medication side effects and the risk of life support shenanigans while blacked out, the affair would probably kill me. But if that turns out to be the tradeoff, if I never go to a real five alarm teen party but instead live to 100, I could grit my teeth and accept it.

What does bother me is the notion that I am getting properly old. To be more specific, the notion that I’ve stopped growing up and have started aging is alarming, because it suggests that I’ve hit my peak, at least physiologically. It suggests that things aren’t going to get any better than they are now, and are only going to get worse with time. 

This is a problem. My back and joints already ache enough on a good day to give me serious pause. My circulation is poor, my heart and lungs struggle to match supply and demand, and my nervous system has a rebellious streak that leads my hands to shake and my knees to buckle. My immune system puts me in the same category as a chemotherapy patient, let alone an elderly person. In short, I don’t have a lot to lose should my faculties start to decline. So long as I’m young, that’s not a problem. There remains the possibility that I might grow out of some of my issues. And if I don’t, there’s a good chance that medical technology will catch up to meet me and solve my problems.

But the medical advances on the table now promise only to halt further degradation. We have some ideas about how to prevent age-related tissue damage, but we still won’t be able to reverse harm that’s already been done. People who are still young when the technology is discovered might be able to live that way forever, but short of another unseen and unimagined breakthrough, those who are old enough to feel the effects of aging won’t be able to be young again, and might simply be out of luck.

A clever epistemologist might point out here that this problem isn’t actually unique. The speculative technology angle might add a new dimension to the consideration, but the central issue is not a novel dilemma. After all, this existentialist dread at one’s own aging and mortality is perhaps the oldest quandary of the human experience. I may perhaps feel it somewhat more acutely relative to where my chronological age would place me in modern society, but my complaints are still far from original.

Unsurprisingly, the knowledge that my problems are older than dirt, and have been faced by every sapient being, is not comforting. What solidarity I might feel with my predecessors is drastically outweighed by my knowledge that they were right to fear age, since it did get them in the end. 

This knowledge does contain one useful and actionable nugget of wisdom: namely, that if the best minds of the last twelve millennia have philosophized inconclusively for countless lifetimes, I am unlikely to reach a satisfactory answer on my own. Fighting against the tide of time, railing against 2000s nostalgia, is futile and worthless. Acting indignant and distressed about the whole affair, while apparently natural to every generation and perhaps unavoidable as a matter of psychology, is not a helpful attitude to cultivate. The only thing left, then, is to embrace it.

Why Vote?

Yes, this is a theme. Enough of my friends and acquaintances are on the fence on the issue of voting that I have been stirred into a patriotic fervor. Like Captain America, I have, despite my adversities, arisen to defend democracy in its hour of need. Or at least, I have decided to write about voting until my friends are motivated to get out and vote.


Why vote? In today’s America, why bother to go out and vote? Elections these days are won and lost not at the ballot box, but on maps and budget sheets, with faraway oligarchs drawing boundary lines that defy all logic to ensure their own job security, and shadowy mega corporations spending more on media campaigns designed to confuse and disorient you to their advantage than the GDP of several small nations. The mathematics of first past the post voting means that our elections are, and for the foreseeable future will remain, exercises in picking the lesser of two evils.

Statistically, you live not only in a state that is safely for one party or another, but an electoral district that has already been gerrymandered. Depending on where you live, there may be laws designed to target certain demographics, making it harder or easier for certain groups to get to the polls. The effort required to cast a ballot varies from place to place; it might be as easy as dropping by a polling place at your leisure, or it might involve waiting for hours in line, being harassed by officials and election monitors, all in order to fill out a piece of paper the effect of which is unlikely to make a major difference.

So why bother? Why not stay home and take some well deserved time off?

It’s an older poster, but it checks out

Obviously, this logic wouldn’t work if everyone applied it. But that’s not a compelling reason why you specifically ought to go to the effort of voting. Because it is an effort, and much as I might take it for granted that the effort of participating in and safeguarding the future of democracy is worthwhile, not everyone does.

Well, I’ll start by attacking the argument itself. Because yes, massive efforts have been made, and are being made, by those who have power and wish to keep it, and by those who seek power and are willing to gamble on it, to sway the odds in their favor. But consider for a moment these efforts. Would corporations, which are, if nothing else, ruthlessly efficient and stingy, spend such amounts if they really thought victory was assured? Would politicians expend so much effort and political capital campaigning, mudslinging, and yes, cheating through gerrymandering, registration deadlines, and ID laws, if they believed it wasn’t absolutely necessary?

The funny thing about voting trends is, the richer a person is, the more likely they are to vote. Surely, if elections were bought and paid for, the reverse would be true? Instead, the consistent trend is that those who allegedly need to vote the least do so the most.

The game may not be fair, or right, but it is not preordained. It may be biased, but it is not rigged. If it were rigged, the powers that be wouldn’t be making the effort. They are making an effort, on the assumption that apathy and antipathy can overcome your will to defend your right to vote. Like any right, your right to vote is only good when exercised.

Secret of Adulthood

For the entirety of my life until quite recently I was utterly convinced of the idea that all “grown-ups”, by nature of their grownupness, had the whole world figured out. It seemed to me essentially up until the week of my eighteenth birthday that there was some intrinsic difference between so-called “children” and these “adults”, where the latter categorically knew something the former didn’t and never the other way around.

I was never quite positive what it was that gave adults this intrinsic knowledge of how the world worked. I assumed it was something covered in a particular school class, or perhaps the secret was contained within those late night broadcasts only over-eighteens were permitted to watch. Whatever it was, I was confident that by the time I reached that mystical age of eighteen, I too would have the world figured out. After all, how else would I be able to consider myself qualified for such important duties as voting, paying taxes, and serving on a jury?

While I am still open to the possibility that one of these days I shall wake up in bed and find myself suddenly equipped with all the knowledge and skills necessary to make my way in the world, I am becoming increasingly convinced that this is, in fact, not how it works. Rather, the growing body of evidence points toward the conclusion that all of my intrinsic abilities, which in truth I do not feel have grown significantly since about age six, are the only toolset with which I will ever be equipped to deal with the world.

This quite terrifying conclusion has been cemented by the relative ease (compared with what I might have imagined) of registering to vote. There was no IQ test, barely any cross-examination of my identity papers, and most shockingly, no SWAT team descending from the heavens to inform me that, no, sorry, there must’ve been some mistake because I can’t possibly be qualified to genuinely help decide the future of our country and the world at large.

Despite being nominally an adult, I still have this habit of basically assuming that every other adult still knows something I don’t. So when, for example, my brother gets into the car to drive to Florida without sunglasses and wearing wool clothing, I broadly assume that he is aware of all these issues, and has some plan to combat them. It is then frustrating when he realizes later that he left his sunglasses on the counter and asks to borrow my backup pair.

This habit also makes it annoyingly easy to believe that anyone acting with confidence must have some grounds for acting so. While I have come to accept that I am merely faking this whole adulthood thing, it is a whole other matter entirely to convince myself that not only am I flying by the seat of my pants, but so is everyone else.

There has been one minor silver lining in this otherwise terrifying revelation. Namely, it is the realization that, with no intrinsic quality to distinguish those who genuinely know what they’re doing from those who haven’t the foggiest clue, nine times out of ten one can get away with whatever one desires provided one can act sufficiently confident while doing so. That is to say that with few exceptions, it is fairly easy to convince others that you know better than they do what needs to be done.

Overall, while I wish it were true that adulthood brought with it some intrinsic wisdom of how to make it in the world, I also recognize that, this not being the case, I should at least try to work on my ability to look like I know what I’m doing. Because that is the secret of adulthood. It is all an act, and how you act determines whether people treat you as a know-nothing little boy or a wise young man.

The Moral Hazard of Hope


This post is part of the series: The Debriefing. Click to read all posts in this series.


Suppose that five years from today, you will receive an extremely large windfall. The exact number isn’t important, but let’s just say it’s large enough that you’ll never have to budget again. Not technically infinite, because that would break everything, but for the purposes of one person, basically undepletable. Let’s also assume that this money becomes yours in such a way that it can’t be taxed away or swindled out of your hands. This is also an alternate universe where inheritance and estates don’t exist, so there’s no scheming among family, and no point in considering them in your plans. Just roll with it.

No one else knows about it, so you can’t borrow against it, nor is anyone going to treat you differently until you have the money. You still have to be alive in five years to collect and enjoy your fortune. Freak accidents can still happen, and you can still go bankrupt in the interim, or get thrown in prison, or whatever, but as long as you’re around to cash the check five years from today, you’re in the money.

How would this change your behavior in the interim? How would your priorities change from what they are?

Well, first of all, you’re probably not going to invest in retirement, or long term savings in general. After all, you won’t need to. In fact, further saving would be foolish. You’re not going to need that extra drop in the bucket, which means saving it would be wasting it. You’re legitimately economically better off living the high life and enjoying yourself as much as possible without putting yourself in such severe financial jeopardy that you would be increasing your chances of being unable to collect your money.

If this seems insane, it’s important to remember that your lifestyle and enjoyment are quantifiable economic factors (the keyword is “utility”) that weigh against the (relative and ultimately arbitrary) value of your money. This is the whole reason why people buy stuff they don’t strictly need to survive, and why rich people spend more money than poor people, despite not being physiologically different. Because any money you save is basically worthless, and your happiness still has value, buying happiness, expensive and temporary though it may be, is always the economically rational choice.

This is tied to an important economic concept known as Moral Hazard, a condition where the normal risks and costs involved in a decision fail to apply, encouraging riskier behavior. I’m stretching the idea a little bit here, since it usually refers to more direct situations. For example, if I have a credit card that my parents pay for to use “for emergencies”, and I know I’m never going to see the bill, because my parents care more about our family’s credit score than most anything I would think to buy, then that’s a moral hazard. I have very little incentive to do the “right” thing, and a lot of incentive to do whatever I please.

There are examples in macroeconomics as well. For example, many say that large corporations in the United States are caught in a moral hazard problem, because they know that they are “too big to fail”, and will be bailed out by the government if they get into serious trouble. As a result, these companies may be encouraged to make riskier decisions, knowing that any profits will be massive, and any losses will be passed along.

In any case, the idea is there. When the consequences of a risky decision become uncoupled from the reward, it can be no surprise when rational actors make riskier decisions. If you know that in five years you’re going to be basically immune to any hardship, you’re probably not going to prepare for the long term.

Now let’s take a different example. Suppose you’re rushed to the hospital after a heart attack, and diagnosed with a heart condition. The condition is minor for now, but could get worse without treatment, and will get worse as you age regardless.

The bad news is, in order to avoid having more heart attacks, and possible secondary circulatory and organ problems, you’re going to need to follow a very strict regimen, including a draconian diet, a daily exercise routine, and a series of regular injections and blood tests.

The good news, your doctor informs you, is that the scientists, who have been tucked away in their labs and getting millions in yearly funding, are closing in on a cure. In fact, there’s already a new drug that’s worked really well in mice. A researcher giving a talk at a major conference recently showed a slide of a timeline that estimated FDA approval in no more than five years. Once you’re cured, assuming everything works as advertised, you won’t have to go through the laborious process of treatment.

The cure drug won’t help if you die of a heart attack before then, and it won’t fix any problems with your other organs if your heart gets bad enough that it can’t supply them with blood, but otherwise it will be a complete cure, as though you were never diagnosed in the first place. The nurse discharging you tells you that most organ failure doesn’t appear until patients have had the disease for at least a decade, so as long as you can avoid dying for half that long, you’ll be fine.

So, how are you going to treat this new chronic and life threatening disease? Maybe you will be the diligent, model patient, always deferring to the most conservative and risk-averse advice in the medical literature, certainly hopeful for a cure, but not willing to bet your life on a grad student’s hypothesis. Or maybe, knowing nothing else on the subject, you will trust what your doctor told you, and your first impression of the disease, getting by with only as much invasive treatment as you can get away with to avoid dying and being called out by your medical team for being “noncompliant” (referred to in chronic illness circles in hushed tones as “the n-word”).

If the cure does come in five years, as happens only in stories and fantasies, then either way, you’ll be set. The second version of you might be a bit happier from having more fully sucked the marrow out of life. It’s also possible that the second version would have also had to endure another (probably non-fatal) heart attack or two, and dealt with more day to day symptoms like fatigue, pains, and poor circulation. But you never would have really lost anything for being the n-word.

On the other hand, if by the time five years have elapsed, the drug hasn’t gotten approval, or quite possibly, hasn’t gotten close after the researchers discovered that curing a disease in mice didn’t also solve it in humans, then the difference between the two versions of you is going to start to compound. It may not even be noticeable after five years. But after ten, twenty, thirty years, the second version of you is going to be worse for wear. You might not be dead. But there’s a much higher chance you’re going to have had several more heart attacks, and possibly other problems as well.

This is a case of moral hazard, plain and simple, and it does appear in the attitudes of patients with chronic conditions that require constant treatment. The fact that, in this case, the perception of a lack of risk and consequences is a complete fantasy is not relevant. All risk analyses depend on the information that is given and available, not on whatever the actual facts may be. We know that the patient’s decision is ultimately misguided because we know the information they are being given is false, or at least, misleading, and because our detached perspective allows us to take a dispassionate view of the situation.

The patient does not have this information or perspective. In all probability, they are starting out scared and confused, and want nothing more than to return to their previous normal life with as few interruptions as possible. The information and advice they were given, from a medical team that they trust, and possibly have no practical way of fact checking, has led them to believe that they do not particularly need to be strict about their new regimen, because there will not be time for long term consequences to catch up.

The medical team may earnestly believe this. It is the same problem one level up; the only difference is, their information comes from pharmaceutical manufacturers, who have a marketing interest in keeping patients and doctors optimistic about upcoming products, and researchers, who may be unfamiliar with the hurdles in getting a breakthrough from the early lab discoveries to a consumer-available product, and whose funding is dependent on drumming up public support through hype.

The patient is also complicit in this system that lies to them. Nobody wants to be told that their condition is incurable, and that they will be chronically sick until they die. No one wants to hear that their new diagnosis will either cause them to die early, or live long enough for their organs to fail, because even by adhering to the most rigid medical plan, the tools available simply cannot completely mimic the human body’s natural functions. Indeed, it can be argued that telling a patient they will still suffer long term complications, whether in ten, twenty, or thirty years, almost regardless of their actions today, will have much the same effect as telling them that they will be healthy regardless.

Given the choice between two extremes, optimism is obviously the better policy. But this policy does have a tradeoff. It creates a moral hazard of hope. Ideally, we would be able to convey an optimistic perspective that also maintains an accurate view of the medical prognosis, and balances the need for bedside manner with incentivizing patients to take the best possible care of themselves. Obviously this is not an easy balance to strike, and the balance will vary from patient to patient. The happy-go-lucky might need to be brought down a peg or two with a reality check, while the nihilistic might need a spoonful of sugar to help the medicine go down. Finding this middle ground is not a task to be accomplished by a practitioner at a single visit, but a process to be achieved over the entire course of treatment, ideally with a diverse and well experienced team including mental health specialists.

In an effort to finish on a positive note, I will point out that this is already happening, or at least, is already starting to happen. As interdisciplinary medicine gains traction, patient mental health becomes more of a focus, and as patients with chronic conditions begin to live longer, more hospitals and practices are working harder to ensure that a positive and constructive mindset for self care is a priority, alongside educating patients on the actual logistics of self-care. Support is easier to find than ever, especially with organized patient conferences and events. This problem, much like the conditions that cause it, is chronic, but manageable with effort.

 

Too Many Tabs Open

It occurs to me that I don’t really have a quantitative, non-subjective metric for how much stress I’m under these days. I recognize that short of filling out a daily questionnaire, I’m not going to have a truly objective assessment of how I’m doing. Even the most detailed questionnaire is limited. Even so, it would be nice to have a yardstick, so to speak, to judge against.

For most of the people I know who have such a yardstick, it tends to be some kind of addiction or vice which they fall back on in difficult times. With the possible exception of chocolate, which I do occasionally use as a pick me up, but also indulge in semi-regularly because I see no point in denying myself enjoyment in moderation, I don’t believe that I have any such addictions. Nor are any of my vices, or at least the ones that I am consciously aware of, particularly correlated with my mood. I am just as likely to buy an expensive Lego set, over-salt my food, snap at people, and become distracted by side ventures when I am happy as when I am sad.

Previously, my yardstick was how many assignments I was working on. It was never a perfect correlation, as obviously even before I graduated there were ample things outside of school which also brought me stress, but it was easy enough to track for a ballpark view. The correlation looked something like this:

Amount of Stress versus Number of assignments.

Now, however, I have no assignments, and hence, no yardstick. This might not be a problem, except that, in my goal of continual self-improvement, it is necessary to have, if not an accurate, then at least a consistent, assessment of how I am doing relative to how I have done in the past. Thus, I have cobbled together an ad-hoc assessment which I am hoping will give me something a little more concrete to work with than my own heuristic guesses. Here’s my formula.

Add 5 points for each app running in the background
Add 5 points for each tab open in Safari
Add 1 point for each note that was edited in the last week
Add 3 additional points if any of those were edited between 1:00am and 11:00am
Add 1 point for each note that’s a blog post, project, checklist, or draft communique
Add 5 additional points for any of those that really should have been done a week ago
Subtract 3 points for each completed checklist
Subtract 3 points for each post that’s in the blog queue
Add 3 points for every post that you’ve promised to write but haven’t gotten around to
Add 1 point for every war song, video game or movie soundtrack you’ve listened to in the last 24 hours. 
Add 10 points if there’s something amiss with the medical devices
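For the spreadsheet-inclined, the tally above can be sketched as a short Python function. The function name and parameter names are my own framing, and the inputs have to be counted by hand; the weights are just the ones listed above:

```python
def stress_score(
    background_apps: int,
    safari_tabs: int,
    notes_edited_this_week: int,
    any_late_night_edits: bool,    # edited between 1:00am and 11:00am
    project_notes: int,            # blog posts, projects, checklists, drafts
    overdue_project_notes: int,    # should have been done a week ago
    completed_checklists: int,
    queued_posts: int,
    promised_posts: int,           # promised but not yet written
    martial_soundtracks_24h: int,  # war songs, game/movie soundtracks
    medical_devices_amiss: bool,
) -> int:
    score = 0
    score += 5 * background_apps
    score += 5 * safari_tabs
    score += 1 * notes_edited_this_week
    if any_late_night_edits:
        score += 3
    score += 1 * project_notes
    score += 5 * overdue_project_notes
    score -= 3 * completed_checklists
    score -= 3 * queued_posts
    score += 3 * promised_posts
    score += 1 * martial_soundtracks_24h
    if medical_devices_amiss:
        score += 10
    return score

# A hypothetical bad day:
print(stress_score(10, 20, 30, True, 15, 5, 2, 3, 4, 6, True))  # → 236
```

The example numbers are invented; plug in your own counts and compare against your own baseline.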

Doing this right now nets me around 240 points. I would ballpark an average day at around 120-170 points. Admittedly this isn’t exactly statistically rigorous, but it does give me a vaguely scientific way to measure a vague feeling that has been building, and which I feel has come to a head in the last few days. Not a sense of being overwhelmed per se, but rather a feeling that precedes that. A feeling that I have too many things running in tandem, all occupying mental space and resources. A feeling of having too many unfinished notes, too many works-in-progress, and too many tabs open concurrently.

You see, despite my aversion to busywork, I also enjoy the state of being busy. Having an interesting and engaging project or three to work on gives me a sense of direction and purpose (or more cynically, distracts me from all the parts of my existence that bring me misery). Having things to do, places to go, and people to see is a way for me to feel that I am contributing, that I am playing a role, and that I am important. The fact that I need to rush between places, while physiologically tiring and logistically annoying, is also an indication that my time is sufficiently valued that I need not waste it. This is a feeling that I thrive on, and have since childhood.

I am told, based on my high school economics class, that this kind of mindset and behavior often appears in entrepreneurial figures, which I suppose is a good thing, though if true, it also probably increases my risk of bankruptcy and the other hazards of entrepreneurship. Nevertheless, my tendency towards always trying to be doing something both productive and interesting does seem to be at least moderately effective at spawning novel ideas, and pushing me to try them at least far enough to see whether they are workable.

It has also gotten me to a point where I have far too many topics occupying space in my mind to properly focus on any of them. Rather than wait until I am well and truly spread too thin, I have decided to try and nip this problem in the bud.

So here’s the plan:

First, I’m going to put a number of projects in stasis. This isn’t “putting them on the back burner,” which in my book usually means keeping all the files active, so that I still see them and think about them; the whole point of this is to make it easier to focus on the projects that I actually want to complete soon. I mean I am going to consign those plans to the archives, indefinitely, with no concrete plan to bring them back out. If they become relevant again, then I might bring them back, or start over from scratch.

Second, I’m going to push in the next few days to knock a bunch of low hanging fruit off my list. These are little things like wrapping up blog posts, finalizing my Halloween costume, and a couple other miscellaneous items. This means that there will be a flurry of posts over the next few days. Possibly even a marathon!

All of this will hopefully serve to get, or rather, to keep, things on track. October is coming to a close, and November, historically a busy month, promises to be even more exciting.

I will add one final, positive note on this subject. While I may feel somewhat overwhelmed by all of the choices I have found in my new life free of school, I am without a doubt happier, certainly than I was over the last two years, and quite possibly over the last decade. Not everything is sunshine and lollipops, obviously, and my health will fairly well make sure it never is. But I can live with that. I can live with being slightly overwhelmed, so long as the things I’m being overwhelmed with are also making me happy.

What Comes Next?

So, as you may remember from a few days ago, I am now officially-unofficially done with classes. This is obviously a relief. Yet it is also dizzyingly anticlimactic. For so long I was solely focused on getting the schoolwork in front of me done that I never once dared to imagine what the world would look like when I was finished. Now I am, and the answer is, to summarize: more or less the same as it looked when I was still working.

There is now an interesting paradox with my schedule. The list of things that I have to do each day is now incredibly short, and comprises mostly those items which are necessary to my day to day survival; I have to make sure I eat, and shower, and get to the doctors’ offices on time. Beyond that I have almost no commitments. I have no local friends with whom I might have plans, nor any career that requires certain hours of me, nor even any concrete future path for my further education (I was, and still am, prevented from making such plans because my school still cannot provide an up to date and accurate transcript, which is a prerequisite to applying).

At the same time, now that I have some semblance of peace in my life, for the first time in memory, there are plenty of things which I could do. I could go for a pleasant walk in the park. I could take to the streets and protest something. I could fritter away countless hours on some video game, or some television series. I could write a blog post, or even several. My options are as boundless as my newfound time. Yet for as many things as I could do, there are few things that I need to do.

Moreover, almost all of these things that I could do require some degree of proactive effort on my part. In order to sink time into a video game, for example, I would first have to find and purchase a game that interests me, which would first require that I find a means to acquire and run said game on my hardware (the bottleneck isn’t actually hardware on my end, but internet speed, which in my household is so criminally slow that it does not meet the bare minimum technical specifications for most online distribution platforms).

As problems go… this isn’t particularly problematic. On the contrary, I find it exhilarating, if also new and utterly terrifying, to think that I now command my own time; indeed, that I have time to command. In the past, the question of time management was decidedly hollow, given that I generally had none. My problem, as I insisted to an unsympathetic study skills teacher, was not that I categorically made poor use of time, but that I only possessed about three productive hours in a day in which to complete twelve hours of schoolwork. The only question involved was which schoolwork I focused on first, which was never truly solved, as each teacher would generally insist that their subject ought to be my highest priority, and that all of their class work was absolutely essential and none could be pared down in accordance with my accommodations.

Nevertheless, while my new state of affairs isn’t necessarily problematic, it certainly has the potential to become so if I allow myself to become entranced by the siren song of complacency and cheap hedonism. I am aware that many people, especially people in my demographic, fall prey to various habits and abuses when lacking clear direction in life. I therefore have two primary aims for the time that it will take the school to produce the necessary paperwork for me to move on to higher education.

First, I need to keep busy, at least to an extent that will prevent me from wallowing; for wallowing is not only unproductive, but generally counterproductive, as it increases feelings of depression and helplessness, and is associated with all manner of negative medical outcomes.

Second, I need to keep moving forward. I am well aware that I often feel most hopeless when I cannot see any signs of progress, hence why much of the past five years has been so soul-crushing. In theory, it would be quite easy to occupy my time by playing video games and watching television; by building great structures of Lego and then deconstructing them; or even by writing long tracts, and then destroying them. But this would provide only a physical, and not a mental defense against wallowing. What I require is not merely for my time to be occupied, but an occupation in my time.

I am therefore setting for myself a number of goals. All of these goals are relatively small scale, as I have found that when setting my own goals as opposed to working under the direction of others, I tend to work better with small, tactically minded checklist-style agendas than vague, grand strategies. Most of these goals are relative mundanities, such as shifting around money among accounts, or installing proper antivirus software on a new laptop. All of these goals are intended to keep me busy and nominally productive. A few of them have to do with my writing here.

I generally detest people who post too much of their day to day personal affairs online, particularly those who publish meticulous details of their daily efforts to meet one target or another. However, having my goals publicly known has in past attempts seemed to be a decent motivator of sorts. It forces me to address them in one way or another down the line, even if all I do by addressing them is explain why they haven’t happened yet. If there is a reasonable explanation, I do not feel pressure; if there is not, I feel some compulsion to keep my word to myself and others. So, here are a few of my goals as they regard this blog:

1) I am looking at getting a gallery page set up which will allow me to display the photos that I have taken personally in one place, as well as showing off some of my sketches, as people say I ought to. Aside from being nice for people who like to look at pictures, having a gallery, or a portfolio if you will, has been a thing that I have wanted to have in my life since my first high school art class, as part of my quest to be a pretentious, beret-wearing, capital-a Artist, and people have been clamoring to see more of my pictures and sketches of late. My aim is to have this page in working order before Thanksgiving.

2) I am also working on getting that fictional story I keep mentioning polished up for launch. The reason it hasn’t gone up yet is no longer that I haven’t written the necessary materials, but that I am still working on getting the backend set up so that it displays nicely and consistently. I’m also still writing it, but I’m far enough along that I can probably start posting as soon as I get the technical hijinks worked out.

This story was scheduled to start some time at the beginning of last month. However, a major glitch in the plugin I was aiming to use to assist its rollout caused a sitewide crash (you may remember that part), and subsequently I had to go back to the drawing board. Because I am, quite simply, not a computer coding person, the solution here is not going to be technically elegant. What’s probably going to happen is that the story is going to be posted under a sub-domain with a separate install of WordPress, in order to keep fiction and nonfiction posts from becoming mixed up. I’m working on trying to make navigating between the two as painless as possible. The timeline on this one should be before Thanksgiving.

3) I aim to travel more. This isn’t as strictly blog related, but it is something I’m likely to post about. Specifically, I aim to find a method by which I can safely and comfortably travel, with some degree of independence, despite my disability. My goal is to undertake a proof of concept trip before May of next year.

4) I want to write and create more. No surprises there.

This is likely to be the last post of the daily post marathon. That is, unless something strikes my fancy between now and tomorrow. I reckon that this marathon has served its intended purpose of bringing me up to date on my writings quite nicely. I have actually enjoyed getting to write something every day, even if I know that writing, editing, and posting two thousand words a day is not sustainable for me, and I may yet decide to change up my posting routine some more in the future.

PSA: Don’t Press My Buttons

Because this message seems to have been forgotten recently, here is a quick public service announcement to reiterate what should be readily apparent.

Messing with someone’s life support is bad. Don’t do that.

I’m aware that there is a certain compulsion to press buttons, especially buttons that one isn’t supposed to press, or isn’t sure what they do. Resist the temptation. The consequences otherwise could be deadly. Yes, I mean that entirely literally. It’s called life support for a reason, after all. Going up to someone and starting to press random buttons on medical devices is often equivalent to wrapping a tight arm around someone’s neck. You probably (hopefully) wouldn’t greet a stranger with a stranglehold. So don’t start fiddling with sensitive medical equipment.

Additionally, if you ignore this advice, you should not be surprised when the person whose life you are endangering reacts in self defense. You are, after all, putting their life at risk, the same as if you put them in a stranglehold. There is a very good chance that they will react from instinct, and you will get hurt. You wouldn’t be the first person I’ve heard of to wind up with a bloody nose, a few broken ribs, a fractured skull, maybe a punctured lung… you get the idea.

Don’t assume that because something doesn’t look like a medical device, it’s fair game to mess with, either. A lot of newer medical devices aimed at patients who want to try and avoid sticking out are designed to look like ordinary electronic devices. Many newer models have touch screens and sleek modern interfaces. What’s more, a lot of life support setups now include smartphones as a receiver and CPU for more complicated functions, making these smartphones medical devices in practice.

Moreover, even where there is no direct life support function, many times phones are used as an integral part of one’s life support routine. For example, a patient may use their phone to convey medical data to their doctors for making adjustments. Or, a patient may rely on their phone as a means for emergency communication. While these applications do not have the same direct impact on physical safety, they nevertheless are critical to a person’s continued daily function, and an attack on such devices will present a disproportionate danger, and cause corresponding psychological distress. Even relatively harmless phone pranks, which may not even impede the ordinary functioning of medical-related operations, are liable to cause such distress.

What is at issue here is not so much the actual impediment, but the threat of impediment when it suddenly matters. For my own part, even complete destruction of my smartphone is not likely to put me in immediate physiological danger. It may, however, prove fatal if it prevents me from summoning assistance when, some time from now, something goes awry. Thus, what could have been a relatively uneventful and easily handled situation with my full resources could become life threatening. As a result, any tampering with my phone, regardless of actual effect, causes serious anxiety for my future wellbeing.

It is more difficult in such situations to establish the kind of causal chain of events which could morally and legally implicate the offender in the end result. For that matter, it is difficult for the would-be prankster to foresee the disproportionate impact of their simple actions. Indeed, common pranks with electronic devices, such as switching contact information, reorganizing apps, and changing background photos, are so broadly considered normal and benign that it is hard to conceive that they could even be interpreted as a serious threat, let alone result in medical harm. Hence my writing this here.

So, if you have any doubt whatsoever about messing with someone else’s devices, even if they may not look like medical devices, resist the temptation.