Early to Rise

I am not a morning person. This has been the case for as long as I’ve been old enough to have sleeping patterns to speak of, and unless my metabolism does a total reversal with age, I don’t foresee this changing. I am a person who wakes up late and goes to bed accordingly. 

This isn’t because I hate sunrises or morning talk shows; on the contrary, I enjoy both. My problem is that trying to drag myself out of bed in the morning is immensely painful. It often feels like someone is using a metal claw to unceremoniously yank my spinal cord out through a hole in my back, dragging the rest of my body with it by the nerves and sinews. I won’t say it’s the worst pain I’ve ever experienced, but it’s up there. I don’t wake up quickly, either. My brain takes time to boot up in the morning, and during this time I am unable to so much as walk a straight line. The earlier I am woken up, the longer this process takes- if I am dragged out of bed early, it can take an hour before I’m conscious enough to make decisions, and it leaves me for the rest of the day with an overwhelming exhaustion that borders on clinical narcolepsy.

I am aware that this goes somewhat beyond the normal scope. It’s almost certainly an underlying neurological problem- one of several. Since my brain already has some issues switching gears, it stands to reason that we’re looking at a different symptom of the same cause. But since meds only seem to blunt the symptoms and draw them out over a longer period, I am stuck with it. I try to avoid mornings wherever humanly possible, and suck it up when I can’t.

Of course, the problem, as one may suspect, isn’t actually with mornings. The problem is with my brain making the switch from being asleep to fully awake. In particular, I have more trouble than most waking up when my brain is at an inopportune point in the sleep cycle.

Theoretically, this could be addressed on the other end- getting to bed earlier in order to make sure I get the right number of hours of sleep to wake up naturally at, say, 8:30 (which I know isn’t early by most definitions, but compared to my current routine, may as well be pre-dawn). Here we run headfirst into my other problem: severe and chronic insomnia, exacerbated by metabolic disorders that make it not only difficult, but actually dangerous to fall asleep at a reasonable hour most nights.

The situation of being a college student doesn’t help. In many ways the stereotype that college students are bad at time management is self-reinforcing. Campus events start and run late, and emails containing essential information and even assignments are sent out hours before midnight. Facilities are open from 10am to 1am. The scheduling of exams and final projects mere days after the material is covered makes long-term planning impossible, and reinforces crunch time and cramming- even more so since it all falls during the same few weeks. Last-minute scrambling is not merely routine, it is impossible to avoid.

For as often as Americans ridicule the collectivist workaholism of Japan, China, and Germany, we suffer from the same kind of cultural fetish, or at least our young people do. Hauling oneself up by one’s bootstraps is invoked to encourage behaviors that are actively counterproductive: destroying sleep schedules and health in order to make deadlines, only to repeat the same cycle the next year. I could, and probably will eventually, write a whole post on these attitudes and their fallout, but for the time being, suffice it to say that being a college student makes already difficult problems much harder.

But I digress. The point is, my sleep schedule has become unsustainable, and I need to make some changes. Getting to bed earlier, though a good idea, will not work on its own, since every time I have tried this I have wound up lying in bed awake for hours, making me feel less rested in the morning. What I need to do, and what I’ve dreaded doing, is force myself to get up earlier and get going, so that I will be tired enough to actually fall asleep at a (more) reasonable hour. In essence, I am performing a hard reset on my sleep schedule.

As schemes go, this one is fairly straightforward, but that doesn’t make it any easier. The fact that it is necessary does not make it easier either. But it is necessary. Not only do future plans depend on it, but being able to recognize, plan, and execute these smaller points of self-improvement is critical to any future I hope to have. I am rising early to greet the dawn not only in a literal sense, but in a metaphorical sense as well. 
At least, that is what I shall be telling myself while dragging my sorry behind out of bed.

Re: John Oliver

So I had a bunch of things to say this week. I was actually planning a gag where I was going to shut down part of the site for “Internet Maintenance Day”. Then stuff happened that I felt I wanted to talk about more urgently. Then more stuff happened, and I had to bump back the queue again. Specifically, with regards to that last one, John Oliver released a new episode that I have to talk about.

If you don’t care to watch, the central thesis of the episode is, in a nutshell, that our medical device regulation system sucks and needs to be more robust. And he’s not wrong. The FDA is overstretched, underfunded, strung up by political diktats written by lobbyists, and above all, beset by brain drain caused by decades of bad faith and political badmouthing. The pharmaceutical and biotech lobby has an outsized influence on the legislation (as well as the executive orders and departmental regulations) that is supposed to govern the industry.

But, and I’m going to repeat this point, the system isn’t broken. Don’t get me wrong, it’s hardly functional either, but these problems are far more often ones of execution than of structure. 

Let’s take the 510(k) exemption that is so maligned in the episode. The way it’s presented makes it seem like such a bad idea that surely this loophole must be closed. And I’ll agree that the way it’s being exploited is patently unsafe, and needs to be stemmed. But the measure makes sense under far narrower circumstances. To use an example from real life, take insulin pumps. Suppose a pump manufacturer realizes that it’s replacing a high number of devices because of cracked screens and cases occurring in everyday use. It takes the issue to its engineers, who spend a few days in AutoCAD making a new chassis with reinforced corners and a better screen that’s harder to crack. The guts of the pump, the parts that deliver insulin and treat patients, are unchanged. From a technical perspective, this is the equivalent of switching phone cases.

Now, what kind of vetting process should this device, which is functionally identical to the previous iteration aside from an improved casing, have to go through before the improved model can be shipped out to replace the current flawed devices? Surely it would be enough to show that the improvements are purely cosmetic, perhaps with some documentation about the new case and its materials. This is the kind of scenario where a 510(k)-style fast track would be good for everyone. It saves time and taxpayer money for regulators, it gets the company’s product out sooner, and consumers get a sturdier, better device faster. This is why having that path is a good idea.

Not that the FDA is likely to apply section 510(k) in this scenario. Insulin pumps tick all the boxes to make them some of the most regulated devices in existence, even more so than most surgical implants. Any upgrade to insulin pumps, no matter how inconsequential, or how urgently needed by patients, is subject to months of testing, clinical trials, reviews, and paperwork. The FDA can, and regularly does, send applications back for further testing, because they haven’t proven beyond a shadow of a doubt that there is no risk. As a result, improvements to crucial life support devices are artificially slowed by regulations and the market’s reaction to regulations. 

Here’s the other thing to remember about medical devices: for as much as we hear about the costs of prematurely releasing devices, there is also a cost to delaying them. And frustratingly, the ones for which delay carries the greatest cost- devices like insulin pumps, monitors, and other life support equipment -tend to be subject to the greatest scrutiny, and hence the longest delays. For while the FDA examines numbers and research data, real patients continue to suffer and die for want of better care. We might prevent harm by slowing down the rollout of new technologies, but we must acknowledge that we are also consigning people to preventable harm by denying them newer devices. Some argue that this is morally preferable. I staunchly disagree. More than just trying to protect people from themselves, we are denying desperate people the hope of a better life. We are stifling innovation and autonomy for the illusion of security. This isn’t just unhelpful and counterproductive; I would argue it’s downright un-American. 

Rest assured I’m not about to go and join the ranks of the anarchists in calling for the abolition of regulatory agencies. The FDA is slow, inefficient, and in places corrupt, but this is as much as anything due to cuts in funding, usually made by those who seek to streamline innovation, which have limited its ability to fulfill its mandate and, ironically, made processing applications slower. A lack of respect for the agency, its job, and the rules it follows has inspired unscrupulous companies to bend rules to their breaking point, and commit gross violations of scientific and ethical standards in pursuit of profit. Because of the aforementioned lack of resources, and a political climate actively hostile to regulatory action, the FDA and the agencies responsible for enforcement have been left largely unable to follow their own rules. 

Cutting regulations is not the answer. Improving and reforming the FDA is not a bad idea, but the measures supported by (and implied to be supported by) John Oliver are more likely to delay progress for those who need it than to solve the issues at hand. A half-informed, politically driven moral panic will only lead to bad regulations, which, aside from doing collateral damage, are likely to be gutted at the next changing of the guard, putting us back in the same place. I like to use the phrase “attacking a fly with a sledgehammer”, but I think this is more a case of “attacking a fly with a rapier”, in that it will cause massive collateral damage and probably still miss the fly in the end.

So, how do we do it right? Well, first of all, better funding for the FDA, with an eye towards attracting more, better candidates to work as regulators. If done right, this will make the review process not only more robust, but more efficient, with shorter turnaround times for devices. It might also be a good idea to look into reducing or even abolishing some application fees, especially for those applications which follow high standards for clinical trials, and have the paper trail to prove ethical standards. At present, application fees are kept high as a means to bring in revenue and make up for budget cuts to the agency. Although this arguably does good by putting the cost of regulating on the industry, and hopefully incentivizing quality applications, it constrains the resources available for investigating applications, and gives applying companies undue influence over application studies.

Second, we need to discard this silly notion of a regulatory freeze. Regardless of how one feels about regulations, I would hope that we all agree that they should at least be clear and up to date in order to deal with modern realities. And this means more regulations amending and clarifying old ones, and dealing with new realities as they crop up. There should also be greater emphasis on enforcement, particularly during the early application process. The penalties for submissions intentionally misclassifying devices need to be high enough to act as a deterrent. Exceptions like section 510(k) need to be kept as exceptions, for special extenuating circumstances, rather than remaining open loopholes. And violating research standards to produce intentionally misleading data needs to be treated far more seriously, perhaps with criminal penalties. This requires not only regulatory and enforcement powers, which already exist on the books, but the political will to see abusers held to account. 

Third, there needs to be a much greater emphasis on post-market surveillance; that is, continued testing, auditing, and review of products after they reach consumers. This seems obvious, and from conversations with the uninitiated, I suspect it’s where most people believe the FDA spends most of its effort. But as the regulations are written, and certainly as they’re enforced in practice, post-market surveillance is almost an afterthought. Most of it is handled by the manufacturers themselves, who have an alarming amount of latitude in their reporting. I would submit that it is this, the current lack of post-market surveillance, rather than administrative classifications, that is the gaping hole in our medical regulatory system. 

This is also a much harder sell, politically. Industry hates it, because robust surveillance often prevents manufacturers from quietly cutting manufacturing costs after approval in ways that would reduce product quality, and it means they have to keep on extra QA staff for as long as they remain in business. It’s also expensive for industry because of how the current setup puts most of the cost on manufacturers. Plenty of politicians also hate post-market surveillance, since it is a role that is ideally redundant when everyone does their jobs. When something goes wrong, we say that it shouldn’t have been sold in the first place, and when nothing goes wrong, why would we pay people to keep running tests? 

Incidentally, from what I have been led to understand, this is a major difference between US and EU regulatory processes. Drugs and devices tend to come out in the EU commercially before the US, because the US puts all of its eggs in the basket of premarket approval (and also underfunds that process), while the EU will approve innovations that are “good enough” with the understanding that if problems show up down the line, the system will react and swoop in, and those at fault will be held accountable. As a result, European consumers have safe and legal access to technologies still restricted as experimental in the US, while also enjoying the confidence that abusers will be prosecuted. Most of those new devices are also paid for by the government healthcare system. Just saying. 

Making Exceptions

Normally, I don’t go out of my way for cancer related charities. It’s not that I’m a Scrooge or anything, quite the contrary. The reason, aside from being a college student with no income, is that I’ve spent more than my fair share of time in the hospital myself, especially as a child. For a time I was even enrolled in a hospital school. 

Here’s the thing to understand about hospital school: only a small fraction of children in the hospital end up there. Most children don’t spend long enough to need it. Your average broken bone, tonsillectomy, or even appendicitis is only a few days in the hospital. Moreover, it doesn’t apply to the really bad cases either. Obviously a kid in a coma or similar isn’t going to be in classes, and someone who’s terminal is going to have other priorities. So in practice, you get a narrow overlap of kids who are sick enough to need to stay in the hospital, but healthy enough that education is a going concern and a worthwhile investment. 

Now, having a school where everyone is sick, most kids are disabled, and the cast rotates frequently enough (as kids either get better or worse) that everyone is the new kid, makes for a comparatively egalitarian school environment. Indeed it was the only school I was ever in without any evidence of bullying. But despite this, hospital school is no utopia. Human nature is what it is, and so there were inevitably cliques. Kids who were longtime students, who had the same abilities or symptoms, shared the same diagnoses, and lived on the same ward often formed their own cliques. Naturally, the largest and most influential clique was that of the students who had the most common diagnosis, who all lived on the same specialty ward, and who, by the nature of their disease, got the most public sympathy, leading to charitable donations and better toys for them.
Yes, that’s right. In the hospital school I went to, the cool kids were the oncology patients. 

Obviously, I don’t envy them, even if I might’ve wanted to have been given a diagnosis that I could explain to other people, or gotten some of the perks of the countless charities devoted to childhood cancer. For what it’s worth, they were all very nice people. As cool kids go, they were benevolent overlords. I never had anything against them as individuals, just against the fact that society seemed to recognize them, who were effectively in the same position as me, because they had a particular diagnosis and I didn’t. They had a certain social privilege: cancer was everyone’s enemy, whereas the doctors couldn’t even name what it was that I had. 

There’s a bit more nuance here, but I’m writing this on a schedule, and that’s the short version. As a result, when it comes to organizations that provide resources specific to cancer, I tend to prioritize my contributions elsewhere. With the knowledge that cancer causes are usually fairly well supported and funded, all else equal I will usually donate my money, time, and energy to other causes. I don’t oppose such efforts, but it takes something exceptional to get me on side. 

And the reason I bring all this up is because we’ve reached one of my exceptions. This Star Won’t Go Out is hosting its big Indiegogo fundraiser. I like TSWGO. Their story warms my old grinch heart. And more importantly, I like their angle. Because as much as they’re about sticking it to cancer, and helping kids do so, they are also focused on promoting kindness and good. Even though they’re a cancer charity, their projects benefit people besides just those who have a cancer diagnosis. After all, they have benefited me.

Their community outreach and Project Lovely grants have kept a strong focus on making life better for those who need it, regardless of diagnosis. These may not be the most economically efficient means of converting dollars into comfort, but they are an excellent way of spreading joy, goodwill, love, and compassion. Full disclosure here: I applied for such a grant two years ago for a project based on my own experiences in hospital so long ago. I didn’t make the cut, but my interactions with them made it clear that we both cared about essentially the same things. The projects they eventually did fund have done good in the world, and I support them wholeheartedly.

So if you, like me, are a charitable person, but shy away from donating to big causes, join me in supporting This Star Won’t Go Out’s current fundraiser. They do good with limited resources, and seek to change the world for the better, not just for their tribe, but for everyone.

Year One Done

This week has been quite the ride for me. Even though I’m the best student I know, finals are stressful. Perhaps I am just particularly susceptible to pressure in this vein, but it seems like the mere institution of final exams is profoundly anxiety-inducing. Add in some medical issues and a healthy sprinkling of social drama, and what was supposed to be a slam dunk became a chaotic tumble across the finish line. 

Thus ends my first full year of college. There are many lessons to unpack, and I expect I shall spend the next few weeks doing so. In the meantime, however, I have blocked off some much needed time for glorious nothingness, followed by the launch of my summer projects. 

In other news, I am proud of my brother. This week was his eighteenth birthday. By lucky coincidence, the day of his birthday was also a local budget referendum. 
In the scheme of things, a plebiscite to ratify the issuing of bonds to finance a new air conditioning unit for the recreation center is far from the most important vote. The kind of people who wind up voting in such things tend to be either obsessed with local politics or committed to always voting, no matter the issue. 

My brother and I voted. I did so because I see it as my civic and patriotic duty to cast my ballot for what I believe to be the greater good. And I am proud of him, because he joined me in casting a ballot as a matter of pride, even though he didn’t have to, even though it was a perfectly nice day out and he had other things to do. 

Those are the headlines for this week, or at least the ones that I can get through at the moment. Focusing everything on getting out the other end of the semester has set back my writing and drained my mind of topics for a time. 

The Paradox Model

Editing note: I started this draft several weeks ago. I’m not happy with it, but given the choice between publishing it and delaying again during finals, I went with the former.

In the past few years, I’ve fallen down the rabbit hole of Paradox grand strategy games. Specifically, I started with Cities: Skylines before making the jump to Hearts of Iron IV. Following that, I was successfully converted to Stellaris. I haven’t touched the Victoria, Europa Universalis, or Crusader Kings franchises, but sometimes I think it might be awesome to try out the fabled mega-campaign, an undertaking to lead a single country from the earliest dates in Paradox games through to the conquest of the galaxy in Stellaris. 

But I don’t want to talk about the actual games that Paradox makes. I want to talk about how they make them. Specifically, I want to talk about their funding model. Because Paradox makes really big games. More than the campaigns or the stories, it’s the enormous systems with countless moving parts which, when they gel together properly, create an intricate and finely tuned whole that seems like a self-consistent world. At their best, the systems Paradox builds feel like stepping into a real campaign, constrained not by the mechanics themselves, but by the limits of what you can dream, build, and execute within them. They don’t always hit their mark, and when they miss they can fall into incomprehensible layers of useless depth. But when they work, they’re a true experience, beyond a mere game.

The problem, besides the toll this takes on any device short of a supercomputer, is that building something with that many moving parts is a technical feat. Getting them to keep working is a marvel. And keeping them updated, adding new bits and pieces to deal with exploits as players find them, making sure that every possible decision by the player is acknowledged and reflected in the story of the world they create, adding story to stop the complexity from breeding apathy, is impossible, at least in the frame of a conventional video game. A game so big will only ever have so many people playing it, so the constant patches required to keep it working can’t be sustained merely by sales. 

You could charge people for patches. But that’s kind of questionable if you’re charging people to repair something you sold them. Even if you could fend off legal challenges for forcing players to pay for potential security-related fixes, that kind of breaks the implicit pact between player and publisher. You could charge a monthly flat fee, but besides making it a lot harder to justify any upfront costs, that puts more pressure on the developers to keep pushing out new things to give players a reason to stick around, rather than taking time to work on bigger improvements. Additionally, I’m not convinced the same number of people would shell out a monthly fee for a grand strategy game. You could charge a heck of a lot more upfront. But good luck convincing anyone to fork over four hundred dollars for a game, let alone enough people often enough to keep a full time development team employed. 

What Paradox does instead is sell their games at a price that is steep, but not unheard of by industry standards, and then release a new DLC, or Downloadable Content package, every so often that expands the game for an additional price. The new DLC adds new approaches and mechanics to play around with, while Paradox releases a free update to everyone with bug fixes and some modest improvements. The effect is a steady stream of income for the developers, at a cost that most players can afford. Those that can’t can wait for a sale, or continue to play their existing version of the game without the fleshed-out features. 

I can’t decide myself whether I’m a fan of this setup. On the one hand, I don’t like the feeling of having to continue to pay to get the full experience of a game I’ve already purchased, especially since, in many cases, the free updates mostly serve to make changes that enable the paid features. I don’t like having the full experience one day, and then updating my game to find it now incomplete. And the messaging from Paradox on this point is mixed. It seems like Paradox wants to have a membership system but doesn’t want to admit it, and this rubs me the wrong way. 

On the other hand, with the amount of work they put into these systems, they do need to make their money back. And while the current system may not be good, it is perhaps the best it can be given the inevitability of market forces. Giving players the option to keep playing the game they have, without paying through a membership for new features they may not want, is a good thing. I can accept and even approve of game expansions, even those which alter core mechanics. It helps that I can afford to keep pace with the constant rollout of new items to purchase. 

So is Paradox’s model really a series of expansions, or a membership system in disguise? If it’s a membership system, then they really need to do something about all the old DLCs creating a cost barrier for new players. If my friend gets the base game in a bundle, for instance, it’s ridiculous that, for us to play multiplayer, he either has to shell out close to the original price again for DLCs, or I have to disable all the mechanics I’ve grown used to. If Paradox wants to continue charging for fixing bugs and balancing mechanics, they need to integrate old DLCs into base games, or at the very least, give a substantial discount to let new players catch up for multiplayer without having to fork over hundreds of dollars upfront. 

On the other hand, if Paradox’s model is in fact an endless march of expansions, then, well, they need to make their expansions better. If Paradox’s official line is that every DLC is completely optional to enjoying the game (ha!), then the DLCs themselves need to do more to justify their price tag. To pick on the latest Hearts of Iron IV DLC, Man the Guns: being able to customize my destroyers to turn the whole Atlantic into an impassable minefield, or turning capital ships into floating fortresses capable of smashing enemy ships while also providing air and artillery support for my amphibious tanks, or having Edward VIII show the peasants what happens when you try to tell the King whom he can marry, is all very well and good, but I don’t know that it justifies paying $20. 

The Business Plot

For about a year now I’ve been sitting on a business idea. It’s not, like, the kind of business idea that makes anyone rich. On the sliding scale from lemonade stand to Amazon, this is much closer to the former. I think it will probably turn a profit, but I have no illusions about striking it rich and launching myself onto the pages of Forbes. Looking at the numbers realistically, I will be pleasantly surprised if I can make enough money to keep myself above the poverty line. It’s cliche, but I’m really not in it for the money, I’m in it for the thing.

My idea is for a board game, and it’s a game that I think is interesting to a wide range of people, and also deserves to be made, or at least attempted. I’m not going to share too many details, because if spending high school economics class watching clips of Shark Tank posted to YouTube has taught me anything, it’s that you don’t show your idea off until you have the legal grounds to sue any copycats into oblivion. 

The obvious question of why I don’t sign it to a board game publisher has a complicated answer that relies on context that I don’t want to share at this time. But the long and short of it is that, if I’m doing this, it’s a project that I’m pursuing for personal reasons, and based on what I’ve heard from people who have gone through game publishing, they’re reasons I have cause to fear a publisher won’t respect, or might try to renege on. And besides that, there’s a part of me that’s tickled by the idea of being an entrepreneur and not having to answer to anyone (except, you know, manufacturers, contractors, accountants, taxes, regulators, and of course, consumers). 

So I have the idea. I have a vague idea of what my end goals and expectations are, and some notion of the path towards them. Whether or not I’m “committed” in the sense that the guides say you need to be to be an entrepreneur, it’s an idea that I’d like to see exist, and I’m willing to throw what money and time I can spare at it. If any job could ever motivate me to wake up early, this project would be it. The people I’ve talked to about this privately have told me it’s a good idea, including a business professor who, upon hearing my pitch, immediately endorsed it and tried to convince me to take her class. I think there’s something here.

And that’s about where I got stuck. I managed to make a prototype last summer, shortly after the idea popped into my head, and I’ve been play-testing and reviewing the rules a bit, but this is just circling the problem, and I know it. My next step is that I need to move forward on iterating the prototype towards a sellable product, and on looking into getting some cursory idea of costs. In practice this means getting quotes from manufacturers, which means I need some kind of email account and web presence. 

Theoretically, I could throw up a Gmail account and launch that process today (well, not today; I have homework, but basically any time). But conducting such business under my own name, or even under an arbitrary trade name, is both murky for tax purposes and, depending on whom you consult, somewhat legally risky, since it puts all the liability squarely on my head. It’s also less clean than setting up a proper web platform with a fancy custom URL and a logo to handle everything centrally. It’s the same reason I have a Patreon already set up for this blog- even if I’m not raking in the big bucks today, I’d rather be prepared for that day with the proper infrastructure than have to scramble if I suddenly go viral. 

But setting up a website and branding materials effectively demands that I have, at the very least, an established brand name that can be trademarked. And doing that requires that I have the relevant paperwork filed to incorporate a business. It’s something of a point of no return, or at least a point past which returning becomes increasingly difficult and expensive. To this end I have spent quite a few free hours perusing the available information on starting up a startup and building a business. And let me just say, for all the talk about making life easy for small businesspeople, and all the lip service paid to encouraging entrepreneurship, I expected it to be a heck of a lot more straightforward. Even the Small Business Administration, whose entire mandate is to make starting new businesses as painless as possible, is a convoluted and self-contradictory mess.

The problem isn’t so much a lack of available information as a lack of concrete information I can act upon. The website can’t seem to decide whether it wants to be written for laypeople or lawyers, and in failing to pick a side is indecipherable to both. Most government websites are difficult to navigate, but I would’ve expected an agency whose sole job is to make life easier to be less egregious. 

But it’s not that I can’t find a form to fill out. Again, I could always pick a name and a business structure out of a hat and plow forward. The differences between a partnership and an LLC at the size I’m looking at, while not irrelevant, are perhaps less of a deciding factor than they’re made out to be. The problem is figuring out a way to start this project that lets me keep my dependent status and hence my health insurance. Because while I’m willing to throw time and money and endure paperwork for this idea, I’m not willing to go without life support. Or rather, I’m not able to go without life support. 

I think there’s a loophole that lets me have my cake, and also not die an agonizing death. But I’m not an expert in this field, and this isn’t a risk I want to take. If it’s a question between starting a business that I earnestly believe will change the world for the better, if only incrementally, and keeping my life support, I’m going to pick the latter. This is really frustrating. I mean, I’m still far better off than the people who have to pick between medicine and food, but choosing between medicine and chasing an opportunity is grating. 

But what really gets me is the fact that this isn’t a problem in other countries, because other countries have guaranteed healthcare, so that potential entrepreneurs can try their hand without risking their lives. Many of these countries also have free education, transport infrastructure, and in some cases free government advisors for new businesses, all of which lower entry barriers for startups. But in the land of the free markets, we apparently hate entrepreneurs. 

I digress. The point is, I’ve hit an entirely political roadblock, and it’s extremely discouraging. I haven’t set this project aside yet, because despite everything I still believe in it. Part of the reason I’m writing this is to remind myself of the excitement I feel to see this through. My hope is that I’ll be able to make some progress on this before summer. But we’ll see what’s possible for an entrepreneur in this allegedly business friendly country.

Millennial Nostalgia

Evidently the major revelation of 2019 is that I am getting old. 

When they started having 1990s music as a distinct category, and “90s nostalgia” became an unironic trend in the same vein as people dressing up in the styles of the Roaring 20s, or whatever the 50s were, I was able to brush it aside. After all, most of the 90s were safely before I was born. Besides, I told myself, there are clear cultural and historical delineations between the 90s they hearken back to and my own era. I mean, after all, the 90s began with the Soviet Union still in existence, and much of their cadence was defined by the vacuum created immediately after its collapse. 

If this seems like an odd thing to latch onto, perhaps it’s worth spelling out that for me, growing up, the Soviet Union became a sort of benchmark for whether something is better considered news or history. The fall of the Soviet Union was the last thing mentioned on the last page of the first history textbooks I received, and so in my head, if something was older than that, it was history rather than just a thing that happened. 

Anyways, I reasoned, the 90s were history. The fact that I recognized most of the songs from my childhood I was able to safely reason away as a consequence of the trans-Pacific culture delay of living in Australia. Since time zones make live broadcasts from the US impractical, and VHS, CDs, and DVDs take time to ship across an ocean, Australia has always been at least a few months behind major cultural shifts. The internet age changed this, but not fundamentally, since media companies have a potential financial benefit if they are able to stagger release dates around the world to spread out hype and profits. So of course I would recognize some of the songs, even perhaps identify with some of them from childhood, that were listed as being from an age I considered mentally closer to antiquity than modernity. 

The first references to “2000s” culture I can recall came as early as 2012, but most of these struck me as tongue-in-cheek. A witty commentary on our culture’s tendency to group trends into decades, and attribute an overriding zeitgeist upon which we can gaze through rose-tinted retrospect, and from which we can draw caricatural outfits for themed parties. I chuckled along and brushed aside the mild disconcertion. Those few that weren’t obviously tongue-in-cheek were purely for categorization; grouping songs by the year of release, rather than attempting to bundle together the products of my childhood to put them on the shelf next to every other decade in history, and treat them with about the same regard. 

A few stray references to “new millennium” or “millennial” culture I was able to dismiss, either on the grounds that it was relying on the labels provided by generational theory, or because it was referring not to the decade from 2000-2010, but to that peculiar moment right around January 1st, 2000, or Y2K if you prefer, between when the euphoria of the end of the Cold War made many proclaim that we had reached the end of history, and when the events of September 11th, 2001 made it painfully clear that, no, we hadn’t.

This didn’t bother me, even if the references and music increasingly struck home. It was just the cultural delay, I reasoned. The year 2000 was, in my mind, really just an epilogue to the 1990s, rather than a new chapter. Besides that, I couldn’t remember the year 2000. I mean, I’m sure things that I remember happened in that year, but there aren’t any memories tied to a particular date before 2001. 
Unfortunately for me and my pleasant self-delusions, we’ve reached a tipping point. Collections of “2000s songs” are now being manually pulled together by connoisseurs and dilettantes with the intent of capturing a historical moment now passed, without the slightest wink or trace of irony. There are suggestions of how to throw a millennial party in the same way as one might a 20s gala, without any distinction between the two.

Moreover, and most alarming to my pride, there are people reading, commenting on, and sharing these playlists and articles, saying they weren’t yet born when the music came out, but wish they had been.
While I’m a bit skeptical that the people leaving these comments are actually so young (I suspect they were already born, but just weren’t old enough to remember or be listening to music), it’s not impossible. For some of the songs, I remember watching the premiere of the music video with friends; a person born that year would now be old enough that in many states they could drive themselves to their 2000s-themed party. In parts of Europe, they’d be old enough to drink at the party. 
We’ve now reached a point where I can no longer pretend that my entire life has happened recently, in the same historical era. Much of the music and culture I recall being new, cutting edge, and relevant is not only no longer hip and happening, but has come out the other end, and is now vintage and historical. In a single sentence, I am no longer young, or at least not as young as I would like to think myself.

In a sense, I knew this was coming. But having it illustrated is still a gut punch. It’s not that thinking of myself as young and with it is a part of my identity, and that this shift has shaken part of me. I know I’m not the live fast, die young party animal our culture likes to applaud and poke fun at. I never have been, and probably never will be. That ship hasn’t so much sailed as suffered failure on launch, with the champagne bottle at the ceremony causing a valve to come loose in the reactor room. 

I might have held out hope that it could someday be salvaged; that a few years from now, when my life support technology is more autonomous, I would have the opportunity to go to parties and get blackout drunk without having to worry that between medication side effects, and the risk of life support shenanigans while blacked out, the affair would probably kill me. But if that goes down as the tradeoff- if I never go to a real five-alarm teen party, but instead I live to 100 -I could grit my teeth and accept it.

What does bother me is the notion that I am getting properly old. To be more specific, the notion that I’ve stopped growing up and have started aging is alarming, because it suggests that I’ve hit my peak, at least physiologically. It suggests that things aren’t going to get any better than they are now, and are only going to get worse with time. 

This is a problem. My back and joints already ache enough on a good day to give me serious pause. My circulation is poor, my heart and lungs struggle to match supply and demand, and my nervous system has a rebellious streak that leads my hands to shake and my knees to buckle. My immune system puts me in the same category as a chemotherapy patient, let alone an elderly person. In short, I don’t have a lot to lose should my faculties start to decline. So long as I’m young, that’s not a problem. There remains the possibility that I might grow out of some of my issues. And if I don’t, there’s a good chance that medical technology will catch up to meet me and solve my problems. 

But the medical advances on the table now promise only to halt further degradation. We have some ideas about how to prevent age-related tissue damage, but we still won’t be able to reverse harm that’s already been done. People who are still young when the technology is discovered might be able to live that way forever, but short of another unseen and unimagined breakthrough, those who are old enough to feel the effects of aging won’t be able to be young again, and might simply be out of luck. 

A clever epistemologist might point out here that this problem isn’t actually unique. The speculative technology angle might add a new dimension to the consideration, but the central issue is not a novel dilemma. After all, this existentialist dread at one’s own aging and mortality is perhaps the oldest quandary of the human experience. I may perhaps feel it somewhat more acutely relative to where my chronological age would place me in modern society, but my complaints are still far from original.

Unsurprisingly, the knowledge that my problems are older than dirt, and have been faced by every sapient being, is not comforting. What solidarity I might feel with my predecessors is drastically outweighed by my knowledge that they were right to fear age, since it did get them in the end. 

This knowledge does contain one useful and actionable nugget of wisdom- namely, that if the best minds of the last twelve millennia have philosophized inconclusively for countless lifetimes, I am unlikely to reach a satisfactory answer on my own. Fighting against the tide of time, railing against 2000s nostalgia, is futile and worthless. Acting indignant and distressed about the whole affair, while apparently natural to every generation and perhaps unavoidable as a matter of psychology, is not a helpful attitude to cultivate. The only thing left, then, is to embrace it.

Half Mast

If my blog had a flag, it would be at half-mast today. For the third time in recent memory, a friend of a friend has been killed in a mass shooting, marking the sixth such event to which I’ve had some kind of personal connection. The victim, in this case, was a father, the sole breadwinner for a family with special needs, who moved in the same communities as I do. There is now an open question as to how the mother, who has so far stayed home to manage the children’s health, will make ends meet with the cost of life support. 

This is, of course, only the latest tragedy in a series of horrors, about which I have made my feelings quite clear: this is unacceptable. It is a national disgrace that we allow this level of violence, which would be unacceptable even in a failed state, to continue with only token measures taken against it, in what is allegedly the greatest country on earth. Our continued inability to act decisively is an affront to the victims and survivors. 

I believe I have already made my position on guns and regulations about them quite clear: we need to do a lot better in a hurry. There is little more for me to say that isn’t beating a dead horse. But the lives of my comrades demand, at the barest minimum, a societal conversation more in depth than mere thoughts and prayers. And since I have little faith that the powers that be will fulfill this obligation, I suppose it falls on me to add to the conversation.

One of the things I have heard said in recent weeks is that shootings are a meme issue- that is, they generate a disproportionate amount of attention and media coverage compared to the number of actual deaths in context. This is a hard claim to argue against, epistemologically. After all, how do you argue that something isn’t receiving too much attention? Relative to what? Claiming that the news doesn’t spend enough time on boring, everyday items seems to misrepresent the function of news- to report things that are newsworthy.

But beyond this, I feel it ignores a larger point. Saying that shootings are a meme issue requires an acknowledgement that it is, at the very least, a thing that happens. It is an issue, not just a one-time tragedy (or, for that matter, a two-, three-, or however-many-time tragedy). It is arguably not an issue of the same numerical scale as global poverty, or food prices, or nuclear proliferation. But even if shooting deaths are not as numerous as, say, cancer deaths, it is an issue that causes an unacceptably large number of totally unnecessary deaths.

And the deaths are unnecessary. There is no such thing as an unavoidable death by shooting. Mass shootings, terrorism, assassinations, accidents, even ordinary crime where guns are involved, are totally preventable with far more stringent restrictions on civilian weapon ownership, and better training and resources for law enforcement and intelligence services to enforce these measures. Proliferation of weapons is not some unavoidable part of human nature, it is a hallmark of a failed state. 

What really bugs me, though, isn’t knowing that we could and must do better, but seeing how we do exactly that for other issues. It’s not that Americans have some deep and inflexible fixation with libertarian ideals, that we are willing to stoically accept that it is ultimately the price we pay for a just and free society, to have some people die from the abuse of freedom rather than engage the slippery slope of restricting it. That would be an argument that I would ultimately disagree with on the basis of moral priorities, but could at least acknowledge for its self-consistency. But it isn’t that. Americans aren’t absolute in their freedom. We set aside our principles all the time, for all different causes, from having working roads and schools, to letting police and intelligence agencies treat our online lives as exempt from our rights as citizens, to anything involving any kind of travel.

Air travel is the most obvious example. We, as a society, decided almost twenty years ago that no act of airline terrorism on American soil was an acceptable price to pay for individual liberty. As a result, we took drastic actions to prevent a relatively rare scenario that kills fewer people than guns, indeed fewer than lightning strikes. We didn’t have to declare War on Terror; we could’ve tracked down the individual perpetrators, and then said that trying to prevent every madman from getting onto a plane was an impossible task in a non-totalitarian country. But we decided instead that this was a matter of principle. That we couldn’t afford to do anything less. We decided to take on a meme issue, and we dealt with it.

It would not be beyond our capacity to eliminate gun violence. If we were committed, in the same way we are committed to stamp out terrorism, it would not be difficult. Instead, we are told that scores of schoolchildren, teachers, fathers, mothers, friends, and first responders being killed every year is unavoidable, while those saying so live and work behind checkpoints and soldiers to ensure they will never face the consequences of failing to act.

Operation Endrun

I find myself with very little to say these days. Not because I don’t have anything to say, but because I don’t have the time or energy to put it in order. This is a recurring problem for me, particularly of late. Of course, a lot of it is because I’ve been kept on the back foot for a few weeks in a row now. Over the course of the last three weeks I’ve seemingly fallen prey to Murphy’s Law, with things piling up and compounding. I haven’t slept well, I’ve been having trouble thinking straight, and while I’ve avoided missing any particularly egregious deadlines in my classes, I feel more like I’ve been treading water than swimming forward. 

Amid all of this, I missed putting up a post last week without noticing. I was actually pretty sure I had done something. In fact, if you’d asked me at the end of last week whether I had posted anything, I would have been fairly sure that I had. I probably would have bet money on it. And I would’ve been wrong. This isn’t the first or only thing I’ve lost track of in the last few weeks, but it is arguably the biggest; or perhaps better stated, one of the things more resistant to forgetting that I nevertheless forgot.

This is bad. Distraction and confusion on this level is dangerous. This time around it was a post that didn’t go up. Next time it might be a school assignment, which would be bad. Worst case scenario, I might space out and forget about my life support routine. I don’t think it would kill me, but that’s the kind of risk I try very hard not to take. Whatever is causing this fugue, whether that’s a lack of sleep, too much slacking and procrastinating, or not enough productive projects to focus on, needs to be brought under control. People with my condition don’t have the luxury of being distracted. I know this. 

Of course, saying something is bad and doing something to solve it are very different. And it’s difficult to turn my life upside down in order to find and eliminate the source of a problem while also going ahead with schoolwork and other plans. So, what’s the plan? 

Well, first I need to get ahead in my schoolwork. The plan here is twofold: first, to make sure I’m covered for traveling next weekend. Second, getting ahead will give me the breathing room I need to begin the next set of endeavors. This kind of planning would’ve been impossible in high school, because my teachers were never so organized as to provide expectations ahead of time of what I needed to do, which I think was partially responsible for the tendency for things to snowball. College, however, has proven far easier to navigate, with important items being listed on syllabi well in advance. Consequently, it is possible for me to make plans that include completing work ahead of time. 

Second, I need to get my sleep schedule under control. This is a pain, because the only way I have found to reliably enforce a sleep schedule is to wake up early, and force myself out of bed, so that by the time night falls, I feel exhausted enough to fall asleep. This is always a miserable process, because, as I have mentioned previously, I am not a morning person. Waking up early is physically painful to me. I have designed most of my current life around the premise of never needing to wake up before 11am. This will be a sacrifice. But it is necessary.

Third, I need to get up and around more. Winter often has the effect of causing me to spend most of my days inside and sitting down, since I don’t tolerate the cold well enough to go for walks. This, I suspect, is bad for concentration, and it certainly weakens my stamina over time- something I can scarcely afford to lose.

Will I actually accomplish these things? Dunno. But by writing them down and posting them, I’m more likely to try. 

Fully Automated Luxury Disk Jockeys

Here’s an interesting observation with regards to automation- with the exception of purely atmospheric concerns, we have basically automated the disk jockey, that is, the DJ, out of a job. Pandora’s music genome project, Google’s music bots, Apple’s Genius playlists, and whatever system Spotify uses, are close enough for most everyday purposes. 

Case in point- my university got it into its head that students were becoming overwhelmed with finals. This is obviously not a major revelation, but student stress has become something of a moral and political hot issue of late. The purported reason for this is the alarmingly high rate of suicides among college students- something which other universities have started to act on. In Canada, for instance, schools have added more breaks throughout the year during the weeks when suicide rates peak. Other schools have taken more pragmatic measures, like installing suicide nets under tall bridges. 

Of course, the unspoken reason for this sudden focus on mental health is that the national political administration has framed school shootings as a matter of mental health, rather than, say, a reason to make guns harder to get. My university, like my hometown, lies in the shadow of Newtown, and plenty of students here lost people firsthand. Most of us went to schools that went on lockdown that day. The university itself has had two false alarms, and police teams clad in armor and carrying machine guns already patrol the campus regularly. So naturally, the university is throwing money at mental health initiatives. 

Rather than do something novel, like staggering exam schedules, routinely auditing to keep teachers from assigning students too much work, or even abolishing final exams altogether as has occasionally been proposed, the powers that be settled on the “Stress Free Finals Week” Initiative, whereby the school adds more events to the exam week schedule. I’m not sure how adding more social events to a time when students are already pressed to cram is supposed to help, but it’s what they did. 

As a commuter and part time student, most of this happens on the periphery for me. But when, through a series of events, I wound up on campus with nothing to do and no ride home, I decided I may as well drop by. After all, they were allegedly offering free snacks if I could find the location, and being a college student, even though I had just gorged myself on holiday cookies and donuts at the Social Sciences Department Holiday Party, I was already slightly peckish. They were also advertising music.

I got there to find the mid-afternoon equivalent of a continental breakfast- chips, popcorn, donuts, and cookies. Perhaps I had expected, what with all the research on the gut-brain connection, that an event purporting to focus on mental health would have a better selection. But no matter. There was a place to sit away from the sleet outside, and free snacks. The advertised DJ was up front blasting music of questionable taste at a borderline objectionable volume, which is to say, normal for a college campus.

Except the DJ wasn’t actually playing the music. He didn’t interact with any of the equipment on the table during the time I watched, and on several occasions he surrendered any pretense of actually working by leaving the table to raid the snacks, and then sat down at another table to eat while the music handled itself. No one else seemed to think this strange, but it struck me to think that for all I know, he might have just set up a YouTube Music playlist and let the thing run, and earned money for it. Heck, he wouldn’t even have to select the playlist manually- bots can do that part too. 

There are two takeaway observations here. The first is that computer automation is happening here and now; the first wave is already hitting. I feel it worth noting that this isn’t just manual labor being made obsolete by mechanized muscle. While it might not exactly be white collar, a modern DJ is a thinking job. Sure, it doesn’t take a genius to hit shuffle on iTunes, but actually selecting songs that match a mood and atmosphere for a given event, following up with an appropriate song, and knowing to match the differing volumes on recordings with the desired speaker volume takes at least some level of heuristic thinking. We’ve made it fairly low-hanging fruit for bots in the way we already label songs by genre, but the fact that we’re already here should be worrying for people who are worried about automation-driven mass unemployment. 

The second takeaway is a sort of caveat to the first, namely: even if this guy’s job was automated, he still got paid. An argument can be made that this is a function of bureaucratic inefficiency and an enterprising fellow playing the system in order to get paid to be lazy. And while this would be a fair observation, there’s another interpretation that I think is yet more interesting. Because the way I see it, it’s not like the university misunderstood what they were buying. They advertised having a DJ. They could have found any idiot to hook up an iPhone speaker and press shuffle, but instead they hired someone. 

There was a cartoon a while back that supposed that in the future, rather than automation causing a total revolution in man’s relationship with work, we would simply start to put more value on more obscure and esoteric commodities. The example provided was a computer running on “artisanal bits” – that is, a Turing-complete setup of humans holding up signs for ones and zeroes. The implication is that increasing wealth inequality will drive more artificial distinctions in patterns of consumption. Rich people become pickier the richer they get, and since they’re rich, they can drive demand for more niche products to provide work for the masses.

This hypothesis would fit with current observations. Not only would it explain why institutions are still willing to hire human DJs in the age of music bots, but it would explain why trends like organic foods, fair trade textiles, and so on seem to be gaining economic ground. It’s an interesting counterargument to the notion that we’re shaping up for mass unemployment.

I still think this is a horribly optimistic outlook. After all, if the owning minority can’t be bothered to pay living wages in the process of making their wealth in the first place, why would they feel a need to employ a substantial number of people after the fact? There’s also a fundamental limit on how much a single person can consume*, and the number of people who can be gainfully employed in service of a single person’s whims has a functional limit, after which employment stops being accurately described as work, and is more like private welfare. Which makes this not so much a repudiation of the problem of automation-induced mass unemployment, as another possible solution. Still, it’s a thing to keep in mind, and for me, a good reminder to pay attention to what’s actually happening around me as well as what models and experts say should happen.

*Technically, this isn’t true. A person with infinite money could easily spend infinite money by purchasing items for which the prices are artificially inflated through imposed scarcity, speculative bubbles, and other economic buzzwords. But these reflect peculiarities in the market, and not the number of people involved in their production, or the quality of their work.