World Health Day

The following message is part of a World Health Day campaign to raise public awareness and resources in light of the global threat posed by COVID-19. If you have the resources, please consider contributing in any of the ways listed at the end of this post. Remember to adhere to current local health guidelines wherever you are, which may differ from those referenced in this post.

Now that the world has woken up to the danger we face in the COVID-19 pandemic, and world leaders have begun to grapple with the problem in policy terms, many individuals have justifiably wondered how long this crisis will last. The answer is, we don't know. I'm going to repeat this several times, because it's important to come to terms with it. For all meaningful purposes, we are living through an event that has never happened before. Yes, there have been pandemics this bad in the distant past, and yes, there have been various outbreaks in recent memory, but there has not been a pandemic this deadly and this contagious, which we have failed to contain so spectacularly, recently enough to use as a clear point of reference. This means that every prediction is not just speculation, but speculation born of an imperfect mosaic.

Nevertheless, it seems clear that unless we are willing to accept tens of millions of deaths in every country, humanity will need to settle in for a long war. Judging by the language of the US President and Queen Elizabeth, the metaphor is apt. Whether "long" means a few months or well into next year will depend on several factors, among them whether a culture which has for many decades been inculcated with the notion of personal whimsy and convenience is able to adapt to collective sacrifice. The longer we take to accept the gravity of the threat, the weaker our response will be, and the longer it will take us to recover. Right now all of humanity faces a collective choice. Either we stubbornly ignore reality, and pay the price in human tragedy of heretofore unimaginable proportions, with repercussions for decades to come, or we listen to experts and hunker down, give support to those who need it, and help each other through the storm.

For those who look upon empty streets and bare shelves and proclaim the apocalypse, I have this to say: it is only the apocalypse if we make it such. Granted, it is conceivable that if we lose sight of our goals and our capabilities, either by blind panic or stubborn ignorance, we may find the structures of our society overwhelmed, and the world we know may collapse. This is indeed a possibility, but a possibility which it is entirely within our collective capacity to avoid. The data clearly shows that by taking care of ourselves at home, and avoiding contact with other people or surfaces, we can slow the spread of the virus. With the full mobilization of communities, we can starve the infection of new victims entirely. But even a partial slowing of cases buys us time. With that most valuable of currencies, we can expand hospital capacity, retool our production, and focus our tremendous scientific effort towards forging new weapons in this fight. 

Under wartime pressure, the global scientific community is making terrific strides. Every day, we are learning more about our enemy, and discovering new ways to give ourselves the advantage. Drugs which prove useful are being deployed as fast as they can be produced. With proper coordination from world leaders, production of these drugs can be expanded to give every person the best fighting chance should they become sick. The great challenges now are staying the course, winning the battle for production, and developing humanity’s super weapon.

Staying the course is fairly simple. For the average individual not working essential jobs, it means staying home, avoiding contact as much as possible, and taking care to stay healthy. For communities and organizations, it means encouraging people to stay at home by making this as easy as possible. Those working essential jobs should be given whatever resources they need to carry on safely. Those staying at home need to have the means to do so, both logistically and psychologically. Logistically, many governments are already instituting emergency financial aid to ensure the many people out of work are able to afford staying home, and many communities have used volunteers or emergency workers such as national guard troops to support deliveries of essentials, in order to keep as many people as possible at home. Psychologically, many groups are offering online activities, and many public figures have taken to providing various forms of entertainment and diversion.

Winning the battle for production is harder, but still within reach. Hospitals are very resource intensive at the best of times. Safety in a healthcare setting means the use of large amounts of single-use disposable materials: not only drugs and their delivery mechanisms, but also personal protective equipment such as masks, gowns, and gloves. If COVID-19 is a war, ventilators are akin to tanks, but PPE is akin to ammunition. Just as it is counterproductive and harmful to ration how many bullets or grenades a soldier may use to win a battle, so too is it counterproductive and harmful to insist that our frontline healthcare workers make do with a limited amount of PPE.

The size and scope of the present crisis, taken with the amount of time we have to act, demands a global industrial mobilization unprecedented during peacetime, and unseen in living memory. It demands either that individuals exhibit self-discipline and a regard for the common good, or that central authorities control the distribution of scarce necessities. It demands that we examine new ways of meeting production needs while minimizing the number of people who must be kept out working essential jobs. For the individual, this mobilization may require further sacrifice; during the mobilization of WWII, certain commodities such as automobiles, toys, and textiles were unavailable or out of reach. That was the price we paid to beat back the enemy at the gates, and today we find ourselves in a similar position. All of these measures are more effective if taken calmly in advance by central governments; if they are not, they will undoubtedly be taken desperately by local authorities.

Lastly, there is the challenge of developing a tool which will put an end to the threat of millions of deaths. In terms of research, there are several avenues which may yield fruit. Many hopes are pinned on a vaccine, which would grant immunity to the uninfected, and allow us to contain the spread without mass quarantine. Other researchers are looking for a drug, perhaps an antiviral or immunomodulator, which might make COVID-19 treatable at home with a pill, much like Tamiflu blunted the worst of H1N1. Still others are searching for antibodies which could be synthesized en masse, to be infused into the blood of vulnerable patients. Each of these leads requires a different approach. However, they all face the common challenge of not only proving safety and effectiveness against COVID-19, but also giving us an understandable mechanism of action.

Identifying the "how and why" is not merely of great academic interest, but a pressing medical concern. Coronaviruses are notoriously unstable and prone to mutation; indeed, there are those who speculate that COVID-19 may be more than one strain. Finding a treatment or vaccine without understanding our enemy exposes us to the risk of other strains emerging, undoing our hard work and invalidating our collective sacrifices. Cracking the COVID-19 code is a task of great complexity, requiring a combination of human insight and brilliance, bold experimentation, luck, and enormous computational resources. And as with the Allied efforts against the German Enigma, today's computer scientists have given us a groundwork to build on.

Unraveling the secrets of COVID-19 requires modeling how viral proteins fold and interact with other molecules and proteins. Although protein folding follows fairly simple rules, the computational power required to actually simulate it is enormous. For this, scientists have developed the Folding@Home distributed computing project. Rather than constructing a new supercomputer which would exceed all past attempts, this project aims to harness the power of unused personal computers in a decentralized network. Since the beginning of March, Folding@Home has focused its priorities on COVID-19 related modeling, and has been inundated with people donating computing power, to the point that it had to get help from other web services companies because simulations were being completed faster than its web servers could assign them.

At the beginning of March, the computing power of the entire project clocked in at around 700 petaflops (FLOPS, or Floating Point Operations Per Second, being a standard unit of computing power). During the Apollo moonshot, a NASA supercomputer would average somewhere around 100,000 FLOPS. Two weeks ago, the project announced a new record in the history of computing: more than an exaflop of sustained distributed computing power, or 10^18 FLOPS. With the help of Oracle and Microsoft, by the end of March, Folding@Home exceeded 1.5 exaflops. These historic and unprecedented feats are a testament to the ability of humanity to respond to a challenge. Every day this capacity is maintained or exceeded brings us closer to breaking the viral code and ending the scourge.
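Those numbers are easier to appreciate side by side. A quick back-of-the-envelope comparison, using the ballpark figures above:

```python
# Rough scale comparison, using the ballpark figures cited above.
APOLLO_FLOPS = 1e5          # ~100,000 FLOPS for an Apollo-era NASA machine
FAH_MARCH_FLOPS = 700e15    # ~700 petaflops at the beginning of March
FAH_PEAK_FLOPS = 1.5e18     # ~1.5 exaflops by the end of March

growth = FAH_PEAK_FLOPS / FAH_MARCH_FLOPS    # roughly doubled in a month
vs_apollo = FAH_PEAK_FLOPS / APOLLO_FLOPS    # ~1.5e13

print(f"Growth during March: {growth:.1f}x")
print(f"Equivalent Apollo-era supercomputers: {vs_apollo:.1e}")
```

In other words, the network grew to the equivalent of some fifteen trillion Apollo-era supercomputers, in a single month.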

Humanity’s great strength has always lain in our ability to learn, and to take collective action based on reason. Beating back COVID-19 will entail a global effort, in which every person has an important role to play. Not all of us can work in a hospital or a ventilator factory, but there’s still a way each of us can help. If you can afford to donate money, the World Health Organization’s Solidarity Fund is coordinating humanity’s response to the pandemic. Folding@Home is using the power of your personal computer to crack the COVID-19 code. And if nothing else, every person who stays healthy by staying home, washing hands, wearing homemade masks and keeping social distance is one less person to treat in the ICU.

This Was A Triumph

Today I am happy to announce a new milestone. As of today I have received from my manufacturer the authorization code to initiate semi-closed loop mode on my life support devices. This means that for the first time, my life support devices are capable of keeping me alive for short periods without immediate direct human intervention. For the first time in more than a decade, it is now safe for me to be distracted by such luxuries as homework, and sleep. At least, for short periods, assuming everything works within normal parameters. 

Okay, yes, this is a very qualified statement. Compared to the kind of developments which are daily promised by fundraising groups and starry-eyed researchers, this is severely underwhelming. Even compared solely to technologies which have already proven themselves in other fields and small-scale testing, the product which is now being rolled out is rather pathetic. There are many reasons for this, from the risk aversion of industry movers, to the glacial pace of regulatory shakers, to a general shortage of imagination among decision makers. It is easy to find reasons to be angry and feel betrayed that the US healthcare system has once again failed to live up to its promise of delivering breakneck innovation and improvement.

Even though this is disappointing compared to the technological relief we were marketed, I am still excited about this development. First, because it is a step in the right direction, even if a small one, and any improvement is worth celebrating. Second, and chiefly, because I believe that even if this particular new product is only an incremental improvement over the status quo, and pales in comparison to what had been promised for the past several decades, the particular changes represent the beginning of a larger shift. After all, this is the first iteration of this kind of life support device which uses machine learning not merely to enable a fail-safe against medication overdoses, but to actually make proactive treatment decisions without human oversight.

True, the parameters for this decision making are remarkably conservative, some argue to the point of uselessness. The software will not deploy under anything short of perfect circumstances; its treatment targets fall short of most clinical targets, let alone best practices; the modeling is not self-correcting; and the software cannot interpret human intervention, which makes it mutually exclusive with aggressive treatment by a human.

Crucially, however, it is making decisions instead of a human. We are over the hump on this development. Critiques of its decision-making skill can be addressed down the line, and I expect that once the data is in, it will be a far easier approval and rollout process than the initial version. But unless some new hurdle appears, as of now we are on the path towards full automation.

Truth Machine

I find polygraphs fascinating. The idea of using a machine to exploit bugs in human behavior to discern objective truth from falsehood is an irresistible notion to a story-minded person like me. To have a machine that can cut through the illusions and deceptions of human stories is just so metaphorically resonant. Of course, I know that polygraphs aren’t really lie detectors, not in the way they’re imagined. At best, they monitor a person for signs of physiological stress as a reaction to making up lies on the spot. This signal is easily lost in background noise, and easily sidestepped by rehearsing a convincing lie ahead of time.

A large part of the machine’s job is to make a subject afraid to lie in the first place, which makes lies easier to spot. It doesn’t work if the subject believes the lie, or doesn’t experience stress while telling it, nor is it effective on people who fall outside of some basic stereotypes about liars. Eye surgery, heart arrhythmia, brain damage, and ambidexterity can all throw off a polygraph to the point of uselessness. At worst, polygraphs provide a prop for interrogators to confirm their own biases and coerce a subject into believing they’re trapped, whether or not they’re actually guilty, or else to convince jurors of an unproven circumstantial case.

Still, they’re fascinating. The kabuki theater act that interrogators put on to maneuver the subject into the correct state of mind and find a chink in the psychological armor, the different tactics, the mix of science and showmanship: it’s all exciting to explore. I enjoy reading through things like polygraph manuals, and the list of questions used in interviews of federal employees for security clearance.

What’s interesting is that most of the questions are just bad. Take “Prior to [date], did you ever do anything dishonest?” After all, who decides what counts as dishonest? Is a dishonest act only an action committed in service of a direct, intentional lie, or is it broader? Does omission count as an act in this context? Is dishonesty assessed at the time of the act, or in retrospect? Would a knowing deception made in the interest of an unambiguously moral end (for example, misdirecting a friend about a Christmas present) constitute a dishonest act?

These questions are listed in the manual as “No-answer Comparison Questions”, which, if I understand the protocol correctly, are supposed to be set up such that a subject will always answer “No”, and most of the time, will be lying. The idea here is to establish a baseline, to get an idea of what the subject looks like when lying. The manual suggests that these questions will always be answered with “no” because, earlier in the interrogation, the interrogator will have made clear that it is crucial for subjects to give the impression of being truthful people. The government, the interrogator is instructed to say, doesn’t want to work with people who lie or cheat, and so it is very important that people going through this process appear honest and strait-laced.
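The baseline idea can be sketched in a few lines of code. This is purely my own illustration, not anything from the manual: record a physiological reading (say, heart rate) during the comparison questions, then flag relevant-question readings that deviate sharply from that baseline. The numbers and the two-sigma threshold are illustrative assumptions.

```python
import statistics

def flag_reactions(baseline, readings, sigmas=2.0):
    """Flag readings that deviate more than `sigmas` standard
    deviations from the mean of the baseline measurements."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [abs(r - mean) > sigmas * stdev for r in readings]

# Hypothetical heart-rate readings (beats per minute)
comparison_qs = [72, 75, 74, 73, 76]   # baseline, taken while "lying"
relevant_qs = [74, 90, 73]             # readings on the questions that matter

print(flag_reactions(comparison_qs, relevant_qs))  # [False, True, False]
```

Which is exactly why the machine fails on a subject whose baseline is noisy, or who has no reaction to the relevant questions at all.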

Of course, this is hogwash. The government does want people who lie, and it wants people who are talented at it. A general needs to be talented at deception. An intelligence operative needs to keep secrets. Any public figure dealing with sensitive information needs to be able to spin and bend the truth when national security demands it. Even the most morally absolutist, pro-transparency fiend understands that certain government functions require discretion with the truth, and these are exactly the kind of jobs that would involve polygraph tests beforehand. 

The government’s polygraph interrogation protocols rely on subjects swallowing this lie, that they need to keep a consistent and presentable story at the expense of telling the truth. They also rely on the subject recognizing that they are lying and having a reaction, since a polygraph cannot in itself divine material truths, but works only by studying reactions. For it to really work, the subject must also be nervous about lying. This too is set up ahead of time; interrogators are instructed to explain that lying is a conscious and deliberate act, which inspires involuntary physiological fear in the subject. This is arguably half true, but mostly it sets up a self-fulfilling prophecy in the mind of the subject.

It’s pretty clear that the modern polygraph is not a lie detector. But then again, how could it be? Humans can barely even agree on a consistent definition of a lie within the same language and culture. Most often we tie in our definition of lying with our notions of morality. If you used deception and misrepresentation to do a bad thing, then you lied. If you said something that wasn’t true, but meant nothing by it, and nothing bad came out of it, well then you were probably just mistaken. I don’t want to make this post political, but this trend is obvious if you look at politics: The other side lies, because their ranks are filled with lying liars. By contrast, our side occasionally misspeaks, or is misinterpreted.

This isn’t to say that there’s no such thing as truth or lies, just that we can’t seem to pin down a categorical definition, which you do need if you’re going to program a machine to identify them. We could look for physiological reactions involved in what we collectively call lying, which is what polygraphs purport to do, but this just kicks the problem back a step. After all, what if I genuinely and wholeheartedly don’t consider my tactful omission about “clandestine, secret, unauthorized contact with a non-U.S. citizen or someone (U.S. citizen or non-U.S. citizen) who represents a foreign government, power, group or organization, which could result in a potential or real adverse impact on U.S. national security, or else could result in the unauthorized aid to a foreign government, power, group or organization” to be a lie? If the machine is testing my reactions, it would find nothing, provided I didn’t believe I had anything to lie about. 

Competent question design and interrogation technique are supposed to obviate this issue. A competent interrogator would be sure to explain the definition of contact, and foreign power, and so on, in such a way that would cause me to doubt any misconceptions, and hopefully, if I’m lying, trigger a stress reaction. The interrogator might insinuate that I’m withholding information in order to get me to open up, or try to frame the discussion in such a way that I would think opening up was my only option. But at that point, we’re not really talking about a lie detecting machine, so much as a machine that gives an interrogator data on when to press psychological attacks. The main function of the machine is to give the interrogator certainty and undermine my own confidence, so that the interrogator can pull off bluffing me into cracking.

So are polygraphs useful? Obviously, as a psychological tool in an inquisitional interrogation, they provide a powerful weapon. But are they still more useful than, say, a metal box with a colander attached? Probably, under some circumstances, in the hands of someone familiar with the underlying principles and moving parts of psychology, physiology, and the machine itself. After all, I don’t think there would be such a market if they were complete bunk. But then again, do I trust that they’re currently being used that way by the groups that employ them? Probably not.

Works Consulted

Burney, Nathan. “Convict Yourself.” The Illustrated Guide to Law, lawcomic.net/guide/?p=2494.

United States, Department of Defense Polygraph Institute. “Law Enforcement Pre-Employment Test.” antipolygraph.org/documents/dodpi-lepet.pdf.

The Paradox Model

Editing note: I started this draft several weeks ago. I’m not happy with it, but given the choice between publishing it and delaying again during finals, I went with the former.

In the past few years, I’ve fallen down the rabbit hole of Paradox grand strategy games. Specifically, I started with Cities: Skylines before making the jump to Hearts of Iron IV. Following that, I was successfully converted to Stellaris. I haven’t touched the Victoria, Europa Universalis, or Crusader Kings franchises, but sometimes I think it might be awesome to try out the fabled mega-campaign, an undertaking to lead a single country from the earliest start dates in Crusader Kings through to the conquest of the galaxy in Stellaris.

But I don’t want to talk about the actual games that Paradox makes. I want to talk about how they make them. Specifically, I want to talk about their funding model. Because Paradox makes really big games. More than the campaigns or the stories, it’s the enormous systems with countless moving parts which, when they gel together properly, create an intricate and finely tuned whole that seems like a self-consistent world. At their best, the systems Paradox builds feel like stepping into a real campaign, constrained not by the mechanics themselves, but by the limits of what you can dream, build, and execute within them. They don’t always hit their mark, and when they miss they can fall into incomprehensible layers of useless depth. But when they work, they’re a true experience, beyond mere game.

The problem, besides the toll this takes on any device short of a supercomputer, is that building something with that many moving parts is a technical feat. Getting them to keep working is a marvel. And keeping them updated, adding new bits and pieces to deal with exploits as players find them, making sure that every possible decision by the player is acknowledged and reflected in the story of the world they create, adding story to stop the complexity from breeding apathy, is impossible, at least in the frame of a conventional video game. A game so big will only ever have so many people playing it, so the constant patches required to keep it working can’t be sustained merely by sales. 

You could charge people for patches. But that’s kind of questionable if you’re charging people to repair something you sold them. Even if you could fend off legal challenges for forcing players to pay for potential security-related fixes, that kind of breaks the implicit pact between player and publisher. You could charge a monthly flat fee, but besides making it a lot harder to justify any upfront cost, that puts more pressure on the developers to keep pushing out new things to give players a reason to stick around, rather than taking time to work on bigger improvements. Additionally, I’m not convinced the same number of people would shell out a monthly fee for a grand strategy game. You could charge a heck of a lot more upfront. But good luck convincing anyone to fork over four hundred dollars for a game, let alone enough people often enough to keep a full-time development team employed.

What Paradox does instead is sell their games at a steep, but not unheard-of by industry standards, price, and then release a new DLC, or Downloadable Content package, every so often that expands the game for an additional price. The new DLC adds new approaches and mechanics to play around with, while Paradox releases a free update to everyone with bug fixes and some modest improvements. The effect is a steady stream of income for the developers, at a cost that most players can afford. Those that can’t can wait for a sale, or continue to play their existing version of the game without the fleshed-out features.

I can’t decide myself whether I’m a fan of this setup. On the one hand, I don’t like the feeling of having to continue to pay to get the full experience of a game I’ve already purchased, especially since in many cases, the particulars of the free updates mostly serve to make changes that enable the paid features. I don’t like having the full experience one day, and then updating my game to find it now incomplete. And the messaging from Paradox on this point is mixed. It seems like Paradox wants to have a membership system but doesn’t want to admit it, and this rubs me the wrong way.

On the other hand, with the amount of work they put into these systems, they do need to make their money back. And while the current system may not be good, it is perhaps the best it can be given the inevitability of market forces. Giving players the option to keep playing the game they have, without paying through a mandatory membership for new features they may not want, is a good thing. I can accept and even approve of game expansions, even those which alter core mechanics. It helps that I can afford to keep pace with the constant rollout of new items to purchase.

So is Paradox’s model really a series of expansions, or a membership system in disguise? If it’s a membership system, then they really need to do something about all the old DLCs creating a cost barrier for new players. If my friend gets the base game in a bundle, for instance, it’s ridiculous that, for us to play multiplayer, he either has to shell out close to the original price again for DLCs, or I have to disable all the mechanics I’ve grown used to. If Paradox wants to continue charging for fixing bugs and balancing mechanics, they need to integrate old DLCs into base games, or at the very least, give a substantial discount to let new players catch up for multiplayer without having to fork over hundreds of dollars upfront. 

On the other hand, if Paradox’s model is in fact an endless march of expansions, then, well, they need to make their expansions better. If Paradox’s official line is that every DLC is completely optional to enjoying the game (ha!), then the DLCs themselves need to do more to justify their price tag. To pick on the latest Hearts of Iron IV DLC, Man the Guns: being able to customize my destroyers to turn the whole Atlantic into an impassable minefield, or turning capital ships into floating fortresses capable of smashing enemy ships while also providing air and artillery support for my amphibious tanks, or having Edward VIII show the peasants what happens when you try to tell the King whom he can marry, is all well and good, but I don’t know that it justifies paying $20.

Change Your Passwords

In accordance with our privacy policy, and relevant international legislation, this website is obligated to disclose any security breach as soon as we can do so without worsening the problem. At present, we have no reason to believe that any breach has occurred. But the fact that another account of mine was recently caught up in a security breach of an entirely different service raises the possibility, albeit remote, that a bad actor may have had the opportunity to access normally secure areas of this site. The probability of such an attack is, in my opinion, acceptably minute; honestly, I think it far more likely that there’s something else going on that I haven’t noticed at all than that this particular vulnerability was exploited.

So you don’t need to worry about locking down your accounts, at least not from this site, at the moment. Except, maybe you should. Because the truth is, by the time you’re being notified about a data breach, it’s often too late. To be clear, if a security expert tells you to change your passwords because of something that happened, you should do that. But you should also be doing that anyways. You don’t buy a fire extinguisher after your house is on fire, so don’t wait to change your passwords until after a breach.

Also, use better passwords. Computer security and counter-surveillance experts advise using passphrases rather than ordinary words, or even short random strings. Something like Myfavoritebookis#1984byGeorgeOrwell or mypasswordis110%secureagainsthackers. These kinds of passphrases are often just as easy to memorize as a short random string, and stand up better to standard dictionary attacks by adding entropy through length. Sure, if a shady government comes after you, it won’t hold up against the resources they have. But then again, if it’s governments or big-time professional hackers coming after you… well, nice knowing you.
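The length-beats-complexity point is easy to see with some back-of-the-envelope math. The pool sizes below are my own illustrative assumptions, and treating every character of a passphrase as uniformly random overstates its true entropy (words follow patterns), but the gap is wide enough that length still wins comfortably:

```python
import math

def entropy_bits(pool_size, length):
    """Bits of entropy for `length` symbols drawn uniformly from a pool."""
    return length * math.log2(pool_size)

# An 8-character random string over the ~95 printable ASCII characters
short_random = entropy_bits(95, 8)

# A 36-character passphrase, counted pessimistically as if each character
# came from a small ~40-symbol pool (lowercase, digits, a little punctuation)
passphrase = entropy_bits(40, 36)

print(f"8-char random string: {short_random:.1f} bits")   # ~52.6 bits
print(f"36-char passphrase:   {passphrase:.1f} bits")     # ~191.6 bits
```

Even under unflattering assumptions, the passphrase comes out far ahead, and it’s the one you can actually remember.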

In the 21st century, protecting against digital crime is arguably more urgent than protecting against in-person crime. Sure, a mugger might steal your wallet, but a hacker could drain your bank account, max out your credit and ruin your credit score forever, expose your private information to anyone who might have an axe to grind, use your identity to commit crimes, and frame you for illegal or indecent acts, essentially instantaneously, perhaps even automatically, without ever setting foot in the same jurisdiction as you. As technology becomes more omnipresent and integrated into our lives, this threat will only grow worse.

Fully Automated Luxury Disk Jockeys

Here’s an interesting observation with regard to automation: with the exception of purely atmospheric concerns, we have basically automated the disk jockey, that is, the DJ, out of a job. Pandora’s Music Genome Project, Google’s music bots, Apple’s Genius playlists, and whatever system Spotify uses are close enough for most everyday purposes.

Case in point: my university got it into its head that students were becoming overwhelmed with finals. This is obviously not a major revelation, but student stress has become something of a moral and political hot-button issue of late. The purported reason is the alarmingly high rate of suicide among college students, something which other universities have started to act on. Universities in Canada, for instance, have added more school breaks throughout the year during the weeks when suicide rates peak. Other schools have taken more pragmatic measures, like installing suicide nets under tall bridges.

Of course, the unspoken reason for this sudden focus on mental health is that the national political administration has made school shootings a matter of mental health, rather than, say, of making guns harder to get. My university, like my hometown, lies in the shadow of Newtown, and plenty of students here lost people firsthand. Most of us went to schools that went on lockdown that day. The university itself has had two false alarms, and police teams clad in armor and carrying machine guns already patrol the campus regularly. So naturally, the university is throwing money at mental health initiatives.

Rather than do something novel, like staggering exam schedules, routine audits to prevent teachers from creating too much work for students, or even abolishing final exams altogether as has been occasionally proposed, the powers that be settled on the “Stress Free Finals Week” Initiative, whereby the school adds more events to the exam week schedule. I’m not sure how adding more social events to a time when students are already pressed to cram is supposed to help, but it’s what they did. 

As a commuter and part-time student, most of this happens on the periphery for me. But when, through a series of events, I wound up on campus with nothing to do and no ride home, I decided I might as well drop by. After all, they were allegedly offering free snacks if I could find the location, and being a college student, even though I had just gorged myself on holiday cookies and donuts at the Social Sciences Department Holiday Party, I was already slightly peckish. They were also advertising music.

I got there to find the mid-afternoon equivalent of a continental breakfast: chips, popcorn, donuts, and cookies. Perhaps I had expected, what with all the research on the gut-brain connection, that an event purporting to focus on mental health would have a better selection. But no matter. There was a place to sit away from the sleet outside, and free snacks. The advertised DJ was up front blasting music of questionable taste at a borderline objectionable volume, which is to say, normal for a college campus.

Except the DJ wasn’t actually playing the music. He didn’t interact with any of the equipment on the table during the time I watched, and on several occasions he surrendered any pretense of working by leaving the table to raid the snacks, then sitting down at another table to eat while the music handled itself. No one else seemed to find this strange, but it struck me that for all I knew, he might have just set up a YouTube Music playlist, let the thing run, and earned money for it. Heck, he wouldn’t even have to select the playlist manually: bots can do that part too. 

There are two takeaway observations here. The first is that computer automation is happening here and now. It’s worth noting that this isn’t just manual labor being made obsolete by mechanized muscle. While it might not exactly be white collar, a modern DJ is a thinking job. Sure, it doesn’t take a genius to hit shuffle on iTunes, but actually selecting songs that match the mood and atmosphere of a given event, following up with an appropriate next song, and matching the differing volumes of recordings to the desired speaker volume takes at least some level of heuristic thinking. The way we already label songs by genre has made this fairly low-hanging fruit for bots, but the fact that we’re already here should be sobering for anyone worried about automation-driven mass unemployment. 
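The pieces of that heuristic are simple enough to sketch. Here is a toy illustration, assuming a made-up track library with invented genre, energy, and loudness fields (none of this reflects any real service’s API): filter tracks by genre, prefer those whose energy matches the mood of the event, then compute a gain correction so each recording plays at roughly the same perceived volume.

```python
from dataclasses import dataclass
import random

@dataclass
class Track:
    title: str
    genre: str
    energy: float       # 0.0 (calm) to 1.0 (intense) -- an invented mood label
    loudness_db: float  # average loudness of the recording, in dB

def build_playlist(library, target_genre, target_energy, length, seed=0):
    """Pick tracks in the target genre whose energy is closest to the mood."""
    candidates = [t for t in library if t.genre == target_genre]
    # Sort by how far each track's energy is from the desired mood.
    candidates.sort(key=lambda t: abs(t.energy - target_energy))
    picks = candidates[:length]
    random.Random(seed).shuffle(picks)  # avoid a monotonic energy ramp
    return picks

def gain_adjustment(track, reference_db=-14.0):
    """dB to boost or cut so every track plays at roughly the same volume."""
    return reference_db - track.loudness_db
```

A quietly mastered recording (say, -16 dB average loudness) would get a +2 dB boost against a -14 dB reference, while a loud one gets cut. Real systems use far richer features than a single energy number, but the point stands: each step is mechanical once the songs are labeled.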

The second takeaway is a sort of caveat to the first, namely: even if this guy’s job was automated, he still got paid. An argument can be made that this is a function of bureaucratic inefficiency and an enterprising fellow playing the system in order to get paid to be lazy. And while that would be a fair observation, there’s another interpretation that I find more interesting. The way I see it, it’s not as though the university misunderstood what they were buying. They advertised having a DJ. They could have found any idiot to hook up an iPhone speaker and press shuffle, but instead they hired someone. 

There was a cartoon a while back which supposed that in the future, rather than automation causing a total revolution in man’s relationship with work, we would simply start to put more value into more obscure and esoteric commodities. The example provided was a computer running on “artisanal bits”: a Turing-complete setup of humans holding up signs for ones and zeroes. The implication is that increasing wealth inequality will drive more artificial distinctions in patterns of consumption. Rich people become pickier the richer they get, and since they’re rich, they can drive demand for niche products that provide work for the masses.

This hypothesis would fit current observations. Not only would it explain why institutions are still willing to hire human DJs in the age of music bots, but it would also explain why trends like organic foods, fair-trade textiles, and so on seem to be gaining economic ground. It’s an interesting counterargument to the notion that we’re shaping up for mass unemployment.

I still think this is a horribly optimistic outlook. After all, if the owning minority can’t be bothered to pay living wages while making their wealth in the first place, why would they feel a need to employ a substantial number of people after the fact? There’s also a fundamental limit on how much a single person can consume*, and the number of people who can be gainfully employed in service of a single person’s whims has a functional limit, after which employment stops being accurately described as work and becomes something more like private welfare. Which makes this not so much a repudiation of the problem of automation-induced mass unemployment as another possible solution to it. Still, it’s a thing to keep in mind, and for me, a good reminder to pay attention to what’s actually happening around me as well as what models and experts say should happen.

*Technically, this isn’t true. A person with infinite money could easily spend infinite money by purchasing items for which the prices are artificially inflated through imposed scarcity, speculative bubbles, and other economic buzzwords. But these reflect peculiarities in the market, and not the number of people involved in their production, or the quality of their work. 

Close Paren

Classes continue apace. I have had some trouble parsing where my classes fall on the difficulty spectrum. On the one hand, the readings are, if not necessarily challenging in themselves, then at least reflective of an intellectual stature that seems to foreshadow challenge in class. On the other hand, the classes themselves are unnervingly easy; or at least, the level of engagement by other students makes it distressingly easy to appear capable by comparison.

This unnerved feeling isn’t helped by my schedule. The downside of having a very light course load, which really only requires two afternoons a week of me plus however long the homework takes, is that my brain doesn’t seem to cycle between periods of productivity and downtime. I haven’t slipped into a daily cadence that lets me intuitively know what day of the week it is and keep a running sense of the next several days’ events.

I say this is a downside; in truth I don’t know. It isn’t how I expected to handle things, but so far I have continued to handle them, which I suppose is sufficient for now. It may be that my old sense of the week was solely a product of my high school schedule, and that in time I shall develop a new perspective tailored to the present situation. If so, I expect it will take a while.

One sign that this is happening is that I have begun to pick up old projects again. In particular, I have taken to toying around with the modding tools for my copy of Hearts of Iron IV, with the end goal of adding the factions from some of my writings. Although I have used some tutorials in this process, it has mostly been a matter of reverse engineering the work of others and experimenting through trial and error. I am totally out of my depth, in the sense that this is a matter of modifying computer code files more than writing alternate history, but I consider myself good at throwing myself into learning new things, and have made great strides in my modding efforts despite setbacks.

I am still tickled by the image of myself staring at computer code in an editor, making tweaks and squashing bugs. It strikes me because I am not a very technically savvy person. I can follow instructions, and with a vague understanding of what I want to do and examples of how it can be done, I can usually cobble together something that works. That is, after all, how I built this site, and how I have managed to get alternate-history countries onto the map of my game; though the cryptic error messages and apparent bugs tell me I’ve still got a way to go. But even so, I’ve never considered myself a computer person.

What’s funny is that I fit into the stereotype. I am a pale, skinny, young man, I wear glasses, t-shirts, and trousers with many pockets, and I have trouble with stereotypical jocks. When I volunteer for my favorite charity, which provides free open source software for medical data, people assume I am one of the authors of the code. I have had to go to great lengths to convince people that I don’t write the code, but merely benefit from it, and even greater lengths to convince the same people that when I say the process by which the code is made operational is easy, I am not presupposing any kind of technical knowledge.

In any case, the last week has been not so much uneventful as focused on small headlines. There are other projects in the pipeline, but nothing with a definitive timeframe. Actually, that’s an outright lie. There are several things with definitive timeframes. But those things are a secret, to be revealed in due course, at the appropriate juncture.