The Lego Census

So the other day I was wondering about the demographics of Lego minifigures. I’m sure we’re all at least vaguely aware that Lego minifigs tend to be, by default, adult, male, and yellow-skinned. This wasn’t terribly worthy of serious thought back when only a handful of different minifigure designs existed. Yet nowadays Lego has thousands, if not millions, of different minifigure permutations. Moreover, the total number of minifigures in circulation is set to eclipse the number of living humans within a few years.

Obviously, even with a shift towards trying to be more representative, the demographics of Lego minifigures are not an accurate reflection of the demographics of humankind. But just how out of alignment are they? Or, to ask it another way, could the population of a standard Lego city exist in real life without causing an immediate demographic crisis?

This question has bugged me enough that I decided to conduct an informal study based on a portion of my Lego collection that I reckon is large enough to be vaguely representative of a population. I have chosen to conduct my counts in the central district of the Lego city that exists in our family basement, on the grounds that it includes a sizable population drawn from a variety of different sets.

With that background in mind, I have counted roughly 154 minifigures. The area of survey is the city central district, which for our purposes means the largest tables with the greatest number of buildings and skyscrapers, and so presumably the highest population density.

Because Lego minifigures don’t have numerical ages attached to them, I counted ages by dividing minifigures into four categories: Children, Young Adults, Middle Aged, and Elderly. Obviously these categories are qualitative and subject to some interpretation. Children are fairly obvious thanks to their smaller minifigure pieces. An example of the adult categories follows.

The figure on the left would be a young adult. The one in the middle would be classified as middle aged, and the one on the right, elderly.

Breakdown by age

Children (14)
Lego children are the most distinct category because, in addition to childish facial features and clothes, they are given shorter leg pieces. This is the youngest category, as Lego doesn’t include infant minifigures in its sets. I would guess that this category covers roughly ages 5-12.

Young Adults (75)
Young adults encompass a fairly wide range, from puberty to early middle age. This group is the largest, partially because it includes the large contingent of conscripts serving in the city. The age range would be roughly 12-32.

Middle Aged (52)
Includes visibly older adults who do not meet the criteria for elderly. This group encompasses most of the city’s administration and professionals.

Elderly (13)
The elderly are those who stand out for being old, with features such as beards, wrinkled skin, or off-color hair.

Breakdown by industry

Second is occupations. Again, since minifigures can’t exactly report their own occupations, and since most jobs happen indoors where I can’t see, I was forced to make some guesses based on outfits and group them into loose categories.

27 Military
15 Government administration
11 Entertainment
9 Law enforcement
9 Transport / Shipping
9 Aerospace industries
8 Heavy industry
6 Retail / services
5 Healthcare
5 Light Industry

An unemployment rate would be hard to gauge, because most of the time the unemployment rate is adjusted to omit those who aren’t actively seeking work, such as students, retired persons, disabled persons, homemakers, and the like. Unfortunately for our purposes, a minifigure who is transitionally unemployed looks pretty much identical to one who has decided to take an early retirement.

What we can take a stab at is a workforce participation rate: a measure of what percentage of the people eligible to work are actually doing so. For our purposes, this means tallying the total number of people assigned jobs (104) and dividing by the total number of people capable of working, which we will assume means everyone except children (140). This gives us a ballpark of about 74%, decreasing to 68% if we exclude the military from both counts to look only at the civilian economy. Either of these numbers would be somewhat high, but not unexplainably so.
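For the curious, the arithmetic behind those percentages can be reproduced directly from the tallies above (a quick sketch; the counts are the ones from this survey):

```python
# Tallies from the survey above.
population = 154
children = 14
jobs = {
    "Military": 27, "Government administration": 15, "Entertainment": 11,
    "Law enforcement": 9, "Transport / Shipping": 9, "Aerospace industries": 9,
    "Heavy industry": 8, "Retail / services": 6, "Healthcare": 5,
    "Light Industry": 5,
}

working_age = population - children      # everyone except children: 140
total_jobs = sum(jobs.values())          # 104 figures with identifiable jobs

# Overall participation: employed / working-age.
participation = total_jobs / working_age

# Civilian participation: drop the military from both sides.
civilian = (total_jobs - jobs["Military"]) / (working_age - jobs["Military"])

print(f"Overall participation: {participation:.0%}")    # ~74%
print(f"Civilian participation: {civilian:.0%}")        # ~68%
```

Note that the civilian figure removes the 27 conscripts from both the numerator and the denominator, which is why it lands at 68% rather than lower.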

Breakdown by sex

With no distinction between the physical forms of Lego bodies, the difference between sexes in minifigures is based purely on cosmetic details such as hair type, the presence of eyelashes, makeup, or lipstick on a face, and dresses. This is obviously based on stereotypes, and makes it tricky to tease apart edge cases. Is the figure with poorly-detailed facial features male or female? What about that faceless conscript marching in formation with their helmet and combat armor? Does dwelling on this topic at length make me some kind of weirdo?

The fact that Lego seems to embellish characters that are female with stereotypical traits suggests that the default is male. Operating on this assumption gives you somewhere between 50 and 70 minifigures with at least one distinguishing female trait depending on how particular you get with freckles and other minute facial details.

That’s a male to female ratio somewhere between 2.08:1 and 1.2:1. The latter would be barely within the realm of ordinary populations, and even then would be highly suggestive of some kind of artificial pressure such as sex selective abortion, infanticide, widespread gender violence, a lower standard of medical care for girls, or some kind of widespread exposure, whether to pathogens or pollutants, that causes a far higher childhood fatality rate for girls than would be expected. And here you were thinking that a post about Lego minifigures was going to be a light and gentle read.
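Those ratio bounds fall straight out of the survey numbers (154 figures total, with the default-male assumption applied to everyone lacking a distinguishing trait):

```python
# 154 figures total; 50-70 show at least one stereotypically
# female trait; everyone else is assumed male by default.
population = 154

ratios = {}
for females in (50, 70):
    males = population - females
    ratios[females] = males / females
    print(f"{females} female figures -> {males / females:.2f}:1 male to female")
```

The pessimistic count (50) gives 104 males to 50 females, or 2.08:1; the generous count (70) gives 84 to 70, or 1.2:1.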

The former ratio is completely unnatural, though not entirely unheard of in real life under certain contrived circumstances: certain South Asian and Middle Eastern countries have at times had male to female ratios as high as two owing to the presence of large numbers of guest workers. In such societies, female breadwinners, let alone women traveling alone to foreign countries to send money home, are all but unheard of.

Such an explanation might be conceivable given a look at the lore of the city. The city is indeed a major trade port and center of commerce, with a non-negligible transient population, and also hosts a sizable military presence. By a similar token, I could simply say that there are more people that I’m not counting hiding inside all those skyscrapers that make everything come out even. Except this kind of narrative explanation dodges the question.

The straight answer is that, no, Lego cities are not particularly accurate reflections of our real-life cities. This lack of absolute realism does not make Lego bad toys. Nor does it detract from their value as an artistic and storytelling medium; nor from their benefits for play therapy for patients affected by neuro-cognitive symptoms, my original reason for starting my Lego collection.

 

The Fly Painting Debate

Often in my travels, I am introduced to interesting people, who ask interesting questions. One such person recently was a lady who was, I am told, raised on a commune as a flower child, and who now develops educational materials for schools. Her main work consists of trying to convey philosophical and moral questions to young children in ways that allow them to have meaningful discussions.

One such question, which she related to me, focused on a man she knew tangentially who made pieces of microscopic art. Apparently this man makes paintings roughly the width of a human hair, using tools like insect appendages as paintbrushes. These microscopic paintings are sold to rich collectors to the tune of hundreds of thousands of dollars. Because of their size, they are not viewable without special equipment, and broadly speaking, cannot be put on display.

There is obviously a lot to unpack here. The first question is: is what this man does art, especially if it cannot be enjoyed? My feeling is yes, for two reasons. First, there is artistic expression taking place on the part of the artist, and more importantly, the artwork itself does have an impact on its consumers, even if that impact comes more from knowledge of the piece’s existence than from any direct observation. Second, the pieces are, by their very existence, intellectually stimulating and challenging, in a way that can provoke further questions and discussion.

Certainly they challenge the limits of size as a constraint of artistic medium. And these kinds of challenges, while often motivated by pride and hubris, do often push the boundaries of human progress as a whole by generating interest and demand for scientific advancement. This criterion of challenging the status quo is what separates my bathroom toilet from Marcel Duchamp’s “Fountain”. Admittedly, these are fairly subjective criteria, but going any further inevitably turns into a more general debate on what constitutes art; a question which is almost definitionally paradoxical to answer.

The second, and to me far more interesting, question is: are this man’s job, and the amount he makes, justifiable? Although few would argue that he is not within his rights to express himself as he pleases, what of the resulting price tag? Is it moral to spend hundreds of thousands of dollars on such items, which are objectively luxuries that provide no tangible public good? How should we regard the booming business of this man’s trade: as a quirky niche market enabled by a highly specialized economy and generous patrons willing to indulge ambitious projects, or as wasteful decadence that steals scarce resources to feed the hubris of a disconnected elite?

This points at a question that I keep coming back to in my philosophical analyses, specifically in my efforts to help other people. Is it better to focus resources on smaller incremental projects that affect a wider number of people, or larger, more targeted projects that have a disproportionate impact on a small group?

To illustrate, suppose you have five thousand dollars, and want to do the moral utilitarian thing, and use it to improve overall happiness. There are literally countless ways to do this, but let’s suppose that you want to focus on your community specifically. Let’s also suppose that your community, like my community, is located in a developed country with a generally good standard of living. Life may not always be glamorous for everyone, but everyone has a roof over their head and food on the table, if nothing else.

You have two main options for spending your five thousand dollars.

Option 1: You could choose to give five hundred people each ten dollars. All of these people will enjoy their money as a pleasant gift, though it probably isn’t going to turn anyone’s life around.

Option 2: You could choose to give a single person five thousand dollars all at once.

I’m genuinely torn on this question. The first option is the ostensibly fairer answer, but the actual quality-of-life increase for each recipient is marginal. More people benefit, but they probably don’t take away the same stories and memories as the one person would from the large payout. The increase in happiness is roughly equivalent either way, making the two options a wash from a utilitarian perspective.

This is amplified by two quirks of human psychology. The first is a propensity to remember large events over small events, which makes some sense as a strategy, but has a tendency to distort trends. This is especially true of good things, which tend to be minimized, while bad things tend to be more easily remembered. This is why, for example, Americans readily believe that crime is getting worse, even though statistically, the exact opposite is true.

The second amplifier is the human tendency to judge things in relative terms. Ten dollars, while certainly not nothing, does not make a huge difference relative to an annual salary of $55,000, while $5,000 is a decent chunk of change. Moreover, people judge themselves relative to each other, meaning that some perceived happiness may well be lost in giving the same amount of money to more people.
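To make the relative-terms intuition concrete, here is a toy calculation assuming logarithmic utility of money; a common modeling assumption, not a settled fact, anchored to the $55,000 salary mentioned above:

```python
import math

# Toy model: logarithmic utility of money, anchored to the $55,000
# annual salary from the text. Purely illustrative.
salary = 55_000

def utility_gain(windfall, base=salary):
    # With log utility, a gift's impact depends on its size relative
    # to what the recipient already has.
    return math.log(base + windfall) - math.log(base)

option_1 = 500 * utility_gain(10)     # 500 people, $10 each
option_2 = 1 * utility_gain(5_000)    # 1 person, $5,000

print(f"Option 1 total utility gain: {option_1:.4f}")
print(f"Option 2 total utility gain: {option_2:.4f}")
```

Under this assumption the two options land within about five percent of each other, which squares with the “wash” intuition; and the slight edge the small gifts get from concavity would be easy to erode by the memory and relative-comparison effects described above.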

This question comes up in charity all the time. Just think about the Make-A-Wish Foundation. For the same amount of money, their resources could easily reach far more people through research and broader quality-of-life improvements. Yet they choose to focus on fulfilling individual wishes. Arguably they achieve greater happiness because they focus their resources on a handful of life-changing projects rather than a broader course of universal improvement.

Now, to be clear, this does not negate the impact of inequality, particularly at the levels faced in the modern world. Indeed, such problems only really appear in stable, developed societies where the value of small gifts is marginal. In reality, while ten dollars may not mean a great deal to me or my neighbor, it could mean the difference between riches and poverty in a village facing extreme poverty in a developing nation. Also, in reality, we are seldom faced with carefully balanced binary options between two extremes.

The question of the microscopic artist falls into a grey area between the two extremes. As works of art, such pieces invariably contribute, even if only incrementally, to the greater corpus of human work, and their creation and existence contribute in meaningful and measurable ways to overall human progress.

There is, of course, the subjective, and probably unanswerable, question of to what degree the wealthy collectors who buy these pieces derive their enjoyment from the artistic piece itself, or from the commodity; that is, whether they own it for art’s sake, or for the sake of owning it. This question is relevant, as it has some bearing on the overall utilitarian happiness derived from the work, compared to the happiness derived from the same sum of resources spent otherwise. Of course, this is unknowable and unprovable.

What, then, can be made of this question? The answer is probably not much, unless one favors punitively interventionist economic policy, or totalitarian restrictions on artistic expression. For my part, I am as unable to conclusively answer this question as I can answer the question of how best to focus charitable efforts. Yet I do think it is worthwhile to always bear in mind the trade offs which are being made.

Incremental Progress Part 4 – Towards the Shining Future

I have spent the last three parts of this series bemoaning various aspects of the cycle of medical progress for patients enduring chronic health issues. At this point, I feel it is only fair that I highlight some of the brighter spots.

I have long come to accept that human progress is, with the exception of the occasional major breakthrough, incremental in nature; a reorganization here paves the way for a streamlining there, which unlocks the capacity for a minor tweak here and there, and so on and so forth. However, while this does help adjust one’s day to day expectations from what is shown in popular media to something more realistic, it also risks minimizing the progress that is made over time.

To refer back to an example used in part 2 that everyone should be familiar with, let’s refer to the progress being made on cancer. Here is a chart detailing the rate of FDA approvals for new treatments, which is a decent, if oversimplified, metric for understanding how a given patient’s options have increased, and hence, how specific and targeted their treatment will be (which has the capacity to minimize disruption to quality of life), and the overall average 5-year survival rate over a ten year period.

Does this progress mean that cancer is cured? No, not even close. Is it close to being cured? Not particularly.

It’s important to note that even as these numbers tick up, we’re not intrinsically closer to a “cure”. Coronaviruses, which cause the common cold, have a mortality rate pretty darn close to zero, at least in the developed world, and that number gets even closer to zero if we ignore “novel” coronaviruses like SARS and MERS, and focus only on the rare person who has died as a direct result of the common cold. Yet I don’t think anyone would call the common cold cured. Coronaviruses, like cancer, aren’t cured, and there’s a reasonable suspicion on the part of many that they aren’t really curable in the sense that we’d like.

“Wait,” I hear you thinking, “I thought you were going to talk about bright spots.” Well, yes: while it’s true that progress on a full cure is inconclusive at best, material progress is still being made every day, for both colds and cancer. While neither is at present curable, they are increasingly treatable, and this is where the real progress is happening. Better treatments, not cures, are where all the media buzz comes from, and why I can attend a conference about my disease year after year, hear all the horror stories of my comrades, and still walk away feeling optimistic about the future.

So, what am I optimistic about this time around, even when I know that progress is so slow coming? Well, for starters, there’s life expectancy. I’ve mentioned a few different times here that my projected lifespan is significantly shorter than the statistical average for someone of my lifestyle, medical issues excluded. While this is still true, this is becoming less true. The technology which is used for my life support is finally reaching a level of precision, in both measurement and dosing, where it can be said to genuinely mimic natural bodily functions instead of merely being an indefinite stopgap.

To take a specific example, new infusion mechanisms now allow dosing precision down to a ten-thousandth of a milliliter. For reference, the average raindrop is between 0.5 and 4 millimeters in diameter, a few hundredths of a milliliter at most. Given that a single thousandth of a milliliter in either direction at the wrong time can be the difference between being a productive member of society and being dead, this is a welcome improvement.

Such improvements in delivery mechanisms have also enabled innovation on the drugs themselves, by making more targeted treatments with a smaller window for error viable for a wider audience, which in turn makes them more commercially viable. Better drugs and dosing have likewise raised the bar for infusion cannulas, and at the conference, a new round of cannulas was already being hyped as the next big breakthrough to hit the market imminently.

In the last part I mentioned, though did not elaborate at length on, the appearance of AI-controlled artificial organs being built using DIY processes. These systems now exist, not only in laboratories, but in homes, offices, and schools, quietly taking in more data than the human mind can process, and making decisions with a level of precision and speed that humans cannot dream of achieving. We are equipping humans as cyborgs with fully autonomous robotic parts to take over functions they have lost to disease. If this does not excite you as a sure sign of the brave new future that awaits all of us, then frankly I am not sure what I can say to impress you.

Like other improvements explored here, this development isn’t so much a breakthrough as it is a culmination. After all, all of the included hardware in these systems has existed for decades. The computer algorithms are not particularly different from the calculations made daily by humans, except that they contain slightly more data and slightly fewer heuristic guesses, and can execute commands faster and more precisely than humans. The algorithms are simple enough that they can be run on a cell phone, and have an effectiveness on par with any other system in existence.

These DIY initiatives have already caused shockwaves throughout the medical device industry, for both the companies themselves, and the regulators that were previously taking their sweet time in approving new technologies, acting as a catalyst for a renewed push for commercial innovation. But deeper than this, a far greater change is also taking root: a revolution not so much in technology or application, but in thought.

If my memory and math are on point, this is the eighth year since I started attending this particular conference, out of ten years dealing with the disease that is its topic, among other diagnoses. While neither of these stretches is long enough to have proper capital-H historical context, in the span of a single lifetime, especially for a relatively young person such as myself, I do believe that ten or even eight years is long enough to reflect upon in earnest.

Since I started attending this conference, but especially within the past three years, I have witnessed, and been the subject of, a shift in tone and demeanor. When I first arrived, the tone at this conference seemed to be, as one might expect, one primarily of commiseration. Yes, there was solidarity, and all the positive emotion that comes from being with people like oneself, but this was, at best, a bittersweet feeling. People were glad to have met each other, but nevertheless resentful of the unenviable circumstances that dictated their meeting.

More recently, however, I have seen and felt more and more an optimism accompanying these meetings. Perhaps it is the consistently record-breaking attendance that demonstrates, if nothing else, that we stand united against the common threat to our lives, and against the political and corporate forces that would seek to hold back our progress towards being normal, fully functioning humans. Perhaps it is merely the promise of free trade show goodies and meals catered to a medically restricted diet. But I think it is something different.

While a full cure, of the sort that would allow me and my comrades to leave the life support at home, serve in the military, and the like, is still far off, today more than ever before, the future looks, if not bright, then at least survivable.

In other areas of research, one of the main genetic research efforts, which has maintained a presence at the conference, is now closing in on the genetic and environmental triggers that cause the elusive autoimmune reaction known to cause the disease, and on various methods to prevent and reverse it. Serious talk of future gene therapies, the kind of science fiction that has traditionally been the stuff of comic books and film, is already ongoing. It is a strange and exciting thing to finish an episode of a science-fiction drama television series focused on near-future medical technology (and how evil minds exploit it) in my hotel room, only to walk into the conference room to see posters advertising clinical trial sign-ups and planned product releases.

It is difficult to be so optimistic in the face of incurable illness. It is even more difficult to remain optimistic after many years of only incremental progress. But pessimism too has its price. It is not the same emotional toll as the disappointment which naive expectations of an imminent cure are apt to bring; rather it is an opportunity cost. It is the cost of missing out on adventures, on missing major life milestones, on being conservative rather than opportunistic.

Much of this pessimism, especially in the past, has been inspired and cultivated by doctors themselves. In a way, this makes sense. No doctor in their right mind is going to say “Yes, you should definitely take your savings and go on that cliff diving excursion in New Zealand.” Medicine is, by its very nature, conservative and risk averse. Much like the scientist, a doctor will avoid saying anything until after it has been tested and proven beyond a shadow of a doubt. As noted previously, this is extremely effective in achieving specific, consistent, and above all, safe, treatment results. But what about when the situation being treated is so all-encompassing in a patient’s life so as to render specificity and consistency impossible?

Historically, the answer has been to impose restrictions on patients’ lifestyles. If laboratory conditions don’t align with real life for patients, then we’ll simply change the patients. This approach can work, at least for a while. But patients are people, and people are messy. Moreover, when patients include children and adolescents, who, for better or worse, are generally inclined to pursue short term comfort over vague notions of future health, patients will rebel. Thus, eventually, trading ten years at the end of one’s life for the ability to live the remainder more comfortably seems like a more balanced proposition.

Such a tradeoff is inevitably controversial. I personally take no particular position on it, other than that it is a true tragedy of the highest proportion that anyone should be forced into such a situation. With that firmly stated, many of the recent breakthroughs, particularly in new delivery mechanisms and patient comfort, and especially in the rapidly growing DIY movement, have focused on this tradeoff. The thinking has shifted from a “top-down” approach of finding a full cure to a more grassroots approach of making life more livable now, and making inroads into future scientific progress at a later date. It is no surprise that many of the groups dominating this new push have either been grassroots nonprofits or, where they have been commercial, primarily Silicon Valley-style, engineer-founded startups.

This in itself is already a fairly appreciable and innovative thesis on modern progress, yet one I think has been tossed around enough to be reasonably defensible. But I will go a step further. I submit that much of the optimism and positivity; the empowerment and liberation which has been the consistent takeaway of myself and other authors from this and similar conferences, and which I believe has become more intensely palpable in recent years than when I began attending, has been the result of this same shift in thinking.

Instead of competing against each other and shaming each other over inevitable bad blood test results, as was my primary complaint during conferences past, the new spirit is one of camaraderie and solidarity. It is now increasingly understood at such gatherings, and among medical professionals in general, that fear and shame tactics are not effective in the long run, and do nothing to mitigate the damage of patients deciding that survival at the cost of living simply isn’t worth it [1]. Thus the focus has shifted from commiseration over common setbacks, to collaboration and celebration over common victories.

Thus it will be seen that the feeling of progress, and hence, of hope for the future, seems to lie not so much in renewed pushes, but in more targeted treatments, and better quality of life. Long term patients such as myself have largely given up hope in the vague, messianic cure, to be discovered all at once at some undetermined future date. Instead, our hope for a better future; indeed, for a future at all; exists in the incremental, but critically, consistent, improvement upon the technologies which we are already using, and which have already been proven. Our hope lies in understanding that bad days and failures will inevitably come, and in supporting, not shaming, each other when they do.

While this may not qualify for being strictly optimistic, as it does entail a certain degree of pragmatic fatalism in accepting the realities of disabled life, it is the closest I have yet come to optimism. It is a determination that even if things will not be good, they will at least be better. This mindset, unlike rooting for a cure, does not require constant fanatical dedication to fundraising, nor does it breed innovation fatigue from watching the scientific media like a hawk, because it prioritizes the imminent, material, incremental progress of today over the faraway promises of tomorrow.


[1] Footnote: I credit the proximal cause of this cognitive shift in the conference to the progressive aging of the attendee population, and more broadly, to the aging and expanding afflicted population. As more people find themselves in the situation of a “tradeoff” as described above, the focus of care inevitably shifts from disciplinarian deterrence and prevention to one of harm reduction. This is especially true of those coming into the 13-25 demographic, who seem most likely to undertake such acts of “rebellion”. This is, perhaps unsurprisingly, one of the fastest growing demographics for attendance at this particular conference over the last several years, as patients who began attending in childhood come of age.

Something Old, Something New

It seems that I am now well and truly an adult. How do I know? Because I am facing a quintessentially adult problem: people I know; people I view as friends and peers of my own age, rather than my parents’; are getting married.

Credit to Chloe Effron of Mental Floss

It started innocently enough. I first became aware during my yearly social media purge, in which I sort through unanswered notifications, update my profile details, and suppress old posts which are no longer in line with the image I seek to present. While briefly slipping into the rabbit hole that is the modern news feed, I learned that one of my acquaintances and classmates from high school was now engaged to be wed. This struck me as somewhat odd, but certainly not worth making a fuss about.

Some months later, it emerged, after a late night crisis call between my father and uncle, that my cousin had been given a ring by his grandmother in order to propose to his girlfriend. My understanding of the matter, which admittedly is third- or fourth-hand and full of gaps, is that this ring-giving was motivated not by my cousin himself, but by the grandmother’s views on unmarried cohabitation (which existed between my cousin and said girlfriend at the time), as a means to legitimize the present arrangement.

My father, being the person he was, decided, rather than tell me about this development, to make a bet on whether or not my cousin would eventually, at some unknown point in the future, become engaged to his girlfriend. Given what I knew about my cousin’s previous romantic experience (more in depth than breadth), and the statistics from the Census and Bureau of Labor Statistics (see infographic above), I gave my conclusion that I did not expect my cousin to become engaged within the next five years, give or take six months [1]. I was proven wrong within the week.

I brushed this off as another fluke. After all, my cousin, for all his merits, is rather suggestible and averse to interpersonal conflict. Furthermore, he comes from a more rural background with a stronger emphasis on community values than my godless city-slicker upbringing. And whereas I would be content to tell my grandmother that I was perfectly happy to live in delicious sin with my perfectly marvelous girl in my perfectly beautiful room [2], my cousin might be more concerned with traditional notions of propriety.

Today, though, came the final confirmation: wedding pictures from a friend I knew from summer camp. The writing is on the wall. Childhood playtime is over, and we’re off to the races. In comes the age of attending wedding ceremonies and watching others live out their happily-ever-afters (or, as is increasingly common, fail spectacularly in a nuclear fireball of bitter recriminations). Naturally, next on the agenda is figuring out which “most likely to succeed” predictions were accurate with regard to careers, followed shortly by baby photos, school pictures, and so on.

At this point, I may as well hunker down for the day that my hearing and vision start failing. It would do me well, it seems, to hurry up and preorder my cane and get on the waiting list for my preferred retirement home. It’s not as though I didn’t see this coming from a decade away. Though I was, until now, quite sure that by the time marriage became a going concern in my social circle, I would be finished with high school.

What confuses me more than anything else is that these most recent developments seem to be in defiance of the statistical trends of the last several decades. Since the end of the postwar population boom, the overall marriage rate has been in steady decline, as has the percentage of households composed primarily of a married couple. At the same time, both the number and the percentage of nonfamily households (defined as “those not consisting of persons related by blood, marriage, adoption, or other legal arrangements”) have skyrocketed, and the growth of households has become uncoupled from the number of married couples, two figures which were historically strongly correlated [3].

Which is to say that the prevalence of godless cohabitation out of wedlock is increasing. The median age of first marriage has risen as well, from as low as eighteen at the height of the postwar boom to somewhere around thirty for men in my part of the world today. This raises an interesting question: for how long is this trend sustainable? That is, suppose the current trend of increasingly later marriages continues for the majority of people. At some point, presumably, couples will opt to simply forgo marriage altogether, and indeed, in many cases, already are doing so in historic numbers [3]. At what point, then, does the median age snap back to the lower age practiced by those people who, now a minority, are still getting married early?
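The “snap back” mechanism can be illustrated with a toy calculation (the ages below are entirely hypothetical, not drawn from any of the sources cited here): if the late-marrying majority increasingly opts out of marriage entirely, the median age among those who still marry drifts back toward the early-marrying minority.

```python
import statistics

# Hypothetical ages at first marriage: an early-marrying minority
# plus a larger late-marrying group (illustrative numbers only).
early = [22, 23, 24, 25, 26]
late = [29, 30, 31, 32, 33, 34, 35]

# Today: everyone in both groups eventually marries.
print(statistics.median(early + late))   # 29.5

# If most of the late group forgoes marriage altogether, the median
# among those who still marry snaps back toward the early group.
still_marrying = early + late[:2]
print(statistics.median(still_marrying))  # 25
```

The point is simply that the median tracks whoever is still in the pool; it says nothing about when, or whether, the opt-out ever becomes large enough for real-world data to show this.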

Looking at the maps a little closer, a few interesting correlations emerge [NB]. First, states with larger populations seem to have both fewer marriages per capita and a higher median age of first marriage. Conversely, there is a weak but visible correlation between a lower median age of first marriage and a higher marriage rate per capita. There are a few conclusions that can be drawn from these two data sets, most of which match up with our existing cultural understanding of marriage in the modern United States.

First, marriage appears to have a geographic bias towards rural and less densely populated areas. This can be explained either by geography (perhaps large land area with fewer people makes individuals more interested in locking down relationships), or by a regional cultural trend (perhaps more rural communities are more god-fearing than us cityborne heathens, and thus feel more strongly about traditional “family values”).

Second, young marriage is on the decline nationwide, even in the above-mentioned rural areas. There are ample potential reasons for this. Historically, demographic changes due to immigration or war, along with the economic and political outlook, have been cited as major factors behind similar rises in the median age of first marriage.

Fascinatingly, one of the largest such rises seen during the early part of the 20th century was attributed to the influx of mostly male immigrants, which created more romantic competition for eligible bachelorettes and hence, it is said, caused many to defer the choice to marry [3]. It seems possible, perhaps even likely, that the rise of modern connectivity has brought about a similar deferral (think about how dating sites have made casual dating more accessible). Whether this effect works in tandem with, is caused by, or is a cause of shifting cultural values is difficult to say, but changing cultural norms are certainly also a factor.

Third, it seems that places where marriage is more common per capita have a lower median age of first marriage. Although a little counterintuitive, this makes some sense when examined in context. After all, the more important marriage is to a particular community, the higher it will likely sit on a given person’s priority list. The higher a priority marriage is, the more likely that person is to want to get married sooner rather than later. Expectations of marriage, it seems, are very much a self-fulfilling prophecy.

NB: Both of these correlations have two major outliers: Nevada and Hawaii, which have far more marriages per capita than any other state, yet fairly middle-of-the-road ages of first marriage. It took me an unconscionably long time to figure out why.

So, if marriage is becoming increasingly less mainstream, will we see the median age of first marriage eventually level off and decrease as this particular statistic becomes dominated by those who are already predisposed to marry young regardless of cultural norms?

Reasonable people can take different views here, but I’m going to say no. At least not in the near future, for a few reasons.

Even if marriage is no longer the dominant arrangement for families and cohabitation (which it still is at present), there is still an immense cultural importance placed on marriage. Think of the fairy tales children grow up learning, the ones that always end “happily ever after”. We still associate that kind of “ever after” with marriage. And while young people may not be looking for that now, as increased life expectancies make “til death do us part” seem increasingly far off and irrelevant to the immediate concerns of everyday life, living happily ever after is certainly still on the agenda. People will still get married for as long as wedding days continue to be a major celebration and social function, which remains the case even in completely secular settings today.

And of course, there is the elephant in the room: taxes and legal benefits. Like it or not, marriage is as much a secular institution as a religious one, and as a secular institution, marriage provides some fairly substantial incentives over simply cohabiting. The largest and most obvious of these is the ability to file taxes jointly as a single household. Other benefits, such as the ability to make medical decisions if one partner is incapacitated, to share property without a formal contract, and the like, are also major incentives to formalize arrangements if all else is equal. These benefits are the main reason why denying legal marriage rights to same-sex couples is a constitutional violation, and they are the reason why marriage is unlikely to go extinct.

All of this statistical analysis, while not exactly comforting, has certainly helped cushion the blow of the existential crisis that seeing my peers reach major milestones far ahead of me generally brings with it. Aside from providing a fascinating distraction, poring over old reports and analyses has proven what I already suspected: that my peers and I simply have different priorities, and this need not be a bad thing. Not having marriage prospects at present is by no means an indication that I am destined for male spinsterhood. And with regard to feeling old, the statistics are still on my side. At least for the time being.

Works Consulted

Effron, Chloe, and Caitlin Schneider. “At What Ages Do People First Get Married in Each State?” Mental Floss. N.p., 09 July 2015. Web. 14 May 2017. <http://mentalfloss.com/article/66034/what-ages-do-people-first-get-married-each-state>.

Masteroff, Joe, Fred Ebb, John Kander, Jill Haworth, Jack Gilford, Bert Convy, Lotte Lenya, Joel Grey, Hal Hastings, Don Walker, John Van Druten, and Christopher Isherwood. Cabaret: original Broadway cast recording. Sony Music Entertainment, 2008. MP3.

Wetzel, James. American Families: 75 Years of Change. Publication. N.p.: Bureau of Labor Statistics, n.d. Monthly Labor Review. Bureau of Labor Statistics, Mar. 1990. Web. 14 May 2017. <https://www.bls.gov/mlr/1990/03/art1full.pdf>.

Kirk, Chris. “Nevada Has the Most Marriages, but Which State Has the Fewest?” Slate Magazine. N.p., 11 May 2012. Web. 14 May 2017. <http://www.slate.com/articles/life/map_of_the_week/2012/05/marriage_rates_nevada_and_hawaii_have_the_highest_marriage_rates_in_the_u_s_.html>.

TurboTax. “7 Tax Advantages of Getting Married.” Intuit TurboTax. N.p., n.d. Web. 15 May 2017. <https://turbotax.intuit.com/tax-tools/tax-tips/Family/7-Tax-Advantages-of-Getting-Married-/INF17870.html>.

Strike!

Update: Scroll to the bottom of the post for the latest.

This blog successfully participated in the nationwide general strike in protest of the United States government’s actions against refugees and immigrants. Access to our archives was temporarily suspended and has since been restored.

We do not apologize for this inconvenience.

All complaints should be directed to the United States government.

Read more about the strike here:

http://www.forbes.com/sites/michelinemaynard/2017/02/15/how-much-do-immigrants-matter-to-restaurants-d-c-will-find-out-thursday/#206d38201242

http://www.washingtontimes.com/news/2017/feb/15/day-without-immigrants-will-shutter-dc-businesses-/

https://www.nytimes.com/2017/02/15/us/politics/immigration-restaurant-strike-trump.html

Update:

The day is over, and access to our archives has been restored. The Day Without Immigrants strike made headlines nationwide, and shut down a good portion of my local area. It is heartening to see people participating in collective action in meatspace in addition to online action.

Perhaps surprisingly, the hit counter actually reached its highest count since launch over the past 24 hours. I’m not quite sure what to make of this. We received a great deal of positive feedback during this time, which is much appreciated.

The biggest complaint about this strike, aside from those disagreeing with the cause or with the idea of collective action in general, was that it was poorly organized, poorly publicized, and done on short notice. In a sense, this is good news. It demonstrated the capacity for quick reaction, and it provides feedback for future action; most notably, the general strike planned for International Women’s Day on March 8th.

There have been talks of further strike action on Friday, continuing into the weekend. I applaud this effort, although I fear that attempting to extend this largely spontaneous effort will overtax the limited economic resources and political will of those who are perhaps sympathetic, but not necessarily committed enough to risk their livelihoods. Better in this case, I believe, to play the long game.

One more thing: amid all the demonstrations and media coverage, the super-PAC behind the presidential administration quietly released a “Media Accountability Survey”. The questions are, of course, horribly biased, and it seems reasonable to assume that this will produce an accordingly biased result. There has therefore been an effort in some social media circles to publicize the survey and ensure that it receives a wide sample. For those interested, the link is below.

https://action.donaldjtrump.com/mainstream-media-accountability-survey/

Statistically Significant

Having my own website (something I can still scarcely say without adding exclamation points) has unlocked a great deal of new tools to explore. Specifically, having an operational content platform has given me access to statistics on who is reading what, who is clicking on given buttons, and where people are coming from. It is enthralling, and terribly addictive.

Here are some initial conclusions from the statistics page:

1) There is a weak positive correlation between the days I release new content and the days we get more views. This correlation is enhanced if we stretch the definition of “day” to include the following twenty-four hours, rather than just the remainder of the calendar day on which the content was released. This suggests that there may, in fact, be people actually reading what I write here. How exciting!

2) Most visitors register as originating from the United States. However, the script which tracks where our referrals come from paints a far more diverse picture. This could be a bug in the monitoring software, or people accessing the site from overseas could be using proxies to hide their identities.

3) The viewership of this blog is becoming larger and more international as a function of time.

4) More referrals currently come from personal one-on-one sharing (Facebook, web forums, shared links) than stumble-upon searches.

5) Constantly interrupting one’s routine to check website statistics will quickly drive one stark raving mad, as well as suck time away from writing.

These are interesting insights, and worth understanding for future posts. Of course, the immediate follow-up question is: What do I do with this data? How do I leverage it into more views, more engagement, and more shares? How do I convert these insights into money or fame or prestige? The idea seems to be that if a thing is being shared, there has to be some value coming back to the sharer aside from simply contributing to public discourse.

While I will not deny that I would enjoy having money, fame, and prestige, as of now, these are not my primary goals in maintaining this blog. If I do decide, as has been suggested, to follow the route of the professional sharer, soliciting donations and selling merchandise, it would not be in pursuit of Gatsbyesque money and status, but merely so that writing and not starving may not be mutually exclusive.

It is still strange to me that I have a platform. That, in the strictest sense, my writing here is a competitor of Netflix, JK Rowling, and YouTube. I am a creator. I am a website owner. I have a tendency to think of those aforementioned entities as being on a plane unto themselves, untouchable by mere mortals (or muggles, as the case may be) such as myself. And in business terms, there is some truth to this. But in terms of defining the meaning of “artist”, “creator” and “writer” in the twenty-first century, I am already on the same side of the line as them.

I suppose the heart of the matter is that, setting aside the fact that those entities command professional salaries, there is no intrinsic difference between us. They have platforms, and I have a platform. They have an audience with certain demographics, as do I. They receive value from the distribution of their work, and I do for mine (albeit in different forms and on different orders of magnitude).

Growing up, I had this notion that adulthood conferred some sort of intrinsic superiority, borne of moral and cognitive righteousness, upon each and every human who reached it. I believed that the wealthy and famous held this same distinction one step above everyone else, and that those in positions of legal authority held it above all. Most of the authority figures in my life encouraged this mindset, as it legitimized their directions and orders to me.

The hardest part of growing up for me has been realizing that this mindset simply isn’t true; that adulthood is not a summary promotion by divine right, and that, now that I too am a nominal adult, no one else can truly claim to have an inherently better understanding of the world. Different minds of differing intellectual bents can come to differing conclusions, but people in power are not inherently right merely because they are in power.

I am not a better or worse human being merely because I happen to have the passwords and payment details to this domain, any more than Elon Musk is an inherently better human for having founded Tesla and SpaceX. Yes, the two of us had the resources, skills, and motivation to begin our respective projects, but this is as much a coincidental confluence of circumstances as a reflection of any actual prowess. Nor are we better people because we have our respective audiences.

In this day and age, there is much talk of dividing people into categories. There are the creators and the consumers. The insiders and the outsiders. The elite and the commoners. The “world of success”, as we have been taught to think about it, is a self-contained closed loop, open only to those who are worthy, and those of us who aren’t destined to be a part of it must inevitably yield to those who are. Except this plainly isn’t true. I’m not special because I have a blog, or even because I have an audience large enough to yield demographic information. There is nothing inherent that separates me from the average man, and nothing that separates either of us from those at the very top. To claim otherwise is not only dangerous to the idea of a democratic, free-market society, but is frankly a very childish way to look at the world.