Supersonic Man

December 14, 2014

Same Height II: the Rejumpening

Filed under: Hobbyism and Nerdry — Supersonic Man @ 11:30 am

Let’s try to put a little more rigor into the question of “do all animals jump the same height”, as discussed in this previous post.  We saw what appeared to be the same results being produced at different scales under certain assumptions… let’s check if it’s really a mathematical equality.
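To make the question concrete before digging in, here’s a toy version of the scaling argument (my own sketch, with invented numbers, not the math from the post itself): if muscle work output scales with body mass, and body mass scales with the cube of linear size, then size cancels out of the jump height entirely.

```python
# Toy model of the "same height" scaling argument (a sketch, not the
# post's actual math).  Assumptions: muscle work output is a fixed
# number of joules per kg of body mass, and mass scales as L^3.

G = 9.81  # m/s^2

def jump_height(L, work_per_kg=5.0, body_density=1000.0):
    """Jump height (m) for an animal of linear size L (m).

    work_per_kg is an invented figure for muscle work per kg of body mass.
    """
    mass = body_density * L ** 3      # body mass scales as L^3
    energy = work_per_kg * mass       # muscle work scales with mass
    return energy / (mass * G)        # h = E / (m g): the mass cancels

# A flea-sized animal and a human-sized one come out identical:
print(jump_height(0.001), jump_height(1.0))  # both about 0.51 m
```

Whether real muscle actually delivers a size-independent number of joules per kilogram is exactly the kind of assumption the rigorous check needs to examine.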


stages of accumulation of wealth

Filed under: Uncategorized — Supersonic Man @ 10:42 am

In the first stage, people work hard and prosper, and a few of them become rich.

In the second stage, the rich become an established class, able to grow their wealth through investment, or waste it in idleness.  Successful working people rise up to replenish this class, while its less competent members drop out.

In the third stage, the class has political power and sets the terms for how working people can prosper.  They control, and take a big cut from, most means by which it’s possible to rise through hard work.  Both the upper and the middle classes still have growth, but moving between classes becomes rare.

In the fourth stage, the upper class convinces itself that the wealth produced by labor is actually generated by capital, and that therefore they are entitled to it.  Now the ruling class’s wealth grows while working people make no gains at all, and are reduced to just struggling to stay above their less fortunate neighbors.

In the final stage, the ruling class decides it’s entitled to not just the new wealth produced by working people, but to their existing assets as well.  They try to take all money from everyone and reduce everyone but themselves to poverty.  To succeed fully would, of course, collapse the whole system, or at least reduce it to something no more advanced than feudalism, but they try to cut it as close as possible while keeping everyone still toiling.

We’re in the fourth stage now and some are trying to push us to the fifth.  But the good news is, we can turn back at any time.  We’ve done it before.

November 24, 2014

what is liberalism?

Filed under: Rantation and Politicizing — Supersonic Man @ 12:06 pm

I’ve talked a number of times about conservatism here and elsewhere, but rather little about liberalism.  If I’m a liberal, what does that mean?  I think I now have a clear description of what it means to be a liberal.  If I were to state what liberalism stands for in one sentence, giving it a sort of mission statement, I’d say this:

We the people have the right and the responsibility to manage our social institutions so that they work to our benefit.

In other words, if our political and economic systems are not serving us, we have the right to choose ones which do.  We have every right to modify, adjust, debug, or even wholly replace our laws and public institutions in order to improve their outcomes.  They are ours.  We created them, and their purpose is to serve us, and not vice versa.  And when an institution is working badly, we don’t just have a right to adjust it, but a duty.  Neglect and inattention to problems with them are irresponsible.

When I listen to conservative critiques of liberal ideas, it seems to me that no matter what kind of conservative perspective it comes from — whether feudal or theocratic or law-n-order or laissez-faire capitalist or even anarcho-libertarian — what they have in common is a refusal to admit this right to choose.  They may claim that it’s impossible, that it’s unworkable, that it’s immoral, or that it’s just too risky… but all come down to someone with power telling you that you need to accept and embrace and surrender to the system as it is, or as they would like to make it.  They all agree that for you to change the system to your taste is unacceptable — that it’s wrong and harmful for you to try.  They expect society to work by certain rules, and once those rules are established, they get treated as sacrosanct. And in the end, they see themselves as having the moral authority to decree what is right for society, while you lack that authority unless you happen to agree with them.

Even those who boast of seeking to lift the burden of law from you as much as possible, in the name of liberty, follow this same pattern: they treat their no-rules metarule as morally inviolable.  And if the consequences of living within their system turn out to be harmful or limiting to you, that’s your problem, not theirs.  You should have worked harder to make the best of the hand you were dealt.  And if you ask whether the people as a whole are better off with these rules, their answer is either propaganda that says “of course they are” with no data to back it up, or to tell you that your question is the wrong one to ask.

Any system will develop problems if you leave it running long enough without adjustment.  Rules that seem balanced and fair at first start to produce uneven rewards for those who have a chance to take advantage of loopholes or artificial opportunities, or they start encouraging unhelpful behaviors that weren’t intended by the drafters of the laws.  The way to handle these problems is to dynamically adjust the system as you go along.  By responding actively to issues and problems, you can keep imbalances and flaws from blowing up to catastrophic size.  Liberalism is not just the recognition that we have the right to drastically rewrite the social order if necessary: it’s also a recognition that we have a responsibility, once we have a good system, to constantly make small tweaks and adjustments to keep it running well.  Policy has to be active and responsive, not static holy writ.  In other words, we need to govern.  We do this ongoing adjustment of policy on the basis of whether the system is producing desirable outcomes, not by whether it embodies desirable moral virtues on paper.  Antigovernment ideology basically says we should let problems run their course unabated, instead of allowing ourselves to catch them small — they’re telling us that to actively fix things is not a job we can be trusted with.

One common way — possibly the most common way — that a social problem can grow out of control is the development of a privileged ruling class.  In the end, what system you pick initially almost doesn’t matter: whether it’s tribal anarchy, warlordism, feudal aristocracy, theocracy, corporatism, anarcho-capitalism, socialism, or communism, they’ll all eventually produce a minority group which has large and increasing power, while the power remaining with the majority decreases.  This is because whatever allows one person to get a little bit ahead of those around him will then allow him, once he’s gained that ground, to improve his advantage further.  Any system will have some tendency to be pulled toward this outcome, no matter what principle it starts with!  The only way history has shown that this growing advantage for a few can be held back is for the people to have the agency to make countermoves to check the growth of excess power and privilege, in whichever particular areas it starts to crop up.  This is why we have things like banking regulations — because they were needed in order to counteract the concentration of wealth and power into places where they no longer benefit society.  When such regulations are repealed, that undue concentration comes right back.

Antiregulatory demagogues like to warn of slippery slopes, where overregulation will produce terrible stifling results.  But when regulation is dynamic and active and based on outcomes, this becomes a non-problem: when things start to have a bad effect, they’re corrected.  I believe that application of this dynamic approach is what’s responsible for every historical success at producing free and prosperous societies with widespread opportunity.  Every society that has provided real liberty and opportunity for average citizens has been essentially liberal; there has never been another way to do it.  Nobody can plan a social order that will, a priori, support widespread opportunity and a large middle class (or whatever other definition of a successful society you wish to use) over indefinite time.  You can only keep that going by reacting to imbalances that undercut your desired outcome.

I think this definition clarifies some things that might otherwise be confusing, such as the paradox of Soviet communism: the Russian Revolution was clearly liberal, yet the Soviet Union which arose from it was not at all liberal, even though they were both based on the same values and rhetoric.  This definition clarifies that the ideology is not what matters.  What matters is responsiveness to the public. One allowed the people to make changes and the other did not.

Speaking of outcomes, of course, raises the question of how we decide what outcomes are desirable.  You can be liberal by my above definition without necessarily being just or democratic.  Consider the Habsburg emperor Joseph II: he was a tremendous liberalizer of dysfunctional old feudal institutions, but also an extreme centralizing authoritarian.  Conservatives tend to worry that this is what liberalism could lead to.  Will liberal forces get caught up in some enthusiasm that makes them forget the “we the people” part?  In theory, it’s certainly possible.

But I’m not nearly as worried about the question of deciding which outcomes are desirable as you might think.  I see that the people, when given a true choice, can mostly be counted on to support liberty over slavery, equality over privilege, and responsible preparation for the future over short-term indulgence.  Different conservative movements have widely varying ideas of what kind of society everyone should have to live under, and liberal people’s ideas of what they’d like to see also have lots of variety.  But when you put everyone together, basic fairness and justice are concepts that almost everyone supports.  And if they occasionally forget that, they will remember it when reminded.  So I consider the question of exactly what outcomes to pursue to be a secondary issue.  Given a true choice in the matter, the people can normally be depended upon to choose fairly well.  The important thing is just that they are really able to have that choice — the opportunity to notice when something isn’t working, or when they’re being taken advantage of, and do something about it.

July 19, 2014

Freedom and Christianity

Filed under: Rantation and Politicizing — Supersonic Man @ 7:49 am

Why do so many people who believe strongly in freedom also believe strongly in the Christian bible? The two aren’t particularly compatible. In the Old Testament, slavery is endorsed and its victims are told to know their place. (There is one exception: Deuteronomy forbids sending escaped slaves back.) And the New Testament isn’t really any better: its spiritual virtues are all about humility and submission, not just to God but to earthly masters as well.

Small wonder that some would decide that the divine figure who represents liberty is not Christ or Yahweh, but Satan. He’s all about letting you make your own choices and live with the results. But is the Satanic approach any better? That’s where you can do whatever you want, not just for yourself, but against anyone else. Take from your neighbor if he’s weaker than you, rape anyone you can catch, or despoil things that other people depend on — it’s all good with Big Red. He’s even less opposed to slavery than Jesus was.

Neither offers any concept of human rights. Those are a strictly secular invention. And without them, there’s no way for a society that values freedom to preserve it in practice.

June 19, 2014

the Swift programming language(s)

Filed under: Hobbyism and Nerdry,thoughtful handwaving — Supersonic Man @ 9:52 pm

So Apple is regretting the corner they painted themselves into by having their core development language be Objective-C.  This language is a horrid mashup made half of Smalltalk and half of traditional unreconstructed C.  Compared to C++, the modern half is more modern, but the primitive half is more primitive.  Steve Jobs used it for NeXT during his time away from Apple, and brought it back with him.  But what looked cool and exciting in 1986 is looking awfully outdated today.

The trend in the industry is clearly moving away from these half-and-half languages, toward stuff that doesn’t inherit primitive baggage from the previous century.  Microsoft has had great success by stripping all the old C-isms out of C++ to make C#, and Java — the oldest and crudest of this new generation of programming languages — may still be the world’s most widely used language, even though most people probably now see it as something that’s had its day and is not the place to invest future effort.

Now Apple has announced a nu-school language of their own, to replace Objectionable-C.  They’re calling it Swift.  It’s even more hep and now and with-it than C#.

There’s just one problem: there’s already another computer language using the name.  It’s a scripting language for parallel computing.  Its purpose is to make it easy to spread work over many computers at once.  And this, to me, is far more interesting than Apple’s new me-too language.  (Or any of the other new contenders coming up, like Google’s Go or the Mozilla foundation’s Rust.)

See, massive parallelism is where the future of computing lies.  If you haven’t noticed, desktop CPUs aren’t improving by leaps and bounds anymore like they used to.  Speeds and capacities are showing a much flatter growth curve than they did five years ago.  You can’t keep making the same old CPUs faster and smaller… you run into physical limits.

And this means that if we want tomorrow’s computers to be capable of feats qualitatively beyond what today’s can do — stuff like understanding natural language, or running a realistic VR simulation, or making robots capable of general-purpose labor — the only way to get there is through massive parallelism.  I think that in a decade or two, we’ll mainly compare computer performance specs not with gigahertz or teraflops, but with kilocores or megacores.  That is, by the degree of parallelism.

One problem is that 95% of programming is still done in a single-tasking form.  Most programmers have little idea of how to really organize computing tasks in parallel rather than in series.

There’s very little teaching and training being directed toward unlearning that traditional approach, which soon is going to be far too limiting.  Promulgating a new language built around the idea — especially one that makes it as simple and easy as possible — strikes me as a very positive and helpful step to take.  I’m really disappointed that Apple has chosen to dump on that helpful effort by trying to steal its name.
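To illustrate the mental shift involved, here’s a minimal sketch using Python’s standard library (nothing to do with either Swift — just the serial habit next to its parallel equivalent):

```python
# Serial vs. parallel versions of the same job, sketched with Python's
# standard library.  The parallel form spreads the work across CPU cores.
from concurrent.futures import ProcessPoolExecutor

def work(n):
    """A stand-in CPU-bound task: sum of squares below n."""
    return sum(i * i for i in range(n))

jobs = [200_000, 300_000, 400_000, 500_000]

# The traditional serial habit: one task after another.
serial = [work(n) for n in jobs]

# The parallel rethink: the same logic, expressed as a map over workers.
if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(work, jobs))
    assert parallel == serial  # same answers, computed concurrently
```

The hard part isn’t the syntax, it’s learning to state the problem as independent pieces of work in the first place — which is exactly what a language built around parallelism would push you toward.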

June 12, 2014

getting through the Dumb Layer

Filed under: Rantation and Politicizing,thoughtful handwaving — Supersonic Man @ 1:38 pm

I am defining a new term, “the dumb layer”.  What I mean by the term is, any layer of human interface which is designed to deal quickly, or at low cost, with simple issues and questions that do not (or which someone assumes should not) require an intelligent response.  Some examples of Dumb Layers are:

  • voice menu systems that answer telephones, which you have to navigate through (or even actively outwit) in order to speak to a live person
  • software user interfaces which hide their more complex features behind some kind of gateway where you have to select “advanced settings” or “expert mode”
  • scripts which tell customer service workers to answer all incoming queries initially with a canned response covering basic common issues, so that you never get a relevant or thoughtful reply until you send in a followup query
  • bureaucracies which routinely reject legitimate requests for action until you show persistence in nagging them, or which ignore you until you submit lots of “required” paperwork
  • software which has a simple GUI to make it easy to use, but also a command-line or script-based interface which is more powerful
  • anything that appears when you click a Help button
  • anything that provides premade style templates as an alternative to manual styling
  • the Check Engine light in your car

There are lots more types.  A lot of times, a Dumb Layer is a feature of a machine, with the goal being ease of use for the majority, but there are lots of human institutions that also have a Dumb Layer, implemented formally as a set of rules that employees are instructed to follow in dealing with the public, or informally as an attitude of lackadaisicalness toward anyone who they think they can safely ignore.

Dumb Layer design wasn’t very common when I was young.  Nowadays, Dumb Layers seem to be everywhere.  And if you want good service, getting it depends on how adroit you become at penetrating the Dumb Layer to reach someone who is empowered to think about what they’re doing.  This becomes annoying if you have to have an extended back-and-forth over several days, particularly over the phone, as you may have to redundantly re-navigate the Dumb Layer on each new call.

Sometimes, designers and authorities are seduced into believing that the Dumb Layer should be able to do everything, and there’s no need to let anyone through to anything smarter.  This makes economic sense if you’re providing some service at a super low price and can’t afford to give interactive support.  But it can also afflict systems and institutions that really don’t have any excuse.  Such systems can gradually become acutely dysfunctional, even as superficially most business goes on normally with no problems.

Note: sometimes what appears to be a Dumb Layer is actually a security layer, and needs to be there to limit unauthorized access.  And sometimes it’s a safety feature, like traction control and antilock brakes.  (A Harley-Davidson has no dumb layer.)

I wrote a post here a while back about how Google and other search engines seem to actually be getting less useful as they “improve”.  I think this is an instance of the dumb layer taking over.  The smart layer of Google is getting more and more inaccessible, and Bing and the others don’t seem to be any better.

Lastly, I just want to applaud some organizations which have chosen to have no Dumb Layer.  Wikipedia is one: you search for advanced quantum mechanics or unsolved problems in mathematics, and you get the whole enchilada plopped down right in front of you, not filtered or simplified in any way — just as the ability to add new content is right there with no filter, as long as you don’t abuse it.  To me, this makes it not just the largest and most accessible encyclopedia ever assembled, but also the most genuinely useful.  People may put in facetious stuff, such as describing Solange Knowles as “Jay-Z’s 100th problem”, but every other encyclopedia loses far more value because of all the detail that the editors decide has to be left out as not of broad enough interest.

I don’t use Photoshop, but as I understand it, it has no dumb layer.  This is one reason people consider it the definitive tool.  (I use less expensive competing software such as DxO, which does have a bit of a dumb layer, but you can quickly un-hide all the smart bits.) Photoshop is an example of something common in many fields: the cheap products have thick dumb layers, and the expensive elite products lack them. Cameras, for instance: the more you pay, the fewer controls and options are hidden in “helpful” menus. The same applies to music gear: keyboards and mixers and production software get de-dumbed as you pay more. The ultimate undumb musical tool is, like, a Stradivarius violin: pay millions and it doesn’t even come with a chinrest.

One reason people often prefer Android to iOS is because its dumb layer is more permeable.

Anything that is shared as “open source” is a counterstrike against dumb layers.

May 31, 2014

the unifying force of conservatism

Filed under: Rantation and Politicizing — Supersonic Man @ 11:48 am

The Republican party, which as we know is in a fairly desperate situation with regard to attracting younger voters, may have hit on something that will help: they’re beginning to dabble in marijuana reform.  This has always been attractive to libertarians, and as the religious fundamentalists and old-south racists fade away as electoral powers, a libertarian wing of the GOP looks like it might be coming into ascendance.

There seem to be two branches to the GOP base.  (We leave aside, for the moment, the monied establishment insiders, and look just at mass voting groups.)  One includes the aforementioned fundies and racists, plus assorted other know-nothing antigovernment types such as the militia movement.  This is the traditional GOP base, the angry white male low-information voters, the foundation of the “southern strategy”… Bobby Jindal’s “stupid party”.  One thing that unifies this block is a negative attitude toward science and intellectualism.  This group is concentrated in the older generations and is, thankfully, fading away with time.

The other branch is not anti-intellectual at all, and isn’t particularly religious.  Many of its members are successful high-tech engineering types, or smart go-getters in finance.  It is racially colorblind as a rule, and not notably patriarchal or misogynistic (though it still may attract men a lot more easily than women).  Unlike the first group, they usually have modern attitudes about issues having to do with sex and drugs.  They have no problem indulging in the kinds of decadent lifestyles that the first group loves to hate.  This group includes libertarians, objectivists, and a variety of more mainstream groups who just happen to believe in entrepreneurship, small government, low tax, light regulation, and an ideal of meritocracy, such as for instance the Log Cabin Republicans.  It can include people who are in some ways quite radical, such as transhumanists.

The difference between the two groups is so wide that one might naturally wonder what they’re doing in the same party in the first place.  Where is their common ground?

The only one I can see is that they both defend privilege.  Libertarians back a system that lets all the people who have wealth and private power keep it and expand it.  Cultural traditionalists, including religious traditionalists, end up defending traditional advantages for favored groups.  Racists and sexists oppose taking any advantage from white males, who happen to include most of the rich and powerful people protected by the other groups.  They can all agree, basically, on letting the rich get richer, and though none of the three actually wants an impoverished middle class, they’re all capable of lying to themselves about that somehow being unrelated to the protection of the wealthy which they support.

So letting the rich get richer is now the only thing that the Republican party, as a unified whole, stands for.

May 29, 2014

why the insurance industry is highly regulated, and needs to stay that way

Filed under: Rantation and Politicizing — Supersonic Man @ 10:41 am

I currently work for an insurance company.  Not a great big one, but a leading one in its niche, and one that seems to treat its clients pretty well.  And I get to see firsthand the impact of heavy regulation on the company.  Some people look at this impact and argue that business would be much more efficient without it.  But that won’t work in this case.  I will explain here why the insurance industry needs this regulation.

Let’s assume that some insurance company executives are more ethical and principled, and others are less so.  Let’s consider the incentives acting on the latter group.  Getting high premiums is good, but drives away new customers.  Advertising lower premiums brings in new customers, allowing total revenue to go up by more than enough to compensate for the loss of margin on each.  So the incentive is to compete with other insurers with a low price.

Paying claims is bad.  It not only loses money right away, it forces the raising of premiums.  The more claims you can avoid paying, the more competitive a rate you can offer to new customers.

Now, the revenue from premium payers is steady, but the cost of claims is not.  One year might be twice as expensive as another.  This means you need a cash reserve.  But building that reserve requires slightly higher premiums, so the incentive is to keep it small.  The incentive is to underestimate the degree of variability, to assume that the exceptionally bad claim years won’t happen.

The cash reserve can earn money through investment.  Some investments pay more than others, but the ones that pay more also pay at a less predictable rate.  So the incentive is to underestimate that variability and invest in overly risky opportunities for capital growth.

Whenever a company has a few good years in a row, the amount of saving they’re doing for future claims starts to look excessive, and the incentive is to think that it’s now safe to cut back on savings, and lower premiums.

Put all this together and what you get is a situation where market forces push insurers toward being underprepared for major claims.  If some companies prepare well and others prepare poorly, the ones making poorer preparation will tend to take customers away from the ones preparing well, by offering lower prices.  This can force the companies that are reluctant to do so to also shave their premium prices.  So the number of poorly prepared companies has to increase.  And because the rewards of underpricing are immediate but the negative consequences are uncertain and may not occur for many years, it’s very easy for people to convince themselves that they’re not underpricing at all.
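You can watch this dynamic in a toy simulation (every number here is invented for illustration, not drawn from any real insurer): a “prudent” company prices for bad claim years, while an “optimistic” one prices as if they won’t happen and undercuts it.

```python
# Toy Monte Carlo of the underpricing incentive described above.
# All figures are invented for illustration.
import random

def simulate(premium, reserve, years=50, seed=0):
    """Return True if the insurer stays solvent for `years` years."""
    rng = random.Random(seed)
    cash = reserve
    for _ in range(years):
        cash += premium
        # Most years claims are modest; roughly one year in ten is ruinous.
        cash -= 120.0 if rng.random() < 0.1 else 60.0
        if cash < 0:
            return False
    return True

# Expected annual claims are 0.9*60 + 0.1*120 = 66.  The prudent insurer
# charges a margin above that and keeps a real reserve; the optimistic one
# prices just above the *good-year* cost and keeps almost nothing back.
prudent_survivals = sum(simulate(70.0, 100.0, seed=s) for s in range(100))
optimist_survivals = sum(simulate(61.0, 20.0, seed=s) for s in range(100))
print(prudent_survivals, optimist_survivals)
```

In most simulated histories the optimist goes broke somewhere within the fifty years — while undercutting the prudent company on price the whole way there, which is the competitive pressure described above.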

Eventually, a crunch will come — the investments may go south, and at some point the big claims will hit.  If the investments do poorly enough, even routine claims may suddenly be too big.  Once this happens, the company can either exhaust their capital, or cheat their customers, coming up with some technicality as grounds to refuse to pay a claim.  The latter might work, or it might make them liable to a costly lawsuit.  Either way, it’s not sustainable.  Eventually, you either have to give up the money, or lose your reputation so customers won’t trust you with their money anymore.

Now a free market apologist might argue that the market will eliminate these bad companies eventually, thereby leaving the good ones behind.  But once some companies turn bad, competitive pressure forces the others to either get down in the mud with them, or shrink and become minor niche providers.  This means that the majority of customers buying insurance end up being not really insured.  Legally they’re covered, but in practice they aren’t — they’re still subject to the risk they tried to eliminate by insuring themselves.  The result of this is that the entire industry ceases to function; no real insurance is being provided except to a select few who are willing to pay a top-shelf price for it.

As a result, the only way to have an insurance industry that really insures people is to force insurers to expose their financial data, so that everyone can see if they are at risk of failing to pay claims, and to test those findings against some standards of financial preparedness.

But that’s not the only issue.  Even when all the insurers are financially healthy, each one still has an incentive to resist their obligations to pay claims.  There’s always an immediate reward for finding ways to deprive a customer of coverage when they make a claim.  And if some companies do so and others don’t, those companies gain a pricing advantage in the short term.  This can mean that the whole industry, again, can be dragged toward failing to really cover their customers.  This isn’t theoretical: a situation like this did happen in the pre-Obamacare health insurance industry.  It became the norm to cheat and rob many customers through legalistic trickery.  (We shall see in time how much difference the new law manages to make.)  The auto collision insurance industry has allegedly also shown tendencies in this direction, when the regulatory climates of particular states allowed it.

Again, the free market does not weed out the bad apples — or rather, it doesn’t weed them out quickly enough to discourage their proliferation.  Instead, it pushes the good ones to emulate the bad.  And even if you clean out all the bad ones, the least good of the ones remaining will still exert a slight downward pull, which increases with time.

The only way to overcome this steady downward movement is to put a floor under it, which means setting and enforcing minimum standards of dependability in paying claims.  Our current regulatory environment, unfortunately, is spottier in this area than it is in the financial one.  But when and where it’s done properly, the insurance industry can function pretty well, providing a valuable and necessary service.  When it doesn’t, the industry can gradually shift from useful to parasitical.  If that shift becomes complete, one might as well never have bothered to start insuring oneself.

May 13, 2014

cosmic inflation

Filed under: Hobbyism and Nerdry,thoughtful handwaving — Supersonic Man @ 10:24 am

The cosmological inflation theory always sounded weird to me.  I’ve been reading a bit about it, trying to get my head around what they’re claiming.  And I’m unconvinced that it’s a valid theory, even though it’s currently winning at prediction.

The classical big bang theory certainly has problems that need addressing.  Now, there’s no doubt that there was a bang which was big.  Everything we see in the universe is flying apart from everything else, and behind everything we can see is a thermal glow which, if you account for redshift, appears to come from hot gas just at the point where it cools enough to be transparent. (This is around 3000 Kelvin, about as hot as a halogen lamp filament).  So there’s no way around the conclusion that, thirteen point something billion years ago, the entire observable universe was packed into a much smaller volume which was dense and hot, and that it exploded out at terrific speed.  That much is clear.

The problems arise when you try to extrapolate what came before the ball of dense hot gas, which clearly was already expanding.  The math says that it must have all expanded from a volume that was much smaller and hotter still.  In fact, the classical mathematical solution to the big bang insists that the initial explosion must have taken place in a region that wasn’t just tiny, or even infinitesimal, but in no volume at all: an absolute single point where density and temperature equalled infinity.

This answer is nonsensical.  Scientists are now rightly rejecting it, as many of them also reject the notion that the center of a black hole must be a singularity of zero size.  Clearly both are an oversimplification.

The trouble is, if you postulate anything else, the consequences run up against observed data.  The key point of observation is that the universe, as far back as it’s possible to see, has an essentially uniform density and temperature in all directions.  It appears very much as if the volume of hot gas which existed at the earliest moment we can see was in a state of thermal equilibrium with itself, as if all energy differences had been allowed to settle down and blend themselves together at a smooth common level of heat.

But such a blending could not have happened.  There was no time for it to happen in.  And worse, the uniformity includes regions which could not have interacted with each other, because light is only now managing to cross from one to the other, or hasn’t even done so yet.  If we look at opposite sides of our sky, we see as far as light has travelled either way in all of history, which means the total distance from one side to the other has only been crossed halfway.  Since no influence can move faster than light, the two opposite sides cannot possibly have exchanged any energy with each other.  It’s true that they may once have only been a foot apart, but they moved away from each other so close to lightspeed that the light from one side still hasn’t reached the debris of the other.  This means they cannot have exchanged any form of energy.
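The arithmetic behind that “crossed halfway” claim is simple enough to write out (this ignores expansion entirely, which only makes the isolation worse):

```python
# Naive version of the horizon argument: light from each side of the sky
# has had the whole age of the universe to travel toward us, so the two
# edges of the visible universe are twice that distance apart -- and a
# signal between them has therefore covered at most half the gap.
AGE_GYR = 13.8                       # age of the universe, in Gyr

one_way = AGE_GYR                    # light-travel distance to us, in Gly
sky_to_sky = 2 * one_way             # distance between opposite edges
fraction_crossed = one_way / sky_to_sky
print(fraction_crossed)              # 0.5 -- only halfway across
```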

The smallest random quantum fluctuations back then should loom large today as differences in the cosmic background temperature, and in the density of galaxies.  The observed differences are too small to account for without some means of smoothing them away.  It can’t have been ordinary thermal equilibrium.  What could it have been?

Similar questions apply on more esoteric levels too, such as why the curvature of space appears to be so near the ideal value that makes it absolutely flat.  The problem isn’t that it’s close today, since our measurement of it is not all that precise… it’s that any small departure would grow over time, so if it’s close now, then in the distant past it must have been ultra-close, with improbable precision.
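To make the “any small departure would grow” claim concrete, here’s the standard textbook sketch (my own addition, not spelled out in the post): rearranging the Friedmann equation gives the deviation of the density parameter Ω from 1, and tracking how that deviation scales shows it grows with time.

```latex
% Friedmann equation, with density parameter \Omega = \rho / \rho_{\mathrm{crit}}:
\[
H^2 = \frac{8\pi G}{3}\rho - \frac{k}{a^2}
\qquad\Longrightarrow\qquad
\Omega - 1 = \frac{k}{a^2 H^2}
\]
% In the matter-dominated era, a \propto t^{2/3} and H \propto 1/t,
% so aH \propto t^{-1/3}, giving
\[
|\Omega - 1| \;\propto\; t^{2/3}
\]
% Any deviation from flatness grows with time; run the clock backwards,
% and \Omega must have started improbably close to 1.
```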

The inflation theory is an attempt to answer these questions.  It postulates that some unknown repulsive force caused space itself, and the matter in it, to undergo some kind of self-generating expansion which kept the universe hot and dense while it grew at an immense rate.  Then, at some point, it ran out of gas and the universe started coasting outward in a conventional way, as we now observe.

The math they postulate for this process would have the effect of ironing out the irregularities that preceded it, making everything steadily smoother and flatter as long as it continued.  And it allows for a time before the inflation started, in which parts of the universe that are now inseparably distant could have achieved energy equilibrium.  It would only have required a brief instant of delay between the initial creation, when all the matter may have been trapped by gravity in a very small initial volume, and the commencement (by entirely hypothetical means) of inflation.

It might even mean that the initial appearance of the pre-inflated universe could be possible as a quantum fluctuation in vacuum, because the amount of energy that needs to appear from nothing might be comparatively tiny.  In these versions, the universe’s gravitational field constitutes a store of negative energy, and the total net mass of the cosmos is zero.
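The “total net mass of zero” idea can be roughly sanity-checked with Newtonian numbers.  The mass and radius figures below are rough published estimates I’m supplying, and the Newtonian self-energy formula is only an order-of-magnitude stand-in for the real general-relativistic bookkeeping:

```python
# Order-of-magnitude check of the zero-energy-universe idea: compare
# the rest energy of the observable universe's matter with the
# magnitude of its Newtonian gravitational self-energy.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8       # speed of light, m/s
M = 1.5e53      # rough mass of ordinary matter in the observable universe, kg
R = 4.4e26      # rough radius of the observable universe, m

rest_energy = M * c**2        # positive energy locked up in matter
grav_energy = G * M**2 / R    # magnitude of gravitational self-energy

print(f"rest: {rest_energy:.1e} J, grav: {grav_energy:.1e} J")
print(f"ratio: {grav_energy / rest_energy:.2f}")  # same order of magnitude
```

The two come out within an order of magnitude of each other, which is the (very loose) sense in which a negative gravitational term could cancel the positive energy of matter.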

But if you run it backwards to see what initial conditions could have worked out that way, many say that it falls victim to the same problems as a conventional big bang.  Whatever state preceded it has to have been already uniform to an utterly unnatural degree.  Some say that it actually makes the problem worse.

But aside from that, the whole theory just seems like the ultimate in ad-hockery, a contrivance of arbitrary rules and conditions based on imaginary physics, tuned to “predict” observed results. And it makes some claims that are hard to credit, like that the inflationary expansion was in some sense faster than light. Apparently this is a mathematically allowed solution to the equations of general relativity.

We already knew that we can’t account for the ultimate question of why there is something and not nothing.  Even a religious hypothesis doesn’t have an answer for that.  But even leaving that aside, it’s looking like we’ve really got no answers as to what physical conditions must have existed in the early part of the big bang — at a point where matter and energy had fully come into being and started obeying the physical laws of our cosmos, but before the earliest point we can observe.

I’d say both the conventional big bang theory and the inflation theory must be missing something essential.  There’s something big going on there which we’re totally failing to see yet.  Probably something that will look blatant and obvious in hindsight someday. Maybe brane theory, or something equally far out, can provide a missing piece if it develops enough.

The inflation theory may be half true. I’m sure some parts are valid. Maybe even most parts. The part where energy and gravity cancel each other out and can therefore be created together in arbitrarily large quantities, for instance, sounds pretty attractive. But I think we’re probably still missing some key piece of context in which these parts can make a good overall theory.

It’s clear that the inflation hypothesis in its current form is incomplete… I wouldn’t be surprised if whatever comes along to complete the picture ends up discarding a large part of it in the process, and the hypothetical inflation stage seems like a prime candidate for something that might turn out to not be needed anymore if we only knew what was missing.

April 1, 2014

the theory of funniness

Filed under: Hobbyism and Nerdry,thoughtful handwaving — Supersonic Man @ 11:41 am

I recently ran across a new psychological theory which purports to pin down exactly what it is that makes something funny. It’s called the Benign Violation theory, and it basically says that for something to be funny, it has to constitute some form of violation — in either the sense of breaking the rules, or the sense of an assault — in a way that’s somehow harmless. The inventors of this theory claim it explains everything from puns to tickling.

What are the past theories it’s contending against, and how well do they do at predicting what is funny?  The other contenders essentially boil down to two ideas: the idea that humor is based on seeing someone else encounter misfortune (the “superiority theory”), or the idea that humor is based on noticing and resolving incongruity.  The latter has many offshoots and variants, but they are all based on the same core concept.  (There’s also a sort of Freudian theory that says laughter is about release of tension, but as far as I can see this lacks any predictive utility, so I’m going to ignore it.)

Let’s dispose first of the superiority theory.  Here’s a moment which made me loudly guffaw with no one else around: I was driving down California’s magnificent Pacific Coast Highway, one of the most scenic roads anywhere on Earth, in a state of utter joy, and I encountered a sign that said “Begin Scenic Route”.  Incongruity theory works fine for illustrating the humor in this, but superiority theory utterly fails.

Superiority theory also totally fails to show why a pun can be funny.

Let’s list some of the kinds of laugh-inducing experiences which different kinds of theory will need to stretch to explain:

  • dumb puns and knock-knock jokes
  • seeing an athlete get hit in the nads
  • someone doing an accurate impression of a famous person’s mannerisms
  • highly exaggerated expressions, gestures, and body language
  • watching cute kittens clumsily bumble around
  • watching squirrels display amazing agility
  • someone speaking poorly with misused words
  • someone speaking cleverly with fancy vocabulary
  • farts

Not a lot in common across that list.

Incongruity theory can cover a lot of ground.  It explains why, if you’re dining outdoors and bird shit suddenly lands on the food, it’s funnier on a birthday cake than on a salad.  (Yes, I know this from experience.)  It might stretch to cover both ends on the issue of exceptionally stupid or exceptionally smart vocabulary.  It explains why it’s funny to see someone caught by his own practical joke.  But it has to work a lot harder to explain the humor in a successful practical joke, or in a pie to the face, unless the recipient is unusually dignified.  Both of those are cases where the benign violation theory seems to hit it dead center.

The benign violation theory, in turn, has to stretch for some cases.  You really have to work at finding secondary meanings of the word “violation” to explain the humor of kittens, or why an insult is better if it’s cleverly rhymed and flowed in a rap battle.  It looks to me like once you stretch the word “violation” far enough, it pretty much gets reduced to being a synonym for “incongruity”, as the only thing being violated is our expectation of what should normally happen — that is, our sense of congruity.

One other approach is a suggestion I read that speculates on an evolutionary purpose for humor, besides being a way to demonstrate a sharp mind for sexual selection purposes: it helps us spot bad thinking.  We laugh at ignorance and misperceptions and wrong logic.  Humor rewards us for being alert to cognitive errors.  This has real survival value.  And it’s a point in favor of incongruity theory, which matches this supposition pretty well.

But… speaking of animals, consider this youtube clip of a shorebird feeding in mud, which I shot a few months ago.  In the middle of the sequence, a duck walks through the frame.  Some viewers crack up every time they see the duck.  Why is that duck funny?  It certainly isn’t because we’ve spotted ignorance or invalid logic.

Probably the answer has something to do with misattributing human qualities to animals.  Maybe some part of our minds treats the featured willet as the star of the scene, and the duck as a bumbler who doesn’t know to stay out of the shot.  This is emphasized by how he pauses directly in front of the willet.  That would make sense both as incongruity and as violation, if we were filming people.  And maybe squirrel antics are funny because it’s incongruous, or a violation, to imagine a human carrying them out.  Maybe.

So it’s looking to me like both the incongruity rule and the benign violation rule cover about two thirds of cases quite well, but each has to stretch and struggle to cover some parts, and often those are the parts that are best covered by the other.  Something like, say, tipping over an occupied outhouse (which is funny if you’re mean enough to have a very broad notion of “benign”) is pure violation with no real incongruity, whereas a comic actor’s stagey mugging is hard to construe as any sort of violation… though I suppose you could argue that he’s “violating” his own dignity.

To examine the last case in a bit more detail, let’s say our comic actor is playing a scientist or inventor.  He gets an idea and loudly says “A-ha!”.  Why is it funnier if, while saying so, he points his right index finger straight up?  I really can’t find a way to apply the word “violation” to that.  What he’s doing is making a callback to a cultural trope or stereotype in which that gesture was considered appropriate for such a declaration.  It’s incongruous, one might say, because our experience is that people saying “aha” in real life don’t do this.

Here’s a tricky one.  Let’s say our comic actor is playing an upper class toff.  Sometimes he gets a laugh by using a word or phrase that’s just especially redolent of the stereotype he’s playing — when he’s been sounding snooty all along, but then comes up with something that seems even snootier than before.  It’s hard to see how this is either an incongruity or a violation, because instead of applying a twist to our expectations, he’s reinforcing them.  The humor seems to come from a reaction that says, we thought we had the measure of how much this character differs from you and me, but now it turns out he’s even more unlike us than we thought.  The same can happen with any other character type we have strong stereotypical expectations of, like a southern redneck or a drag queen or a hippie or a New Jersey mafioso.

You can get humor out of any situation where you deal with a person whose culture is strongly different from ours.  The theory that humor is there to detect bad thinking applies here, I think: when we look for erroneous logic and misperception and ignorance, the differences between cultures can easily register as false positives.  The person seems locally ignorant and foolish, regardless of how clever and knowledgeable they really are.  And I suppose that’s an incongruity relative to how we expect our friends and neighbors to act.

So when I first heard about the benign violation theory it seemed pretty persuasive, but after examination I’m now leaning toward the conclusion that it covers more ground than incongruity only if you try to stretch the concept of “violation” to include incongruity as a subset.

I’d just like to grumble about how people get excited about a theory and convince themselves that it explains everything. A lot of these humor theorists are guilty of that — they’re sure their pet theory is not just the truth, but the whole truth. And here, it clearly isn’t. There is more than one kind of humor.
