December 14, 2014
Let’s try to put a little more rigor into the question of “do all animals jump the same height”, as discussed in this previous post. We saw what appeared to be the same results being produced at different scales under certain assumptions… let’s check if it’s really a mathematical equality.
June 19, 2014
So Apple is regretting the corner they painted themselves into by having their core development language be Objective-C. This language is a horrid mashup made half of Smalltalk and half of traditional unreconstructed C. Compared to C++, the modern half is more modern, but the primitive half is more primitive. Steve Jobs used it for NeXT during his time away from Apple, and brought it back with him. But what looked cool and exciting in 1986 is looking awfully outdated today.
The trend in the industry is clearly moving away from these half-and-half languages, toward stuff that doesn’t inherit primitive baggage from the previous century. Microsoft has had great success by stripping all the old C-isms out of C++ to make C#, and Java — the oldest and crudest of this new generation of programming languages — may still be the world’s most widely used language, even though most people probably now see it as something that’s had its day and is not the place to invest future effort.
Now Apple has announced a nu-school language of their own, to replace Objectionable-C. They’re calling it Swift. It’s even more hep and now and with-it than C#.
There’s just one problem: there’s already another computer language using the name. It’s a scripting language for parallel computing. Its purpose is to make it easy to spread work over many computers at once. And this, to me, is far more interesting than Apple’s new me-too language. (Or any of the other new contenders coming up, like Google’s Go or the Mozilla Foundation’s Rust.)
See, massive parallelism is where the future of computing lies. If you haven’t noticed, desktop CPUs aren’t improving by leaps and bounds anymore like they used to. Speeds and capacities are showing a much flatter growth curve than they did five years ago. You can’t keep making the same old CPUs faster and smaller… you run into physical limits.
And this means that if we want tomorrow’s computers to be capable of feats qualitatively beyond what today’s can do — stuff like understanding natural language, or running a realistic VR simulation, or making robots capable of general-purpose labor — the only way to get there is through massive parallelism. I think that in a decade or two, we’ll mainly compare computer performance specs not with gigahertz or teraflops, but with kilocores or megacores. That is, by the degree of parallelism.
One problem is that the vast majority of programming is still done in a serial, single-tasking form. Most programmers have little idea of how to organize computing tasks in parallel rather than in series.
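To make that serial habit concrete, here’s a minimal sketch in Python (the work function and the inputs are invented for illustration): the same independent tasks, written first in the familiar one-at-a-time style, then spread across CPU cores.

```python
# Contrast between serial and parallel organization of independent tasks.
# The "work" function is a hypothetical stand-in for an expensive computation.
from concurrent.futures import ProcessPoolExecutor

def work(n):
    # Stand-in for an expensive, independent computation.
    return sum(i * i for i in range(n))

inputs = [10_000, 20_000, 30_000, 40_000]

# The habitual serial form: one task at a time, in order.
serial_results = [work(n) for n in inputs]

# The parallel form: the same tasks distributed across processor cores.
# This only works because the tasks share no state with each other.
if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        parallel_results = list(pool.map(work, inputs))
    assert parallel_results == serial_results
```

The structural point is that the tasks must be independent before they can be parallelized at all — which is exactly the way of thinking most programmers haven’t been trained in.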
There’s very little teaching and training being directed toward unlearning that traditional approach, which soon is going to be far too limiting. Promulgating a new language built around the idea — especially one that makes it as simple and easy as possible — strikes me as a very positive and helpful step to take. I’m really disappointed that Apple has chosen to dump on that helpful effort by trying to steal its name.
May 13, 2014
The cosmological inflation theory always sounded weird to me. I’ve been reading a bit about it, trying to get my head around what they’re claiming. And I’m unconvinced that it’s a valid theory, even though it’s currently winning at prediction.
The classical big bang theory certainly has problems that need addressing. Now, there’s no doubt that there was a bang which was big. Everything we see in the universe is flying apart from everything else, and behind everything we can see is a thermal glow which, if you account for redshift, appears to come from hot gas just at the point where it cools enough to be transparent. (This is around 3000 Kelvin, about as hot as a halogen lamp filament.) So there’s no way around the conclusion that, thirteen point something billion years ago, the entire observable universe was packed into a much smaller volume which was dense and hot, and that it exploded out at terrific speed. That much is clear.
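The redshift bookkeeping behind that 3000 Kelvin figure is simple arithmetic, shown below as a sketch assuming the standard measured values: about 2.7 K for the background glow today, and a redshift of roughly 1100 for the moment the gas became transparent.

```python
# Back-of-envelope check: the cosmic background glow we observe today,
# corrected for cosmological redshift, gives the temperature of the gas
# that emitted it.
T_observed = 2.725        # K, measured temperature of the background glow today
z_last_scattering = 1100  # approximate redshift of the surface of last scattering

# Radiation temperature scales with (1 + z), so the emitting gas was at:
T_emitted = T_observed * (1 + z_last_scattering)
print(round(T_emitted))   # roughly 3000 K
```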
The problems arise when you try to extrapolate what came before the ball of dense hot gas, which clearly was already expanding. The math says that it must have all expanded from a volume that was much smaller and hotter still. In fact, the classical mathematical solution to the big bang insists that the initial explosion must have taken place in a region that wasn’t just tiny, or even infinitesimal, but in no volume at all: an absolute single point where density and temperature equalled infinity.
This answer is nonsensical. Scientists are now rightly rejecting it, as many of them also reject the notion that the center of a black hole must be a singularity of zero size. Clearly both are an oversimplification.
The trouble is, if you postulate anything else, the consequences run up against observed data. The key point of observation is that the universe, as far back as it’s possible to see, has an essentially uniform density and temperature in all directions. It appears very much as if the volume of hot gas which existed at the earliest moment we can see was in a state of thermal equilibrium with itself, as if all energy differences had been allowed to settle down and blend themselves together at a smooth common level of heat.
But such a blending could not have happened. There was no time for it to happen in. And worse, the uniformity includes regions which could not have interacted with each other, because light is only now managing to cross from one to the other, or hasn’t even done so yet. If we look at opposite sides of our sky, we see as far as light has travelled either way in all of history, which means the total distance from one side to the other has only been crossed halfway. Since no influence can move faster than light, the two opposite sides cannot possibly have exchanged any energy with each other. It’s true that they may once have only been a foot apart, but they moved away from each other so close to lightspeed that the light from one side still hasn’t reached the debris of the other. This means they cannot have exchanged any form of energy.
The smallest random quantum fluctuations back then should loom large today as differences in the cosmic background temperature, and in the density of galaxies. The differences we observe are too small to account for without some means of smoothing them away. It can’t have been ordinary thermal equilibrium. What could it have been?
Similar questions apply on more esoteric levels too, like why the curvature of space appears to be so near to the ideal value that makes it absolutely flat. It’s not that it’s close today that’s the problem, since our measurement of it is not all that precise… it’s that any small departure would increase over time, so if it’s close now, then in the distant past it must have been ultra-close with improbable precision.
The inflation theory is an attempt to answer these questions. It postulates that some unknown repulsive force caused space itself, and the matter in it, to undergo some kind of self-generating expansion which kept the universe hot and dense while it grew at an immense rate. Then, at some point, it ran out of gas and the universe started coasting outward in a conventional way, as we now observe.
The math they postulate for this process would have the effect of ironing out the irregularities that preceded it, making everything steadily smoother and flatter as long as it continued. And it allows for a time before the inflation started, in which parts of the universe that are now inseparably distant could have achieved energy equilibrium. It would only have required a brief instant of delay between the initial creation, when all the matter may have been trapped by gravity in a very small initial volume, and the commencement (by entirely hypothetical means) of inflation.
It might even mean that the initial appearance of the pre-inflated universe could be possible as a quantum fluctuation in vacuum, because the amount of energy that needs to appear from nothing might be comparatively tiny. In these versions, the universe’s gravitational field constitutes a store of negative energy, and the total net mass of the cosmos is zero.
But if you run it backwards to see what initial conditions could have worked out that way, many say that it falls victim to the same problems as a conventional big bang. Whatever state preceded it has to have been already uniform to an utterly unnatural degree. Some say that it actually makes the problem worse.
But aside from that, the whole theory just seems like the ultimate in ad-hockery, a contrivance of arbitrary rules and conditions based on imaginary physics, tuned to “predict” observed results. And it makes some claims that are hard to credit, like that the inflationary expansion was in some sense faster than light. Apparently this is a mathematically allowed solution to the equations of general relativity.
We already knew that we can’t account for the ultimate question of why there is something and not nothing. Even a religious hypothesis doesn’t have an answer for that. But even leaving that aside, it’s looking like we’ve really got no answers as to what physical conditions must have existed in the early part of the big bang — at a point where matter and energy had fully come into being and started obeying the physical laws of our cosmos, but before what can be observed.
I’d say both the conventional big bang theory and the inflation theory must be missing something essential. There’s something big going on there which we’re totally failing to see yet. Probably something that will look blatant and obvious in hindsight someday. Maybe brane theory, or something equally far out, can provide a missing piece if it develops enough.
The inflation theory may be half true. I’m sure some parts are valid. Maybe even most parts. The part where energy and gravity cancel each other out and can therefore be created together in arbitrarily large quantities, for instance, sounds pretty attractive. But I think we’re probably still missing some key piece of context in which these parts can make a good overall theory.
It’s clear that the inflation hypothesis in its current form is incomplete… I wouldn’t be surprised if whatever comes along to complete the picture ends up discarding a large part of it in the process, and the hypothetical inflation stage seems like a prime candidate for something that might turn out to not be needed anymore if we only knew what was missing.
April 1, 2014
I recently ran across a new psychological theory which purports to pin down exactly what it is that makes something funny. It’s called the Benign Violation theory, and it basically says that for something to be funny, it has to constitute some form of violation — in either the sense of breaking the rules, or the sense of an assault — in a way that’s somehow harmless. The inventors of this theory claim it explains everything from puns to tickling.
What are the past theories it’s contending against, and how well do they do at predicting what is funny? The other contenders essentially boil down to two ideas: the idea that humor is based on seeing someone else encounter misfortune (the “superiority theory”), or the idea that humor is based on noticing and resolving incongruity. The latter has many offshoots and variants, but they are all based on the same core concept. (There’s also a sort of Freudian theory that says laughter is about release of tension, but as far as I can see this lacks any predictive utility, so I’m going to ignore it.)
Let’s dispose first of the superiority theory. Here’s a moment which made me loudly guffaw with no one else around: I was driving down California’s magnificent Pacific Coast Highway, one of the most scenic roads anywhere on Earth, in a state of utter joy, and I encountered a sign that said “Begin Scenic Route”. Incongruity theory works fine for illustrating the humor in this, but superiority theory utterly fails.
Superiority theory also totally fails to show why a pun can be funny.
Let’s list some of the kinds of laugh-inducing experiences which different kinds of theory will need to stretch to explain:
- dumb puns and knock-knock jokes
- seeing an athlete get hit in the nads
- someone doing an accurate impression of a famous person’s mannerisms
- highly exaggerated expressions, gestures, and body language
- watching cute kittens clumsily bumble around
- watching squirrels display amazing agility
- someone speaking poorly with misused words
- someone speaking cleverly with fancy vocabulary
Not a lot in common across that list.
Incongruity theory can cover a lot of ground. It explains why, if you’re dining outdoors and bird shit suddenly lands on the food, it’s funnier on a birthday cake than on a salad. (Yes, I know this from experience.) It can stretch to cover both extremes of exceptionally stupid and exceptionally smart vocabulary. It explains why it’s funny to see someone caught by his own practical joke. But it has to work a lot harder to explain the humor in a successful practical joke, or in a pie to the face, unless the recipient is unusually dignified. Both of those are cases where the benign violation theory seems to hit it dead center.
The benign violation theory, in turn, has to stretch for some cases. You really have to work at finding secondary meanings of the word “violation” to explain the humor of kittens, or why an insult is better if it’s cleverly rhymed and flowed in a rap battle. It looks to me like once you stretch the word “violation” far enough, it pretty much gets reduced to being a synonym for “incongruity”, as the only thing being violated is our expectation of what should normally happen — that is, our sense of congruity.
One other approach is the suggestion I read which speculates on an evolutionary purpose for humor, besides being a way to demonstrate a sharp mind for sexual selection purposes: it helps us spot bad thinking. We laugh at ignorance and misperceptions and wrong logic. Humor rewards us for being alert to cognitive errors. This has real survival value. And it’s a point in favor of incongruity theory, which matches this supposition pretty well.
But… speaking of animals, consider this youtube clip of a shorebird feeding in mud, which I shot a few months ago. In the middle of the sequence, a duck walks through the frame. Some viewers crack up every time they see the duck. Why is that duck funny? It certainly isn’t because we’ve spotted ignorance or invalid logic.
Probably the answer has something to do with misattributing human qualities to animals. Maybe some part of our minds treats the featured willet as the star of the scene, and the duck as a bumbler who doesn’t know to stay out of the shot. This is emphasized by how he pauses directly in front of the willet. That would make sense both as incongruity and as violation, if we were filming people. And maybe squirrel antics are funny because it’s incongruous, or a violation, to imagine a human carrying them out. Maybe.
So it’s looking to me like both the incongruity rule and the benign violation rule cover about two thirds of cases quite well, but each have to stretch and struggle to cover some parts — and often those are the parts that are best covered by the other. Something like, say, tipping over an occupied outhouse (which is funny if you’re mean enough to have a very broad notion of “benign”) is pure violation with no real incongruity, whereas a comic actor’s stagey mugging is hard to construe as any sort of violation… though I suppose you could argue that he’s “violating” his own dignity.
To examine the last case in a bit more detail, let’s say our comic actor is playing a scientist or inventor. He gets an idea and loudly says “A-ha!”. Why is it funnier if, while saying so, he points his right index finger straight up? I really can’t find a way to apply the word “violation” to that. What he’s doing is making a callback to a cultural trope or stereotype in which that gesture was considered appropriate for such a declaration. It’s incongruous, one might say, because our experience is that people saying “aha” in real life don’t do this.
Here’s a tricky one. Let’s say our comic actor is playing an upper class toff. Sometimes he gets a laugh by using a word or phrase that’s just especially redolent of the stereotype he’s playing — when he’s been sounding snooty all along, but then comes up with something that seems even snootier than before. It’s hard to see how this is either an incongruity or a violation, because instead of applying a twist to our expectations, he’s reinforcing them. The humor seems to come from a reaction that says, we thought we had the measure of how much this character differs from you and me, but now it turns out he’s even more unlike us than we thought. The same can happen with any other character type we have strong stereotypical expectations of, like a southern redneck or a drag queen or a hippie or a New Jersey mafioso.
You can get humor out of any situation where you deal with a person whose culture is strongly different from ours. The theory that humor is there to detect bad thinking applies here, I think: when we look for erroneous logic and misperception and ignorance, the differences between cultures can easily register as false positives. The person seems locally ignorant and foolish, regardless of how clever and knowledgeable they really are. And I suppose that’s an incongruity relative to how we expect our friends and neighbors to act.
So when I first heard about the benign violation theory it seemed pretty persuasive, but after examination I’m now leaning toward the conclusion that it covers more ground than incongruity only if you try to stretch the concept of “violation” to include incongruity as a subset.
I’d just like to grumble about how people get excited about a theory and convince themselves that it explains everything. A lot of these humor theorists are guilty of that — they’re sure their pet theory is not just the truth, but the whole truth. And here, it clearly isn’t. There is more than one kind of humor.
September 21, 2013
I was talking earlier about Windows now having a somewhat bleak future despite still being firmly dominant today, and now I have to recognize something else that’s gotten itself into a similar position: the Java language. Over much of the last decade it’s probably been the most widely used programming language… though it’s hard to be sure, and it certainly was never in any position of majority dominance. But now nobody sees any kind of growth in its future, and other languages like C# are making it look outdated. Combine that with the well-publicized security troubles which, among other things, nailed shut the coffin for applets in the browser (the one place where the average computer user came into direct contact with the Java brand), and nobody’s seeing it as the right horse to bet on anymore.
Which is a shame, because it’s still one of the most widely supported and most available languages, and it’s probably still the best teaching language in the C-derived family. It’s going to have to be fairly widely used in schools, even if it drops slowly out of use in industry. There isn’t a suitable replacement for that role yet, as far as I can see.
Even as it gets into a state where people scoff at it for real work, it might still be unavoidable for a long time as something you have to know.
. . . . .
Another sad observation of decline: I think MS Office is now better at supporting OpenOffice documents than OpenOffice.org is at supporting MS Office documents.
September 10, 2013
I thought strict doctypes, like XHTML Strict, were just for eliminating all the deprecated HTML tags that were used for stuff that now uses CSS, such as <font> and <center>. But there are a couple of gotchas. For instance, strict [X]HTML does not allow you to put a target= attribute on a link. Apparently this is considered a matter of presentation and styling, though only cutting-edge implementations of CSS support setting it in a stylesheet. But the one that really makes me scratch my head is that <blockquote> is only allowed to contain block-level elements. What? The obvious semantics of a block quote are that it should contain text. But no, now it’s only supposed to contain paragraphs and divs, not act as a paragraph in itself.
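To make the blockquote gotcha concrete, here’s the difference (the quoted text is invented for illustration) — a Strict validator rejects the first form and accepts the second:

```html
<!-- Rejected by XHTML 1.0 Strict: inline text directly inside blockquote -->
<blockquote>Four score and seven years ago…</blockquote>

<!-- Accepted: the same text wrapped in a block-level element -->
<blockquote><p>Four score and seven years ago…</p></blockquote>
```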
(I’m posting this partly just as a sort of note to myself.)
I do try to use modern standards, but my website has content dating back as far as 1996, so no way am I going to clean out all the old <font> tags.
Maybe I should at least validate capejeer.com, since the content there is all fairly new, and generated from a single master page that I can easily modernize.
[update] I did: capejeer.com is now fully XHTML Strict compliant, though paulkienitz.net still has tons of content that’s stuck at a Netscape 4 level of markup, using no CSS at all. The front landing page is the only part that uses any modern browser technology, and even that dates mainly from about 2005.
[update 2] I made a spreadsheet of all the HTML pages on paulkienitz.net assessing their state of modernity in terms of styling. The current status is:
- root level: almost everything is archaic except the index page and the one page that draws the most search traffic.
- the old film-era photo gallery folder (which, frankly, has been an embarrassment for some time, and really needs updating, or even just some severe culling) is also completely archaic.
- the Enron & Friends material is 90% bad, with a light sprinkling of modern style tweaks, but the current events movie reviews in the same folder are 90% good.
- the B movie folder is good, as is the boids folder, plus bits of the Amiga folder and the Reagan folder.
- two of the biggest folders are good, but they’re both unfinished projects which are not yet exposed to the public.
The question is, which of these archaic areas is even worth updating? The answer would be, almost none. They’re all dated, essentially of historical interest only, except for the gallery, where markup is the least of its problems.
April 3, 2013
I think I’ve about had it with Wordpiss. Their comment approval process is fine for rejecting dozens of spam comments, but it’s terrible for approving a valid comment where you have to actually READ it before you’re sure it’s good. The only way to read the whole comment to the end, as far as I could see, was to edit it! I could not find any option for viewing the comment as it would appear if approved. And then, when I try to follow any links to the post it’s a comment to, they’re links for editing it, not reading it. This is stupid.
I have a sneaking feeling that Blogger is much easier to work with. But I don’t want to move yet more of my life on to Google’s servers. I think they’ve now officially crossed the line into being the new Microsoft — the big dominant choice that anyone who doesn’t like monopolies ought to look for alternatives to. Since Windows 8 came out, Microsoft might actually now qualify as an underdog. If not now, then they will soon.
IBM has been an underdog for a while now. If they achieve the ability to answer natural-language questions before Google does, as they well might, I’ll be rooting for them, even though they were once the bad guy. But I won’t go so far as to root for Microsoft… the memories of their ways when they were on top are a bit too fresh.
As for blogging platforms… what I really miss is Livejournal. Why are today’s social networking sites so good for connecting people but so terrible for longer-form writing? LJ was the one and only time that I saw thoughtful blogging combined with strong social networking in a way where both were able to work to their fullest.
February 21, 2013
It was bugging me that the text along the right hand side of this blog would be rendered on top of my lovely pictures. So I experimented, and it turns out that WordPress is perfectly happy to let you add this little adjustment to your
January 23, 2013
The term “Artificial Intelligence” means a computer or robot programmed to be smart like a person. It’s a pipe dream so far, but a lot of people think it makes sense that it can happen eventually, and the idea is a staple of science fiction, in which it’s often taken for granted that a hundred years from now, our machines will be as smart as a lot of us are, and might even be considered citizens with the same rights as people.
Is this notion realistic? Is it possible? Is it likely? If it happens, what form will it take? I think I may be able to help clarify these questions a bit.
December 2, 2012
Are we finally seeing the first signs of the end of Windows? Can the vast decaying empire of the Windows desktop finally be about to fall?