Supersonic Man

December 19, 2015

A historical timeline of the word “nerd”

Filed under: fun,Hobbyism and Nerdry — Supersonic Man @ 4:22 pm

An expanded version of this post has been moved to my website.  This is just an early draft — the official version is quite a bit longer, and has pictures.

Note that the “previous post” button below this post doesn’t work.  This is due to an outright bug in WordPress — maybe more than one.  I found this post to be difficult to compose, due to both the website and their mobile app misbehaving.  That’s one reason new material rarely gets written in this blog, and in the cases where it does, why it doesn’t remain here.

December 13, 2015

A missing piece in the puzzle of misogyny?

Filed under: Hobbyism and Nerdry,Rantation and Politicizing,thoughtful handwaving — Supersonic Man @ 10:41 am

A year or two ago there was a lot of discussion about misogyny among video gamers, due to a stink raised over something called “gamergate”.  This turned out to be only the most visible of a large number of cases of a certain core part of “gamer culture” being aggressively hostile to women — a syndrome that also seems to rub off on some related cultural areas, such as comic book fandom.  I recently ran into some people discussing this, and one of them mocked these gamers as being afraid to catch cooties.  And with a little distance from the original furor, that made me realize something.  It’s a bit speculative, but I think it’s something that will eventually need society’s attention.

But first, a broader question.  Where does evil come from?  When dealing with random individuals who commit evil acts on their own, there are many answers and many schools of belief which advocate one answer over another.  But when people commit evil in groups, things become much clearer.  When you look at history’s big acts of mass evil — the Nazi holocaust, the genocides in Armenia and Rwanda, atrocities down the ages from medieval witch burning to American slavery to the latest horrors perpetrated by ISIS or Boko Haram — it’s clear how evil spreads.  It isn’t about baleful supernatural influences, it isn’t about people being born bad, and it isn’t about early childhood trauma: evil of this kind is cultural.  Quite often, it isn’t even about the culture people were raised with: in a lot of the more virulent phases of mass evil, the whole thing burns itself out in less than a generation.  The people who commit these horrors were talked into it as adults.  They were persuaded to be evil.  Of course, some people just need the slightest encouragement, and one could just dismiss them as rotten of character, but plenty more who aren’t that bad will also join in if subjected to long-term campaigns of propaganda.  As human beings, we are extremely prone to allowing our moral compasses to be calibrated by what the society around us defines as normal.

So culture matters.  Culture has a huge influence on whether people behave positively or negatively.  Not that this is any excuse for the individuals involved — once you’re an adult and have developed independent thought, you bear complete moral responsibility for whether or not you choose to embrace the negative side of whatever propaganda you’re being fed.  Even if you grow up acculturated to hate from infancy, to earn any forgiveness for acting on it as an adult is still a tough job and far from automatic.

So yeah, the fact that abusive misogyny is so widespread in gaming is a matter of culture.  Mocking insults passed back and forth between people who are pretending to kill each other are to be expected, of course, but that doesn’t account for the special virulence aimed at women, or for how many women have found gaming much more bearable if they hide behind a male persona.  (There’s plenty of anti-gay denigration too, of course.)  There’s something way beyond friendly ribbing going on when women connected to the game industry, such as Anita Sarkeesian and Brianna Wu, receive repeated death threats when they speak out.  Clearly the people involved in this hateful behavior have created a culture in which acting this way is encouraged and normalized.  Which, again, is no excuse for the individual gamer who allows themselves to be persuaded by it; nobody guilty of, say, mailing some woman violent threats accompanied by a photochopped image of herself being raped by Duke Nukem deserves any level of sympathy or support more comforting than a knuckle sandwich.  But maybe there are underlying aspects of the situation where a more compassionate view is possible.

It doesn’t help that the game industry panders to this culture as much as it does, for instance by sexualizing such a large percentage of female in-game characters. As in the porn business, the tastes that rule are not those of the majority of the audience, but of the hardcore few who spend the most money. Pandering is profitable.

Was this culture passed down from old-school traditional sexism?  It doesn’t seem to have been.  Old adages about women belonging in the kitchen or the man being the head of the house are largely absent, except as taunts.  It used to be easy to conflate misogyny with patriarchy, but nowadays patriarchal ideology is nearly dead, while misogyny is very much alive.  (It may even be on the rise — I’m not sure, but it didn’t seem this bad a decade ago.)  As a result, those misogynists who feel the need for an ideological framework are nowadays trying to cobble together a new one out of odd bits of sociobiology and pickup-artist jargon.  The result, commonly known as “red pill” philosophy, is remarkable more for its extreme cynicism than for its coherence.  It makes Biblical patriarchy seem humane and uplifting by comparison.

When seeing hate like this, we might debate or theorize all day about where it comes from, without reaching any consensus. But there’s a related phenomenon going on which might get us a little closer to seeing what’s what, which is more visible in the comic book arena: the protests against the menace of “fake geek girls”. A number of comic book (and related) fans, and even some creators, have taken to denigrating women who hang around their scene, particularly those who like to dress up a la Adrianne Curry, as being “fake geeks”. Similar terms come up in gaming. The way they describe this supposed fakery makes the girls sound like dangerous predators out to rob the real fans of a precious resource… namely, their attention. Yeah, the sinister plot of pretending to be a comix fan is all done for male attention! And it must be stopped before more true geeks are victimized.

What’s up with that? Normally, if persons of hotness start being fascinated by your interests, most of us would consider that to be good luck, not a threat. What is being threatened? Clearly some of it could be that attractiveness is scary, as this particular line of criticism often seems to skip past the women who are being plain and drab. (Some women have reported that in gamer circles, you have to “dress down” to be taken seriously in face-to-face conversations.)

But why the focus on authenticity? Why the implicit assertion that these cosplayers and so forth are not just doing fandom wrong in some way, but are fundamentally unqualified to participate at all? Why the presumption that by default, they just don’t belong?

To me this reaction really does seem consistent with a theory that the true threat is cooties.

What does “cooties” mean?  Originally the term referred to skin parasites, particularly Pediculus humanus capitis, the head louse.  But now, among preteen boys, it commonly refers to a metaphorical state of contamination arising from contact with girls (or, sometimes, other persons or groups that kids want to shun).  The desire of six- to twelve-year-old boys to keep away from girls, and vice versa, seems to be remarkably consistent and durable across cultures.  It seems to be not a cultural artifact at all, but a genuine instinct.

The theory, as far as I know, is that normal childhood development is supposed to have a swing toward attraction to the opposite sex at puberty, coming after an anti-attraction in the preceding period which helps prevent things from getting started earlier than they should. But of course “normal” is just a short way of saying that exceptions are everywhere. The biological systems involved in these changes operate very loosely. Look how often the “attraction to the opposite sex” part ends up working out differently — and that’s the bit that’s going to have the least freedom to vary, as it’s essential for propagation. The other parts that aren’t as critical can vary even more easily. For instance, lots of people report having fallen into crushes on the opposite sex at very young ages where it isn’t supposed to happen.

As an engineer type whose job skills are about getting complicated systems to work dependably, my instincts are fairly good at noticing which parts of a plan are most vulnerable to going off the rails.  And in this theoretical plan for how puberty is supposed to work, the biggest vulnerability I see is in this zigzag reversal of attraction.  It could go off course in about four different ways.  And if it can, it probably does, more often than we realize.

If we are all programmed with an inborn instinct to avoid the opposite sex, what would happen if it sometimes fails to be temporary — if sometimes it doesn’t deactivate when it’s supposed to?  We’d end up with an invisible minority of people who might, in many cases, be sexually attracted to the opposite sex, yet nevertheless don’t enjoy their presence or company and prefer to hang out with their own gender.  And, well… when you put it like that, you of course recognize that such people are commonplace, and always have been.  To switch the focus briefly from nerds to jocks, we’ve seen plenty of them deride those whom they deem overly woman-friendly as “pussyfied” or “whipped”, and heard phrases such as “bros before hos”.  And women expressing similar sentiments in terms of sisterhood and the like are also well known.  The only new thought here is the notion that maybe this has a common biological basis, rather than just being due to some personal psychological quirk or to being turned off by bad experiences.  For women, the bad-experience factors may make it rather impossible to separate out any influence of temperament, but for men there at least superficially appear to be plenty of examples that can’t be ascribed to negative experience.

I think it’s quite possible that we have an unrecognized minority of persons who have a built-in aversion to mixed company, over which they have no choice — persons who can never quite feel comfortable being around the opposite sex all the time, however they try.

It used to be that society accommodated such a temperament pretty well.  Almost every culture has been well supplied — sometimes to excess — with areas of daily life in which men and women are kept apart, in separate spaces and activities.  That is, until recently.  Our modern culture, in the last couple of generations, has become one where such separation is no longer a normal part of life.  And we see this as progress, and in most ways that view is undoubtedly correct.  For most of us, it’s a clear improvement.

But what if the remaining areas of single-sex grouping are not just remnants of patriarchy, but persist in order to fulfill an unmet need? What if there’s a significant subset of the populace which, unlike me and probably unlike you, would be genuinely healthier and happier if they could spend more time in the company of only their own gender? Such opportunities are now scarce, and continuing to shrink. Women-only groups are mostly still acceptable enough to create privately, as long as they stay small, but men-only groups are socially difficult because of the appearance (and sometimes more than just the appearance) of patriarchal privilege.

Even a couple of generations ago, it wasn’t that hard to find ways to keep male company. From wealthy private clubs to hunting and fishing trips, from joining the military to careers involving manual skills and hazardous conditions, those who wanted such a thing could have it. But now, where can they go?

Such men might be driven to try to create informal all-male spaces wherever they find an attractive opportunity to do so, and to instinctively (if perhaps inarticulately) try to defend such spaces, and to resent any and all non-male intrusions there.

I can see why gaming — at least, the subset of “serious” gaming that’s focused on gritty, high-realism combat action — would seem like a natural spot for this. There are now just as many women as men participating in gaming overall, but that’s fairly recent, and probably not true in the so-called hardcore games.

Maybe some areas of life that can’t be segregated for men only, like gaming, would have a lot less trouble if we were to reopen some other areas that can be.  Maybe someday, when sexism is over, there’ll be a recognition that some people still need single-sex spaces.  Can we bring some of that back today, and if so, would it help?  Maybe, maybe not.  Opening up something like that might help “drain the swamp”, easing the pressure that leads people to turn hostile.  Maybe it would allow some people who don’t want cooties to calm down and not act like jerks about it.  But on the other hand, in today’s culture it might just create a new breeding ground for even more virulent sexism.  I don’t know what would happen.  I just know that if there really is an unmet instinctive need here, then trying to educate it away will never succeed.

March 12, 2015

devicePixelRatio

Filed under: Hobbyism and Nerdry — Supersonic Man @ 8:16 pm

Well I guess there’s one category of post I can still put here: nerd notes that none of my friends care about.  These may end up being little more than notes to myself.

I’ve been looking at the JavaScript property window.devicePixelRatio.  It’s a way to compensate for the fact that browsers more and more often separate the definition of a CSS “px” from a physical device pixel.  It started with Apple’s “retina” iPhones, which had twice the pixel density of their predecessors but weren’t supposed to display web pages at half size because of it.  So they set up the browser so that one “px” would be two physical pixels.  As phone displays got denser, a bunch of other manufacturers followed suit: on my HTC One, for instance, it’s three physical pixels per “px”.  And this value is reflected in the JavaScript API as the property window.devicePixelRatio.
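
Reading it is trivial; a one-line sketch, where the fallback of 1 covers old browsers that don’t define the property at all:

    var ratio = window.devicePixelRatio || 1;
    console.log(ratio);  // 2 on a retina iPhone, 3 on my HTC One, 1 on a typical desktop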

So far, no problem — everybody gets a display that works pretty well on their mobile device regardless of its density.  Where things get awkward is when this comes back to the desktop browser.  Chrome and Firefox now support devicePixelRatio in Javascript.  But unlike the mobile browsers, the value is changeable: it varies depending on the zoom selected by the user, whereas on a phone, zooming means just changing your area of view over a layout that mostly remains fixed.  In Chrome, it starts as 1.0, but if you hit ctrl-plus, it becomes 1.1, then 1.25, then 1.5, then 1.75, and so on.  In Firefox, it goes from 1.0 to 1.25 to 1.5 to 1.76470589 (don’t ask me to explain that last one).
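
You can watch this happen, since desktop browsers fire a resize event when the zoom level changes (a quick sketch):

    // Log the ratio as the user zooms with ctrl-plus and ctrl-minus.
    window.addEventListener('resize', function () {
      console.log('devicePixelRatio is now ' + window.devicePixelRatio);
    });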

What bugs me is that Firefox does not start out at 1.0.  For the last year or two, it has taken the initial ratio from the host operating system’s monitor pixel density setting, which in Windows is controlled by the desktop text size setting.  When you pick “smaller”, the ratio is 1.0; for “medium” it’s 1.25; and for “larger” it’s 1.5… and like a lot of people, I use “medium”.  So Firefox starts out at 125% zoom on every website, unless you install an add-on to change that behavior.

And mostly, this larger zoom is a good thing.  It makes the website look about the size it was intended to look.  But there’s one situation where this zoom ratio is bad, and that’s when viewing images at full size.  See, the browser has to resize the image in much the same way you would in a program like Photoshop, and if you’ve messed around with resizing in such programs, you’ve likely noticed that the results are worst when the change in size is small.  Cut a picture to half size or blow it up triple, and the results are no worse than you’d expect, but magnify or shrink it just a little and you get a ton of extra blurriness.  (Or pixelated graininess, depending on the type of resampling you select.)  This is because changing sizes by a small ratio, such as 1.25, combines the imprecision of both pixel grids.  Rendering an image onto any pixel grid involves a roundoff error in where a given picture detail lands, and resizing adds together the roundoff errors of both scales.  When you change by a large ratio, the amount of detail in the picture is nearly identical to what you could see at the smaller of the two sizes, but with a small fractional ratio, the result can be barely half as sharp as it would be if it had been created originally at either of the two sizes.
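
A bit of arithmetic makes the 1.25 case concrete: each destination pixel samples the source image at x / 1.25, so the sample points cycle through every fractional offset, and four out of every five destination pixels end up as a blend of two source pixels:

    // Fractional sample offsets for the first few destination pixels at 1.25x:
    for (var x = 0; x < 5; x++) {
      console.log(x, (x / 1.25) % 1);  // 0, 0.8, 0.6, 0.4, 0.2 (give or take float error)
    }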

So, when my website hosts large images and I don’t want them fuzzified, I’d like to find a way to fool browsers into showing the image at exactly a 1.0 pixel ratio when they’d otherwise use an awkward ratio like 1.25.  If the ratio gets to 1.5 or higher, fine, resize it; but if you’re trying to resize it just a little, I’d much rather have 1.0.  That’s why I’m investigating window.devicePixelRatio: so I can make a JavaScript hack that will change the sizing of images so that, should I so desire, they look their best and sharpest rather than looking correctly proportioned to the text and stuff around them.  Sometimes the proportion doesn’t matter.
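
Here’s a minimal sketch of the hack I have in mind.  It isn’t finished code, and the class name “sharp” is just a hypothetical marker for images that should get this treatment:

    // Resize marked images so one image pixel maps to exactly one device
    // pixel, but only when the ratio is in the awkward small-fraction zone.
    function snapImagesToDevicePixels() {
      var ratio = window.devicePixelRatio || 1;
      if (ratio === 1 || ratio >= 1.5) return;  // already 1:1, or big enough to resize cleanly
      var images = document.querySelectorAll('img.sharp');
      for (var i = 0; i < images.length; i++) {
        var img = images[i];
        if (!img.naturalWidth) continue;  // skip images that haven't loaded yet
        // CSS px times ratio = device px, so dividing by the ratio makes the
        // rendered size equal the image's native size in device pixels.
        img.style.width  = (img.naturalWidth  / ratio) + 'px';
        img.style.height = (img.naturalHeight / ratio) + 'px';
      }
    }
    window.addEventListener('load', snapImagesToDevicePixels);

Running it once at load, rather than reacting to later changes, is deliberate; see the note about zooming below.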

Unfortunately, IE 10 also looks at the Windows desktop size setting, and starts out with an initial zoom of 125%… but it does not support the devicePixelRatio property.  So for now, if I make such a fix, it’ll basically be for Firefox only.
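
That said, IE has long exposed the same information under different names, screen.deviceXDPI and screen.logicalXDPI, so a cross-browser read might look like this (an untested sketch):

    function getPixelRatio() {
      if (window.devicePixelRatio) return window.devicePixelRatio;
      if (screen.deviceXDPI && screen.logicalXDPI) {
        return screen.deviceXDPI / screen.logicalXDPI;  // IE-only fallback
      }
      return 1;  // no information available; assume 1:1
    }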

…Unless, that is, I make a separate CSS media query for each individual pixel ratio.  There probably aren’t that many to be found out in the wild: 1.1, 1.25, and 1.33 are probably about it for common cases.  But on the other hand, I don’t want these rules to snap in and out of action as the user zooms through the different ratios. JavaScript would be better because it could change the picture’s size once, and then any zooming done after that has no unexpected effect.
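
The same queries can be tested from JavaScript with matchMedia, which also demonstrates the snapping problem: a listener like this fires every time the user zooms across the boundary, which is exactly what I don’t want the image sizes doing.  A sketch, using one of the guessed-at ratios from above:

    var mq = window.matchMedia('(resolution: 1.25dppx)');
    mq.addListener(function (m) {
      console.log('the 1.25 rule is now ' + (m.matches ? 'active' : 'inactive'));
    });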

February 19, 2015

reactionless drives

Filed under: Hobbyism and Nerdry — Supersonic Man @ 8:37 am

This post has been promoted to a permanent location on my website, here.

June 19, 2014

the Swift programming language(s)

Filed under: Hobbyism and Nerdry,thoughtful handwaving — Supersonic Man @ 9:52 pm

So Apple is regretting the corner they painted themselves into by having their core development language be Objective-C.  This language is a horrid mashup made half of Smalltalk and half of traditional unreconstructed C.  Compared to C++, the modern half is more modern, but the primitive half is more primitive.  Steve Jobs used it for NeXT during his time away from Apple, and brought it back with him.  But what looked cool and exciting in 1986 is looking awfully outdated today.

The trend in the industry is clearly moving away from these half-and-half languages, toward stuff that doesn’t inherit primitive baggage from the previous century.  Microsoft has had great success by stripping all the old C-isms out of C++ to make C#, and Java — the oldest and crudest of this new generation of programming languages — may still be the world’s most widely used language, even though most people probably now see it as something that’s had its day and is not the place to invest future effort.

Now Apple has announced a nu-school language of their own, to replace Objectionable-C.  They’re calling it Swift.  It’s even more hep and now and with-it than C#. There’s just one problem: there’s already another computer language using the name.  It’s a scripting language for parallel computing.  Its purpose is to make it easy to spread work over many computers at once.  And this, to me, is far more interesting than Apple’s new me-too language.  (Or any of the other new contenders coming up, like Google’s Go or the Mozilla foundation’s Rust.)

See, massive parallelism is where the future of computing lies.  If you haven’t noticed, desktop CPUs aren’t improving by leaps and bounds anymore like they used to.  Speeds and capacities are showing a much flatter growth curve than they did five years ago.  You can’t keep making the same old CPUs faster and smaller… you run into physical limits.

And this means that if we want tomorrow’s computers to be capable of feats qualitatively beyond what today’s can do — stuff like understanding natural language, or running a realistic VR simulation, or making robots capable of general-purpose labor — the only way to get there is through massive parallelism.  I think that in a decade or two, we’ll mainly compare computer performance specs not with gigahertz or teraflops, but with kilocores or megacores.  That is, by the degree of parallelism.

One problem is that 95% of programming is still done in a single-tasking form.  Most programmers have little idea of how to really organize computing tasks in parallel rather than in series. There’s very little teaching and training being directed toward unlearning that traditional approach, which soon is going to be far too limiting.  Promulgating a new language built around the idea — especially one that makes it as simple and easy as possible — strikes me as a very positive and helpful step to take.  I’m really disappointed that Apple has chosen to dump on that helpful effort by trying to steal its name.

May 13, 2014

cosmic inflation

Filed under: Hobbyism and Nerdry,thoughtful handwaving — Supersonic Man @ 10:24 am

This post has been promoted to a permanent page on my website, here.

April 1, 2014

the theory of funniness

Filed under: Hobbyism and Nerdry,thoughtful handwaving — Supersonic Man @ 11:41 am

This post has been promoted to a permanent page on my website, here.

September 21, 2013

Java is doomed?

Filed under: Hobbyism and Nerdry — Supersonic Man @ 9:25 am

I was talking earlier about Windows now having a somewhat bleak future despite still being firmly dominant today, and now I have to recognize something else that’s gotten itself into a similar position: the Java language. Over much of the last decade it’s probably been the most widely used programming language… though it’s hard to be sure, and it certainly was never in any position of majority dominance.  But now nobody sees any kind of growth in its future, and other languages like C# are making it look outdated.  Combine that with the well-publicized security troubles which, among other things, nailed shut the coffin for applets in the browser (the one place where the average computer user came into direct contact with the Java brand), and nobody’s seeing it as the right horse to bet on anymore.

Which is a shame, because it’s still one of the most widely supported and most available languages, and it’s probably still the best teaching language in the C-derived family.  It’s going to have to be fairly widely used in schools, even if it drops slowly out of use in industry.  There isn’t a suitable replacement for that role yet, as far as I can see.

Even as it gets into a state where people scoff at it for real work, it might still be unavoidable for a long time as something you have to know.

. . . . .

Another sad observation of decline: I think MS Office is now better at supporting OpenOffice’s formats than OpenOffice.org is at supporting MS Office’s.

September 10, 2013

strict doctypes and old markup

Filed under: Hobbyism and Nerdry — Supersonic Man @ 9:49 am

I thought strict doctypes, like XHTML 1.0 Strict, were just for eliminating all the deprecated HTML tags that were used for stuff that now uses CSS, such as <font> and <center>.  But there are a couple of gotchas.  For instance, strict [X]HTML does not allow you to put a target= attribute on a link.  Apparently this is considered a matter of presentation and styling, even though as far as I can tell no shipping implementation of CSS actually lets you set it in a stylesheet.  But the one that really makes me scratch my head is that <blockquote> is only allowed to contain block-level elements.  What?  The obvious semantics of a block quote are that it should contain text.  But no, now it’s only supposed to contain paragraphs and divs, not act as a paragraph in itself — so <blockquote>quoted text</blockquote> fails validation, and you have to write <blockquote><p>quoted text</p></blockquote> instead.

(I’m posting this partly just as a sort of note to myself.)

I do try to use modern standards, but my website has content dating back as far as 1996, so no way am I going to clean out all the old <font> tags.

Maybe I should at least validate capejeer.com, since the content there is all fairly new, and generated from a single master page that I can easily modernize.

[update] I did: capejeer.com is now fully XHTML Strict compliant, though paulkienitz.net still has tons of content that’s stuck at a Netscape 4 level of markup, using no CSS at all.  The front landing page is the only part that uses any modern browser technology, and even that dates mainly from about 2005.

[update 2] I made a spreadsheet of all the HTML pages on paulkienitz.net assessing their state of modernity in terms of styling.  The current status is:

  • root level: almost everything is archaic except the index page and the one page that draws the most search traffic.
  • the old film-era photo gallery folder (which, frankly, has been an embarrassment for some time, and really needs updating, or even just some severe culling) is also completely archaic.
  • the Enron & Friends material is 90% bad, with a light sprinkling of modern style tweaks, but the current events movie reviews in the same folder are 90% good.
  • the B movie folder is good, as is the boids folder, plus bits of the Amiga folder and the Reagan folder.
  • two of the biggest folders are good, but they’re both unfinished projects which are not yet exposed to the public.

The question is, which of these archaic areas is even worth updating?  The answer would be, almost none.  They’re all dated, essentially of historical interest only, except for the gallery, where markup is the least of its problems.

April 3, 2013

is Microsoft now the underdog? (WordPress is annoying)

Filed under: Hobbyism and Nerdry,Rantation and Politicizing — Supersonic Man @ 7:15 pm

I think I’ve about had it with Wordpiss.  Their comment approval process is fine for rejecting dozens of spam comments, but it’s terrible for approving a valid comment, where you have to actually READ it before you’re sure it’s good.  The only way to read the whole comment to the end, as far as I could see, was to edit it!  I could not find any option for viewing the comment as it would appear if approved.  And then, when I try to follow any links to the post it’s a comment on, they’re links for editing it, not reading it.  This is stupid.

I have a sneaking feeling that Blogger is much easier to work with.  But I don’t want to move yet more of my life on to Google’s servers.  I think they’ve now officially crossed the line into being the new Microsoft — the big dominant choice that anyone who doesn’t like monopolies ought to look for alternatives to.  Since Windows 8 came out, Microsoft might actually now qualify as an underdog.  If not now, then they will soon.

IBM has been an underdog for a while now.  If they achieve the ability to answer natural-language questions before Google does, as they well might, I’ll be rooting for them, even though they were once the bad guy.  But I won’t go so far as to root for Microsoft… the memories of their ways when they were on top are a bit too fresh.

As for blogging platforms… what I really miss is Livejournal.  Why are today’s social networking sites so good for connecting people but so terrible for longer-form writing?  LJ was the one and only time that I saw thoughtful blogging combined with strong social networking in a way where both were able to work to their fullest.
