Supersonic Man

November 24, 2014

what is liberalism?

Filed under: Rantation and Politicizing — Supersonic Man @ 12:06 pm

I’ve talked a number of times about conservatism here and elsewhere, but rather little about liberalism.  If I’m a liberal, what does that mean?  I think I now have a clear description of what I think it means to be a liberal.  If I were to try to state what I think liberalism stands for in one sentence, giving it a sort of mission statement, I’d say this:

We the people have the right and the responsibility to manage our social institutions so that they work to our benefit.

In other words, if our political and economic systems are not serving us, we have the right to choose ones which do.  We have every right to modify, adjust, debug, or even wholly replace our laws and institutions in order to improve their outcomes.  They are ours.  We created them, and their purpose is to serve us, and not vice versa.  And when an institution is working badly, we don’t just have a right to adjust it, but a duty.  Neglecting its problems is irresponsible.

When I listen to conservative critiques of liberal ideas, it seems to me that whether they come from a perspective that’s feudal or theocratic or law-n-order or laissez-faire capitalist or even anarcho-libertarian, what they have in common is a refusal to admit this right to choose.  They may claim that it’s impossible, that it’s unworkable, or that it’s immoral, but all come down to someone with power telling you that you need to accept and embrace and surrender to the system as it is, or as they would like to make it.  They all agree that for you to change the system that they think is right and good is unacceptable — that it’s wrong and harmful for you to try.  They expect society to work by certain rules, and once those rules are established, they get treated as sacrosanct.  Even those who boast of seeking to lift the burden of law from you as much as possible follow this same pattern: they treat their no-rules metarule as morally inviolable.  And if the consequences of living within their system turn out to be harmful or limiting to you, that’s your problem, not theirs.  You should have worked harder to make the best of the hand you were dealt.  And if you ask whether the people as a whole have more opportunity than they had before, the answer is either propaganda saying “of course they do” with no data to back it up, or a lecture about how your question is the wrong one to ask.

Any system will develop problems if you leave it running long enough without adjustment.  Rules that seem balanced and fair at first start to produce uneven rewards for those who have a chance to take advantage of loopholes or artificial opportunities, or start encouraging unhelpful behaviors that weren’t intended by the drafters of the laws.  The way to handle these problems is to dynamically adjust the system as you go along.  By responding actively to issues and problems, you can keep imbalances and flaws from blowing up to catastrophic size.  Liberalism is the recognition that, besides having the right to drastically rewrite the social order if necessary, we also have a responsibility, once we have a good system, to constantly make small tweaks and adjustments to keep it running well.  Policy has to be active and responsive, not static holy writ.  In other words, we need to govern.  We do this ongoing adjustment of policy on the basis of whether the system is producing desirable outcomes, not by whether it embodies desirable moral virtues on paper.  Antigovernment ideology basically says we should let problems run their course unabated, instead of allowing ourselves to catch them while they’re small — they’re telling us that to actively fix things is not a job we can be trusted with.

One common way that a social problem can grow out of control is the development of a privileged ruling class.  In the end, what system you pick initially almost doesn’t matter: whether it’s tribal anarchy, warlordism, feudal aristocracy, theocracy, corporatism, anarcho-capitalism, socialism, or communism, they’ll all eventually produce a minority group which has large and increasing power, while the power remaining with the majority decreases.  This is because whatever allows one person to get a little bit ahead of those around him will then allow him, once he’s gained that ground, to improve his advantage further.  Any system will have some tendency to be pulled toward this outcome, no matter what principle it starts with.  The only way history has shown that this growing advantage for a few can be held back is when the people have the agency to make countermoves to check the growth of excess power and privilege, in whichever particular areas it starts to crop up.  This is why we have things like banking regulations — because they were needed in order to counteract the concentration of wealth and power into places where they no longer benefit society.  When such regulations are repealed, that undue concentration comes right back.

Antiregulatory demagogues like to warn of slippery slopes, where overregulation will produce terrible stifling results.  But when regulation is dynamic and active and based on outcomes, this becomes a non-problem: when things start to have a bad effect, they’re corrected.  I believe that application of this dynamic approach is what’s responsible for every historical success at producing free and prosperous societies with widespread opportunity.  There has never been another way to do it.  Of all the ways that people have tried to produce a prosperous and thriving society, only the ones that fit this principle have worked well.  Nobody can plan a social order that will, a priori, support flat widespread opportunity and a large middle class (or whatever other definition of a successful society you wish to use) over indefinite time.  You can only keep that going by reacting to imbalances that undercut your desired outcome.

I think this definition clarifies some things that might otherwise be confusing, such as the paradox of Soviet communism: the Russian Revolution was clearly liberal, yet the Soviet Union which arose from it was not at all liberal, even though they were both based on the same values and rhetoric.  This definition clarifies that the ideology is not what matters.  What matters is that one allowed the people to make changes and the other did not.

Speaking of outcomes, of course, raises the question of how we decide what outcomes are desirable.  You can be liberal by this definition without necessarily being just or democratic.  Consider the Habsburg emperor Joseph II: he was a tremendous liberalizer of dysfunctional old feudal institutions, but also an extreme centralizing authoritarian.  Conservatives tend to worry that this is what liberalism could lead to.  Will liberal forces get caught up in some enthusiasm that makes them forget the “we the people” part?  In theory, it’s certainly possible.

But I’m not nearly as worried about the question of deciding which outcomes are desirable as you might think.  I see that the people, when given a true choice, can mostly be counted on to support liberty over slavery, equality over privilege, and responsible preparation for the future over short-term indulgence.  Different conservative movements have widely varying ideas of what kind of society everyone should have to live under, and liberal ideas of what they’d like to see also have lots of variety.  But when you put everyone together, basic fairness and justice are concepts that almost everyone supports.  And if they occasionally forget that, they will remember it when reminded.  So I consider the question of exactly what outcomes to pursue to be a secondary issue.  Given a true choice in the matter, the people can normally be depended upon to choose fairly well.  The important thing is just that they are really able to have that choice — the opportunity to notice when something isn’t working, and do something about it.

July 19, 2014

Freedom and Christianity

Filed under: Rantation and Politicizing — Supersonic Man @ 7:49 am

Why do so many people who believe strongly in freedom also believe strongly in the Christian bible? The two aren’t particularly compatible. In the Old Testament, slavery is endorsed and its victims are told to know their place. (There is one exception: Deuteronomy forbids sending escaped slaves back.) And the New Testament isn’t really any better: its spiritual virtues are all about humility and submission, not just to God but to earthly masters as well.

Small wonder that some would decide that the divine figure who represents liberty is not Christ or Yahweh, but Satan. He’s all about letting you make your own choices and live with the results. But is the Satanic approach any better? That’s where you can do whatever you want, not just for yourself, but against anyone else. Take from your neighbor if he’s weaker than you, rape anyone you can catch, or despoil things that other people depend on — it’s all good with Big Red. He’s even less opposed to slavery than Jesus was.

Neither offers any concept of human rights. Those are a strictly secular invention. And without them, there’s no way for a society that values freedom to preserve it in practice.

June 19, 2014

the Swift programming language(s)

Filed under: Hobbyism and Nerdry,thoughtful handwaving — Supersonic Man @ 9:52 pm

So Apple is regretting the corner they painted themselves into by having their core development language be Objective-C.  This language is a horrid mashup made half of Smalltalk and half of traditional unreconstructed C.  Compared to C++, the modern half is more modern, but the primitive half is more primitive.  Steve Jobs used it for NeXT during his time away from Apple, and brought it back with him.  But what looked cool and exciting in 1986 is looking awfully outdated today.

The trend in the industry is clearly moving away from these half-and-half languages, toward stuff that doesn’t inherit primitive baggage from the previous century.  Microsoft has had great success by stripping all the old C-isms out of C++ to make C#, and Java — the oldest and crudest of this new generation of programming languages — may still be the world’s most widely used language, even though most people probably now see it as something that’s had its day and is not the place to invest future effort.

Now Apple has announced a nu-school language of their own, to replace Objectionable-C.  They’re calling it Swift.  It’s even more hep and now and with-it than C#.

There’s just one problem: there’s already another computer language using the name.  It’s a scripting language for parallel computing.  Its purpose is to make it easy to spread work over many computers at once.  And this, to me, is far more interesting than Apple’s new me-too language.  (Or any of the other new contenders coming up, like Google’s Go or the Mozilla foundation’s Rust.)

See, massive parallelism is where the future of computing lies.  If you haven’t noticed, desktop CPUs aren’t improving by leaps and bounds anymore like they used to.  Speeds and capacities are showing a much flatter growth curve than they did five years ago.  You can’t keep making the same old CPUs faster and smaller… you run into physical limits.

And this means that if we want tomorrow’s computers to be capable of feats qualitatively beyond what today’s can do — stuff like understanding natural language, or running a realistic VR simulation, or making robots capable of general-purpose labor — the only way to get there is through massive parallelism.  I think that in a decade or two, we’ll mainly compare computer performance specs not with gigahertz or teraflops, but with kilocores or megacores.  That is, by the degree of parallelism.

One problem is that 95% of programming is still done in a single-tasking form.  Most programmers have little idea of how to really organize computing tasks in parallel rather than in series.

There’s very little teaching and training being directed toward unlearning that traditional approach, which soon is going to be far too limiting.  Promulgating a new language built around the idea — especially one that makes it as simple and easy as possible — strikes me as a very positive and helpful step to take.  I’m really disappointed that Apple has chosen to dump on that helpful effort by trying to steal its name.
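The structural shift involved is easy to show in a few lines.  Here’s a sketch (in Python, purely for illustration, rather than either Swift): the same independent units of work, expressed first in the traditional serial form and then handed to a worker pool.

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # Stand-in for an expensive, independent unit of work.
    return sum(i * i for i in range(n))

inputs = [10_000, 20_000, 30_000, 40_000]

# The traditional single-tasking form: one task at a time, in series.
serial = [work(n) for n in inputs]

# The parallel form: the program states only that the tasks are
# independent, and the pool decides how to spread them over workers.
# (A thread pool is used here for portability; CPU-bound Python work
# would really want a process pool, and a language like the parallel
# Swift aims to spread the same structure over many machines.)
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(work, inputs))

assert parallel == serial  # same results, different organization
```

The point of a language built around parallelism is that the second form becomes the natural default, rather than something bolted on afterward.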

June 12, 2014

getting through the Dumb Layer

Filed under: Rantation and Politicizing,thoughtful handwaving — Supersonic Man @ 1:38 pm

I am defining a new term, “the dumb layer”.  What I mean by the term is, any layer of human interface which is designed to deal quickly, or at low cost, with simple issues and questions that do not (or which someone assumes should not) require an intelligent response.  Some examples of Dumb Layers are:

  • voice menu systems that answer telephones, which you have to navigate through (or even actively outwit) in order to speak to a live person
  • software user interfaces which hide their more complex features behind some kind of gateway where you have to select “advanced settings” or “expert mode”
  • scripts which tell customer service workers to answer all incoming queries initially with a canned response covering basic common issues, so that you never get a relevant or thoughtful reply until you send in a followup query
  • bureaucracies which routinely reject legitimate requests for action until you show persistence in nagging them, or which ignore you until you submit lots of “required” paperwork
  • software which has a simple GUI to make it easy to use, but also a command-line or script-based interface which is more powerful
  • anything that appears when you click a Help button
  • anything that provides premade style templates as an alternative to manual styling
  • the Check Engine light in your car
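In software, the benign version of this pattern can be sketched in a few lines (all names and parameters here are invented for illustration): a simple front door that covers the common case, with the smart layer still reachable for those who know to ask.

```python
def convert(path):
    """The dumb layer: one obvious call, sensible defaults, no questions asked."""
    return convert_advanced(path, quality=80, colorspace="sRGB",
                            strip_metadata=True)

def convert_advanced(path, *, quality, colorspace, strip_metadata):
    """The smart layer: every knob exposed.  (A stub, for illustration.)"""
    return {"path": path, "quality": quality,
            "colorspace": colorspace, "strip_metadata": strip_metadata}

# Most users only ever see convert(); the test of a well-designed
# dumb layer is whether convert_advanced() is discoverable and
# reachable when you actually need it.
settings = convert("photo.jpg")
```

Whether the layer is helpful or obstructive comes down to exactly that question: is the smart layer still there, and can you get through?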

There are lots more types.  A lot of times, a Dumb Layer is a feature of a machine, with the goal being ease of use for the majority, but there are lots of human institutions that also have a Dumb Layer, implemented formally as a set of rules that employees are instructed to follow in dealing with the public, or informally as an attitude of lackadaisicalness toward anyone who they think they can safely ignore.

Dumb Layer design wasn’t very common when I was young.  Nowadays, Dumb Layers seem to be everywhere.  And getting good service depends on how adroit you become at penetrating through the Dumb Layer to reach someone who is empowered to think about what they’re doing.  This becomes annoying when an issue takes an extended back-and-forth over several days, particularly over the phone, as you may have to redundantly re-navigate the Dumb Layer on each new call.

Sometimes, designers and authorities are seduced into believing that the Dumb Layer should be able to do everything, and that there’s no need to let anyone through to anything smarter.  This makes economic sense if you’re providing some service at a super low price and can’t afford to give interactive support.  But it can also afflict systems and institutions that really don’t have any excuse.  Such systems can gradually become acutely dysfunctional, even as superficially most business goes on normally with no problems.

Note: sometimes what appears to be a Dumb Layer is actually a security layer, and needs to be there to limit unauthorized access.  And sometimes it’s a safety feature, like traction control and antilock brakes.  (A Harley-Davidson has no dumb layer.)

I wrote a post here a while back about how Google and other search engines seem to actually be getting less useful as they “improve”.  I think this is an instance of the dumb layer taking over.  The smart layer of Google is getting more and more inaccessible, and Bing and the others don’t seem to be any better.

Lastly, I just want to applaud some organizations which have chosen to have no Dumb Layer.  Wikipedia is one: you search for advanced quantum mechanics or unsolved problems in mathematics, and you get the whole enchilada plopped down right in front of you, not filtered or simplified in any way — just as the ability to add new content is right there with no filter, as long as you don’t abuse it.  To me, this makes it not just the largest and most accessible encyclopedia ever assembled, but also the most genuinely useful.  People may put in facetious stuff, such as describing Solange Knowles as “Jay-Z’s 100th problem”, but every other encyclopedia loses far more value because of all the detail that the editors decide has to be left out as not of broad enough interest.

I don’t use Photoshop, but as I understand it, it has no dumb layer.  This is one reason people consider it the definitive tool.  (I use less expensive competing software such as DxO, which does have a bit of a dumb layer, but you can quickly un-hide all the smart bits.)

One reason people often prefer Android to iOS is because its dumb layer is more permeable.

Anything that is shared as “open source” is a strike against dumb layers.

May 31, 2014

the unifying force of conservatism

Filed under: Rantation and Politicizing — Supersonic Man @ 11:48 am

The Republican party, which as we know is in a fairly desperate situation with regard to attracting younger voters, may have hit on something that will help: they’re beginning to dabble in marijuana reform.  This has always been attractive to libertarians, and as the religious fundamentalists and old-south racists fade away as electoral powers, a libertarian wing of the GOP looks like it might be coming into ascendance.

There seem to be two branches to the GOP base.  (We leave aside, for the moment, the monied establishment insiders, and look just at mass voting groups.)  One includes the aforementioned fundies and racists, plus assorted other know-nothing antigovernment types such as the militia movement.  This is the traditional GOP base, the angry white male low-information voters, the foundation of the “southern strategy”… Bobby Jindal’s “stupid party”.  One thing that unifies this bloc is a negative attitude toward science and intellectualism.  This group is concentrated in the older generations and is, thankfully, fading away with time.

The other branch is not anti-intellectual at all, and isn’t particularly religious.  Many of its members are successful high-tech engineering types, or smart go-getters in finance.  It is racially colorblind as a rule, and not notably patriarchal or misogynistic (though it still may attract men a lot more easily than women).  Unlike the first group, they usually have modern attitudes about issues having to do with sex and drugs.  They have no problem indulging in the kinds of decadent lifestyles that the first group loves to hate.  This group includes libertarians, objectivists, and a variety of more mainstream groups who just happen to believe in entrepreneurship, small government, low taxes, light regulation, and an ideal of meritocracy, such as for instance the Log Cabin Republicans.  It can include people who are in some ways quite radical, such as transhumanists.

The difference between the two groups is so wide that one might naturally wonder what they’re doing in the same party in the first place.  Where is their common ground?

The only one I can see is that they all defend privilege.  Libertarians back a system that lets all the people who have wealth and private power keep it and expand it.  Cultural traditionalists, including religious traditionalists, end up defending traditional advantages for favored groups.  Racists and sexists oppose taking any advantage from white males, who happen to include most of the rich and powerful people protected by the other groups.  They can all agree, basically, on letting the rich get richer, and though none of the three actually wants an impoverished middle class, they’re all capable of lying to themselves about that somehow being unrelated to the protection of the wealthy which they support.

So letting the rich get richer is now the only thing that the Republican party, as a unified whole, stands for.

May 29, 2014

why the insurance industry is highly regulated, and needs to stay that way

Filed under: Rantation and Politicizing — Supersonic Man @ 10:41 am

I currently work for an insurance company.  Not a great big one, but a leading one in its niche, and one that seems to treat its clients pretty well.  And I get to see firsthand the impact of heavy regulation on the company.  Some people look at this impact and argue that business would be much more efficient without it.  But that won’t work in this case.  I will explain here why the insurance industry needs this regulation.

Let’s assume that some insurance company executives are more ethical and principled, and others are less so.  Let’s consider the incentives acting on the latter group.  Charging high premiums is good for revenue, but it drives away new customers.  Advertising lower premiums brings in new customers, allowing total revenue to go up by more than enough to compensate for the loss of margin on each.  So the incentive is to compete with other insurers on a low price.

Paying claims is bad.  It not only loses money right away, it forces the raising of premiums.  The more claims you can avoid paying, the more competitive a rate you can offer to new customers.

Now, the revenue from premium payments is steady, but the cost of claims is not.  One year might be twice as expensive as another.  This means you need a cash reserve.  But maintaining a reserve requires slightly higher premiums, so the incentive is to keep it small.  The incentive is to underestimate the degree of variability, to assume that the exceptionally bad claim years won’t happen.

The cash reserve can earn money through investment.  Some investments pay more than others, but the ones that pay more also pay at a less predictable rate.  So the incentive is to underestimate that variability and invest in overly risky opportunities for capital growth.

Whenever a company has a few good years in a row, the amount of saving they’re doing for future claims starts to look excessive, and the incentive is to think that it’s now safe to cut back on savings, and lower premiums.

Put all this together and what you get is a situation where market forces push insurers toward being underprepared for major claims.  If some companies prepare well and others prepare poorly, the ones making poorer preparation will tend to draw customers away from the ones preparing well, by offering lower prices.  This can force the companies that are reluctant to do so to also shave their premium prices.  So the number of poorly prepared companies has to increase.  And because the rewards of underpricing are immediate but the negative consequences are uncertain and may not occur for many years, it’s very easy for people to convince themselves that they’re not underpricing at all.

Eventually, a crunch will come — the investments may go south, and at some point the big claims will hit.  If the investments do poorly enough, even routine claims may suddenly be too big.  Once this happens, the company can either exhaust their capital, or cheat their customers, coming up with some technicality as grounds to refuse to pay a claim.  The latter might work, or it might make them liable to a costly lawsuit.  Either way, it’s not sustainable.  Eventually, you either have to give up the money, or lose your reputation so customers won’t trust you with their money anymore.
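This incentive story can be made concrete with a toy simulation (every number here is invented purely for illustration): two insurers face an identical schedule of claim years, but one prices in the bad years and keeps a cushion, while the other prices for a typical year and keeps a thin reserve.

```python
# Claims are modest most years, with an expensive year every fifth year.
# (A fixed schedule rather than random draws, to keep the sketch simple.)
claims = [160 if year % 5 == 4 else 80 for year in range(30)]

def run_insurer(premium, reserve, claims):
    """Track the cash reserve year by year; stop at insolvency."""
    history = []
    for c in claims:
        reserve += premium - c
        history.append(reserve)
        if reserve < 0:  # the crunch: a big claim year exhausts the cushion
            break
    return history

# Average yearly claims are (4*80 + 160) / 5 = 96.
prudent = run_insurer(premium=100, reserve=100, claims=claims)    # prices in the bad years
underpriced = run_insurer(premium=82, reserve=40, claims=claims)  # prices for a typical year

assert len(prudent) == 30 and prudent[-1] > 0         # rides out every bad year
assert len(underpriced) < 30 and underpriced[-1] < 0  # insolvent at the first bad year
```

Note that for the first four years the underpriced company looks perfectly healthy — and is undercutting the prudent one on price the whole time.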

Now a free market apologist might argue that the market will eliminate these bad companies eventually, thereby leaving the good ones behind.  But once some companies turn bad, competitive pressure forces the others to either get down in the mud with them, or shrink and become minor niche providers.  This means that the majority of customers buying insurance end up being not really insured.  Legally they’re covered, but in practice they aren’t — they’re still subject to the risk they tried to eliminate by insuring themselves.  The result of this is that the entire industry ceases to function; no real insurance is being provided except to a select few who are willing to pay a top-shelf price for it.

As a result of this, the only way to have an insurance industry that really insures people is, firstly, to force insurers to expose their financial data, so that everyone can see whether they are at risk of failing to pay claims, and then to test those findings against some standards of financial preparedness.

But that’s not the only issue.  Even when all the insurers are financially healthy, each one still has an incentive to resist their obligations to pay claims.  There’s always an immediate reward for finding ways to deprive a customer of coverage when they make a claim.  And if some companies do so and others don’t, those companies gain a pricing advantage in the short term.  This can mean that the whole industry, again, can be dragged toward failing to really cover their customers.  This isn’t theoretical: a situation like this did happen in the pre-Obamacare health insurance industry.  It became the norm to cheat and rob many customers through legalistic trickery.  (We shall see in time how much difference the new law manages to make.)  The auto collision insurance industry has allegedly also showed tendencies in this direction, when the regulatory climates of particular states allowed it.

Again, the free market does not weed out the bad apples — or rather, it doesn’t weed them out quickly enough to discourage their proliferation.  Instead, it pushes the good ones to emulate the bad.  And even if you clean out all the bad ones, the least good of the ones remaining will still exert a slight downward pull, which increases with time.

The only way to overcome this steady downward movement is to put a floor under it, which means setting and enforcing minimum standards of dependability in paying claims.  Our current regulatory environment, unfortunately, is spottier in this area than it is in the financial one.  But when and where it’s done properly, the insurance industry can function pretty well, providing a valuable and necessary service.  When it doesn’t, the industry can gradually shift from useful to parasitical.  If that shift becomes complete, one might as well never have bothered to start insuring oneself.

May 13, 2014

cosmic inflation

Filed under: Hobbyism and Nerdry,thoughtful handwaving — Supersonic Man @ 10:24 am

The cosmological inflation theory always sounded weird to me.  I’ve been reading a bit about it, trying to get my head around what they’re claiming.  And I’m unconvinced that it’s a valid theory, even though it’s currently winning at prediction.

The classical big bang theory certainly has problems that need addressing.  Now, there’s no doubt that there was a bang which was big.  Everything we see in the universe is flying apart from everything else, and behind everything we can see is a thermal glow which, if you account for redshift, appears to come from hot gas just at the point where it cools enough to be transparent. (This is around 3000 Kelvin, about as hot as a halogen lamp filament).  So there’s no way around the conclusion that, thirteen point something billion years ago, the entire observable universe was packed into a much smaller volume which was dense and hot, and that it exploded out at terrific speed.  That much is clear.

The problems arise when you try to extrapolate what came before the ball of dense hot gas, which clearly was already expanding.  The math says that it must have all expanded from a volume that was much smaller and hotter still.  In fact, the classical mathematical solution to the big bang insists that the initial explosion must have taken place in a region that wasn’t just tiny, or even infinitesimal, but in no volume at all: an absolute single point where density and temperature equalled infinity.

This answer is nonsensical.  Scientists are now rightly rejecting it, as many of them also reject the notion that the center of a black hole must be a singularity of zero size.  Clearly both are oversimplifications.

The trouble is, if you postulate anything else, the consequences run up against observed data.  The key point of observation is that the universe, as far back as it’s possible to see, has an essentially uniform density and temperature in all directions.  It appears very much as if the volume of hot gas which existed at the earliest moment we can see was in a state of thermal equilibrium with itself, as if all energy differences had been allowed to settle down and blend themselves together at a smooth common level of heat.

But such a blending could not have happened.  There was no time for it to happen in.  And worse, the uniformity includes regions which could not have interacted with each other, because light is only now managing to cross from one to the other, or hasn’t even done so yet.  If we look at opposite sides of our sky, we see as far as light has travelled either way in all of history, which means the total distance from one side to the other has only been crossed halfway.  Since no influence can move faster than light, the two opposite sides cannot possibly have exchanged any energy with each other.  It’s true that they may once have only been a foot apart, but they moved away from each other so close to lightspeed that the light from one side still hasn’t reached the debris of the other.  This means they cannot have exchanged any form of energy.
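The arithmetic behind that claim is simple enough to write down.  This is a deliberately crude sketch — static space, no expansion, c = 1 light-year per year; the proper general-relativistic treatment changes the numbers but not the conclusion:

```python
age_years = 13.8e9             # rough age of the universe, in years
horizon_ly = age_years * 1.0   # farthest distance light can have travelled,
                               # in light-years (c = 1 ly/yr)

# Two patches on opposite sides of our sky are each roughly one horizon
# away from us, so roughly two horizons away from each other.
separation_ly = 2 * horizon_ly

# No signal, and hence no exchange of energy, could have crossed
# between them: the gap is twice what light could have covered.
assert separation_ly > horizon_ly
```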

The smallest random quantum fluctuations back then should loom large today as differences in the cosmic background temperature, and in the density of galaxies.  The differences are too small to account for without some means of smoothing them away.  It can’t have been ordinary thermal equilibrium.  What could it have been?

Similar questions apply on more esoteric levels too, like why the curvature of space appears to be so near to the ideal value that makes it absolutely flat.  It’s not that it’s close today that’s the problem, since our measurement of it is not all that precise… it’s that any small departure would increase over time, so if it’s close now, then in the distant past it must have been ultra-close with improbable precision.

The inflation theory is an attempt to answer these questions.  It postulates that some unknown repulsive force caused space itself, and the matter in it, to undergo some kind of self-generating expansion which kept the universe hot and dense while it grew at an immense rate.  Then, at some point, it ran out of gas and the universe started coasting outward in a conventional way, as we now observe.

The math they postulate for this process would have the effect of ironing out the irregularities that preceded it, making everything steadily smoother and flatter as long as it continued.  And it allows for a time before the inflation started, in which parts of the universe that are now inseparably distant could have achieved energy equilibrium.  It would only have required a brief instant of delay between the initial creation, when all the matter may have been trapped by gravity in a very small initial volume, and the commencement (by entirely hypothetical means) of inflation.

It might even mean that the initial appearance of the pre-inflated universe could be possible as a quantum fluctuation in vacuum, because the amount of energy that needs to appear from nothing might be comparatively tiny.  In these versions, the universe’s gravitational field constitutes a store of negative energy, and the total net mass of the cosmos is zero.

But if you run it backwards to see what initial conditions could have worked out that way, many say that it falls victim to the same problems as a conventional big bang.  Whatever state preceded it has to have been already uniform to an utterly unnatural degree.  Some say that it actually makes the problem worse.

But aside from that, the whole theory just seems like the ultimate in ad-hockery, a contrivance of arbitrary rules and conditions based on imaginary physics, tuned to “predict” observed results. And it makes some claims that are hard to credit, like that the inflationary expansion was in some sense faster than light. Apparently this is a mathematically allowed solution to the equations of general relativity.

We already knew that we can’t account for the ultimate question of why there is something and not nothing.  Even a religious hypothesis doesn’t have an answer for that.  But even leaving that aside, it’s looking like we’ve really got no answers as to what physical conditions must have existed in the early part of the big bang — at a point where matter and energy had fully come into being and started obeying the physical laws of our cosmos, but before anything we can observe.

I’d say both the conventional big bang theory and the inflation theory must be missing something essential.  There’s something big going on there which we’re totally failing to see yet.  Probably something that will look blatant and obvious in hindsight someday. Maybe brane theory, or something equally far out, can provide a missing piece if it develops enough.

The inflation theory may be half true. I’m sure some parts are valid. Maybe even most parts. The part where energy and gravity cancel each other out and can therefore be created together in arbitrarily large quantities, for instance, sounds pretty attractive. But I think we’re probably still missing some key piece of context in which these parts can make a good overall theory.

It’s clear that the inflation hypothesis in its current form is incomplete.  I wouldn’t be surprised if whatever comes along to complete the picture ends up discarding a large part of it in the process; the hypothetical inflation stage itself seems like a prime candidate for something that might turn out not to be needed at all, if we only knew what was missing.

October 22, 2013

“pension crisis”

Filed under: Rantation and Politicizing — Supersonic Man @ 5:30 pm

I’m starting to hear policymakers complaining again about the exploding cost of pensions, and looking for ways to screw their older workers out of the pay they were promised for doing their jobs.

Let me make one thing clear: there is no soaring cost associated with pensions.  Pensions are no more expensive, and no more valuable, than they ever were.  It is not their price that has gone up.  It is our willingness to pay that has gone down.  Pension costs are only exploding exponentially in comparison to what we currently like to pay working people.  Or in comparison to our willingness to raise tax revenue from investors and corporations.

The only reason we don’t realize how much less we’re all paid is that we can still afford the insanely cheap consumer products we import from China.  In many cases those products don’t even make a profit for their manufacturers; in a fairly real sense, they are probably paid for less by the retail prices we hand over in the big-box stores than by the Federal Reserve’s export of dollars for use by overseas capitalists as an international trading currency.

The anti-pension propagandists would have us believe that it was runaway pension costs that sank the municipal government of Detroit.  Not true.  What ruined Detroit financially — besides the obvious problems of massive cutbacks in manufacturing employment throughout the region — was an “interest rate swap” scheme sold to them and other cities by Wall Street banks Merrill Lynch and UBS.  Sold, in some cases, by directly giving campaign contributions to the politicians who agreed to the plans.  Under this scheme, surprise: the banks got rich and the cities got poor.  Because this scheme was applied to the pension funds, suddenly the big budget hole was a “pension cost”.

September 28, 2013

the end of the US trade deficit?

Filed under: Rantation and Politicizing,thoughtful handwaving — Supersonic Man @ 9:55 pm

Two years ago, I wrote a post about deficits, and whether we should respond to the recession by printing even more money.  It discussed the causes of our deficit spending, and the theory which says that it’s impossible not to run a constant governmental spending deficit as long as we run a trade deficit… which means, in other words, as long as other countries keep selling us goods in exchange for dollars that they don’t spend back here, but instead use as the medium of capitalism at home.  This theory, as far as I know, is most identified with the liberal economist James Galbraith.  It says that as long as we act as treasurer and currency supplier to capitalists overseas, we must run corresponding deficits domestically, which means the shortfall has to be either borrowed or printed (and watch out you don’t print too much).
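The accounting behind that claim is the sectoral-balances identity, which as far as I know is how Galbraith and company usually frame it.  It can be sketched in a few lines of arithmetic; the dollar figures below are invented purely for illustration:

```python
# Sectoral balances: from Y = C + I + G + (X - M) and Y = C + S + T,
# it follows that (S - I) + (T - G) = (X - M).
# So if the country runs a trade deficit (X - M < 0) while the private
# sector nets out as a saver (S - I > 0), the government balance (T - G)
# must be negative -- i.e. a budget deficit, borrowed or printed.

def government_balance(private_balance, trade_balance):
    """T - G implied by the identity (S - I) + (T - G) = (X - M)."""
    return trade_balance - private_balance

# Invented round numbers, in billions of dollars:
private_net_saving = 200.0   # S - I: private sector saves more than it invests
trade_balance = -500.0       # X - M: a $500B trade deficit

gov = government_balance(private_net_saving, trade_balance)
print(f"Implied government balance: {gov:+.0f}B (negative = deficit)")
```

With those invented numbers, a $500B trade deficit plus a private sector saving $200B more than it invests forces a $700B government deficit.  The point is the identity, not the particular numbers.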

But now I’m hearing about interesting happenings on this front from the more conservative economist John Mauldin.  And there’s big news here that I never expected.  I had thought that the “strong dollar policy” — the political decision to encourage worldwide demand for dollars, which produces all our deficits — would be something that it might be very difficult to back our way out of.  Among other things, it would require that some other currency could step up and be the new “reserve currency” that people use for international wealth.  I thought no other currency was in a position to take on that role.

Well, it looks like the Chinese renminbi (or yuan, informally) is starting to do just that.  And at the same time, the US is starting to do a lot more exporting.  Some of this has to do with increased fossil fuel production, some of it has to do with decreased wages, some of it has to do with Obama administration policies… and maybe some of it is actually a direct result of yuans coming out to play.

So, more quickly and easily than I ever thought could happen, we might be scaling back our production of dollars to a volume more suitable for a domestic economy.  And we’ll start seeing “Made in USA” on products again… which means that the incredible cheapness we’ve become accustomed to from Chinese products will become a thing of the past. (Some of the products we import now actually lose money for their makers.)  This would mean people will start to feel how poor they’ve actually become, and want better pay.  It’ll cause some discomfort.

And meanwhile, the newly ascended Chinese economy will find its greatest source of wealth drying up, and suddenly have to create prosperity on its own.  These factors will all tend to create a negative feedback on any such change, causing an inertia that will slow down the transition.  But Mauldin believes it will still happen faster than anyone expects.  Maybe, maybe not.

Mauldin’s friend David Brin, the SF novelist, points out one aspect of this which I hadn’t appreciated: namely, that the strong dollar policy, though best known here for how it hurt American workers, has done a tremendous amount of good for people overseas.  It has subsidized the creation of middle classes in China, India, and many other countries. It has helped lift a huge number of the world’s citizens out of poverty!  Brin has been accused of exaggerating the economic importance of this money, but still, in that light, the mild impoverishment of the American working class suddenly doesn’t seem like such a high price.

The “quantitative easing” that has been supporting the big banks since the last crash has also done a lot to stimulate developing economies, Mauldin says. That obviously needs to wind down. The Fed has warned that the first decreases may start soon.

Maybe the strong dollar policy’s job is now largely done and it’s time to move past that phase — time to wean the developing world from the dollar teat, in Brin’s terms.  It’ll have to be a gradual thing — too sudden a move might cause a crash in areas with a lot of current dependency on that cash flow.  And even a very gradual reduction is going to eventually cause some kind of shakeout. But apparently that transition is under way.

But it does sound alarmingly like it all depends, more than anyone likes to admit, on natural gas fracking. Which means that tightening the environmental regulations on that very unclean practice might end up restoring all our deficits. At least for the short term. This certainly helps me understand why policymakers are so eager to continue fracking despite the blatantly awful environmental costs… because it produces wealth vast enough to reshape the entire world economy.

September 27, 2013

Why I marched against Monsanto

Filed under: Rantation and Politicizing — Supersonic Man @ 10:24 pm

(I originally posted this on Facebook — this is just a copy-and-paste.  Another march is coming around…)

I’m pro-science and pro-innovation, so I’m not against genetic engineering as such. If somebody wants to Frankenstein up a zucchini that tastes like bacon, or a caterpillar that shits dental fillings, I say have at it. So why oppose Monsanto’s GMOing?

Well, firstly, because the main thing they’re using it for is to add more pesticides to our food. And not just the pesticide itself, but the DNA to keep making more… if that gets taken up by one of your gut bacteria and expressed there, you’re in trouble.

I don’t fancy eating food soaked in Roundup (glyphosate) either.

On the longer-term front, Monsanto has been particularly bad at abusing the legislative lobbying process to short-circuit any decent oversight or legal responsibility. They’re trying to make it so that even when they screw up and create some kind of environmental disaster, they won’t be held responsible for it even after the fact. This is bioscience done in the style of Wall Street banksterism.

Note that I say when, not if, they screw up and create a disaster. I say this because of two clear trends. One is that the more familiar we get with genetic engineering, the more we’re inclined to treat it as familiar and predictable and safe. Our perception of the risk trends downward. (And Monsanto’s perception of risk is even lower — helped along by the connivance of bought legislatures who want to insulate them from even the risk they acknowledge.) But while our idea of the risk is going downward, the actual risk we take is going nowhere but up.

When Calgene came up with the Flavr Savr tomato, the first GMO to be approved for human consumption, it was treated as a big scary thing to be handled with great care. But the actual risk was minute; it pretty much couldn’t go wrong. Nowadays, things like Bt soybeans are far more seriously risky — they and their fellow insecticidal crops may well be responsible for the poor health of honeybees across North America, not to mention any number of possible intestinal health complications — but we treat them as if they were safe and routine.

Sooner or later, the lines on the graph, of rising risk and dropping perception of risk, have to cross. Sooner or later, we’ll take one chance too many. And then we get an environmental disaster. And perhaps not just a one-time incident that can be cleaned up, like an oil spill… it might be a disaster that keeps on inflicting additional damage indefinitely into the future. And as long as we don’t get our attitudes right and watch the GMO industry in a way appropriate to the real risk they present, it is guaranteed that risk-taking behavior will only be checked when a disaster is eventually produced. That’s the way it goes in every other industry that isn’t watched. And the potential downside risk for GMOs is enormously large.

Monsanto needs firm opposition not so much for what they make now — though that’s objectionable enough — but for the way they’re actively working to keep the road to future catastrophe as wide and open as they possibly can.
