Points of Connection, Part Two: A is A . . . or is it?

What can one say about Watchmen that hasn’t already been said?  Since its initial publication in 1986–87, more ink has been spilled over the graphic novel by writer Alan Moore and artist Dave Gibbons than over probably any other modern comic book narrative. It’s been named one of the greatest (defined as most artistically accomplished, most influential, or most successful—take your pick) comic book projects in history, and even one of Time magazine’s Top 100 Novels, period.  Since the release of Zack Snyder’s 2009 film adaptation and DC’s decision to publish the Before Watchmen prequel books in 2012, there has been even more commentary and debate. I don’t intend to add more to the pile of Watchmen verbiage outside of the narrow scope I established in my last Medleyana post: the use of doppelganger characters.* Alan Moore’s influence on the mainstream has lessened as his projects have become increasingly idiosyncratic in recent years, but it is impossible to discuss the reworking of characters and the exploration of archetypes without bringing him up.

Watchmen is probably the best-known use of pastiche on a grand scale in comics. Originally, Moore meant to write his story about characters from the Charlton publishing company, which had been acquired by DC; after DC decided those characters could be profitably relaunched within DC’s established continuity, they were off the table, and Moore chose to create new characters along similar lines: the Blue Beetle became Nite Owl, the Question became Rorschach, and so forth.

Here’s where it gets interesting: as established previously, an intertextual double (a misprision, in poetic terms) follows the same broad outlines as the original, but is a character in itself, independent of its source.  Points of connection between the two are also points of departure: in other words, the double is only beholden to the original up to the threshold of reader recognition (for purposes of commentary) or satisfying the needs of a given character type (for narrative purposes); after that, they are effectively a blank slate, just like any other original character.  The difference between an effective analogue and a ripoff, then, has nothing to do with “originality” (a much overestimated quality, and especially meaningless in such a codified genre as the superhero), and everything to do with the creator’s success in infusing the character with convincing motives and actions.  If a character would live, it must have the spark of life: nothing else matters.  (I’ve alluded to the writer’s role in crafting a convincing character, and that is just as true for “original” characters as for doppelgangers, of course.)  For Moore, whose entire purpose was to establish a degree of psychological realism that had previously been only spottily attempted in the superhero narrative, the inner life was a given, but for the achievement to have impact, the characters would also have to resonate as plausible superheroes.**  The Charlton stable provided important models, but Moore and Gibbons also drew on the broader common property of superhero archetypes and the visual tropes of costume, accessories, and even the illustration styles of pulp novels, comic books, and advertising art in order to create a convincing, lifelike world, divergent from ours but believable nonetheless.

To cite an example from Watchmen, I had little familiarity with Steve Ditko’s severely moralistic vigilante the Question, or his follow-up character, the even more stringent Mr. A (whose uncompromising slogan, “A is A!”, was taken directly from Ayn Rand’s Objectivism), when I first read the graphic novel.  Still, Rorschach is a clear enough character type: a vigilante with a moral code so strict that no one can live up to it, with equal contempt for criminals, their victims, and even other heroes if they aren’t willing to go as far as he does.  The details that Moore invents for Walter Kovacs, Rorschach’s alter ego, speak to Watchmen’s interest in both the social problems and individual psychoses involved with superheroics: childhood sexual trauma, a connection to the infamous Kitty Genovese murder, and of course the horrific crimes that sped along Kovacs’ psychotic break.  One doesn’t need to know Ditko’s original characters to appreciate the drama, but it adds some intertextual depth (if anything, reading some of Mr. A’s cases shows how little Moore had to exaggerate Rorschach’s ruthlessness and black-and-white morality).

The Question dons his mask; art by Steve Ditko

Origin of Rorschach’s mask; art by Dave Gibbons

Likewise, I was familiar with the Blue Beetle from his introduction into DC continuity rather than his original Charlton adventures, but I didn’t immediately connect him to Nite Owl when reading Watchmen: he too is a familiar type, a “gadget hero” like Batman (or, to a lesser degree, Iron Man).  Within the narrative, Dan Dreiberg is actually the second Nite Owl, borrowing his name and persona from a Golden Age model, Hollis Mason (the first Nite Owl, representing both the ideals and the institutional memory of the original costumed heroes).  This pattern was true of the Blue Beetle, but also of characters such as Green Lantern and Hawkman, who had very different Golden and Silver Age incarnations.

Watchmen also benefits from an important opportunity afforded by pastiche: the ability to replace the ad hoc jumble of origins and histories typical of established continuity with a streamlined history that both gives all the characters a common point of reference and allows for meaningful points of connection between them that go beyond the simple “team-up.”  Although, as Geoff Klock points out, Moore has in many cases deliberately introduced the kind of contradictory history that plagues long-running comic book series into his original stories, in Watchmen he plays it straight, with his “real-life” costumed heroes taking inspiration from fictional comic book characters, and eventually supplanting them.  As for points of contact, in addition to the obvious shared history between them, there are subtle connections: the shape-shifting cloth which Rorschach wears as a mask, and from which he takes his name, is referred to as a spin-off of technologies introduced by Dr. Manhattan, the only truly superhuman character in the novel; other technologies and businesses mentioned are part of the empire of Adrian Veidt, the “self-made” superhero Ozymandias (and a major driver of the plot).

Film adaptations of superheroes often make connections where none exist in the comics in order to tighten up the plot, as for example the Joker/Jack Napier being identified as the killer of Bruce Wayne’s parents in Batman, or Ra’s al Ghul serving as both Wayne’s mentor and eventual antagonist in Batman Begins. Such circularity is more dramatically satisfying, and easier to establish, in a two-hour film or self-contained novel, although asserting such symmetries can be one function of rebooting or “retconning” an established series.  As an example from another narrative, when J. Michael Straczynski rebooted the Squadron Supreme for his 2003 series Supreme Power, he started from the ground up, effectively creating a “trope of a trope:” in Straczynski’s version, the escape pod that brought “Mark Milton” (Hyperion) to earth as an infant was part of an alien battle, the shrapnel from which also gave powers to the Blur (a trope of the Flash, replacing the original Squadron Supreme’s Whizzer, because really: the Whizzer?) and provided the “Power Prism” to Doctor Spectrum (a trope of Green Lantern, here reconceived as a special ops pilot nicknamed “Doctor” because of the surgical precision with which he executes his missions); some of the villains Hyperion faced were created through government experimentation with his own DNA.***  The reboot/misprision allowed Straczynski to focus on the elements that most concerned him: instead of the Squadron imposing its rule in the name of the greater good, as in Mark Gruenwald’s narrative, the Squadron are tools of a shadowy, not always benevolent government that doesn’t reveal its purposes to its super-military (as exemplified by Mark Milton’s upbringing by government operatives instead of Ma and Pa Kent), the expression of a twenty-first century anxiety that remains as relevant as ever.

The two incarnations of the Squadron Supreme by Alex Ross (l) and Gary Frank (r). Source: I love comic covers

Next time, I’ll examine a few examples from movies and television.

* But for the record, I liked Moore’s original “space squid” ending, and I think it could have worked on film if it had been reconceived in cinematic terms by a director more concerned with duplicating the feel than the look of the book.  How terrifying—and believable—could Peter Jackson or Sam Raimi have made that ending?

** Interestingly, Moore and Gibbons have stated that Mad’s parody “Superduperman” was an influence on their approach, revealing depravity and greed beneath the slick costumes.  It’s not uncommon for transgressions that are comical to one generation to be taken seriously and developed in earnest by the next.

*** Marvel attempted something similar with its “New Universe” line in 1986, sort of the flip side of DC’s unification of its universe, and showing that it isn’t easy to build a compelling narrative world from scratch.

Points of Connection, Part One: the Many Children of Krypton

Hyperion.  Supreme.  The Sentry.  What do these characters have in common?  All are doppelgängers, or doubles, of Superman, and not just in the sense that all costumed heroes descend from the Big S, or in the debt they all owe to Philip Wylie’s Gladiator and Friedrich Nietzsche’s Übermensch, nor even in their monomythic relation to Joseph Campbell’s Hero With a Thousand Faces.  Rather, they are thinly veiled copies, different enough in detail to escape litigation (or avoid confusing readers) but readily recognized by key elements of their persona, history, and/or supporting cast.

The double, or pastiche, is a powerful fictional technique, in which an established character is effectively remade (and frequently repurposed); it’s especially common in comic books, where “copycatting” is an established (if not especially reputable) practice.  As an example, the core members of DC’s Justice League of America—Superman, Batman, Wonder Woman, et al.—have been copied numerous times, individually and as a team.  It should be noted that I’m not speaking so much of identical twins or copies of the same characters inhabiting parallel universes, although those are equally common storytelling tropes. The doubling to which I refer is almost always intertextual, allowing a writer to tell a story including (a version of) a character owned by another publisher, or including story elements that would be unacceptable for a well-established (and ongoing) character.

Adhering to genre conventions is not enough: recall that National (the company that became DC) sued Fawcett over alleged similarities between Superman and Captain Marvel, yet the elements the two characters have in common—super strength and other powers, colorful costumes, secret identities, and an ethos of doing good—are practically universal among Golden Age heroes, and in other specifics the characters are quite different.  Superman, orphaned son of the doomed planet Krypton, doesn’t have much in common with Billy Batson, who is given his powers by the wizard Shazam.  It is precisely those details that a writer can exploit, filling in the pastiche character’s backstory with variations that are functionally the same; sometimes it is as simple as changing a few names (Superman’s Krypton becomes Hyperion’s Argon), at other times a more thorough reworking is undertaken, but the connections are still apparent because of the overall dynamic of the story.  This goes beyond parody, although the line can be fuzzy: Mad’s “Superduperman” and “Captain Marbles” are clearly a joke, but one intended to reveal, among other things, the venality and absurdity hidden beneath the costumed hero’s civic-minded facade (“Once a creep, always a creep!”). Hyperion (from Marvel’s Squadron Supreme) and Alan Moore’s take on Marvelman/Miracleman (instantly recognizable as Superman and Captain Marvel, respectively) are largely dramatic in their treatment, but just as flawed.

The value of the double is summed up by Geoff Klock in his How to Read Superhero Comics and Why, a study that looks at the evolution of superhero narratives through the lens of Harold Bloom’s theory of the anxiety of influence:

The current character, though obviously in debt to its source, can often act as a powerful misprision [a reflection, or reinterpretation] of that original character, while the fact that it is not actually the original frees the writer from the constraints of copyright and continuity.

For example, earlier in his book, Klock argues that “Warren Ellis’s Four Voyagers [from the pages of Planetary] are a trope of Marvel’s Fantastic Four, which is to say that while the Four Voyagers are characters in themselves, they are also an interpretation/metaphor of characters that have come before” (emphasis added).

Such misprision is most useful when the writer has something to say beyond aping an already successful character: in Klock’s scheme, informed by Bloom’s statement that “the meaning of a poem can only be another poem,” well-known characters stand in for their creators, so that one generation of writers can exorcise or assimilate the influence of the preceding generation.  (And obviously, the technique of parody allows the writer to zero in on whatever element of the original character they wish to critique, exaggerating it, sometimes to the point of absurdity; see above.)  One doesn’t have to agree with all of Klock’s conclusions to see the value of this dialectical approach, and in fact the finest realization of a pastiche character isn’t always written by the person who first created it.  Alan Moore took over Supreme, a character created by Rob Liefeld, and transformed him into a meditation on Superman; the resemblance was already present, but Moore brought it into focus.  As another example, Mark Gruenwald used the Squadron Supreme, Marvel’s trope of the JLA (originally introduced by Roy Thomas), to examine the relationships of the characters to each other, bringing out unspoken subtext or real-world concerns (such as the tendency toward paternalistic fascism inherent in the concept of super-protectors; the alienation of super-beings’ human friends and family; and the finality of death, as opposed to comic book characters’ typical return from the grave for shock value, marketing purposes, or narrative convenience) that would halt an ongoing series in its tracks if acknowledged. (Another version of the Squadron, effectively a trope of a trope, was launched in 2003; more about that later.)

Such concerns, when addressed at all, used to be the domain of the parallel universe or “imaginary story:” What if the Justice League used their power to oppress humanity instead of protecting it?  One answer was Earth-3’s Crime Syndicate of America; another was the Squadron Sinister, created as part of an unofficial “Avengers vs. JLA” crossover (since by Comic Book Law, when two characters meet for the first time, they must test their powers against each other in battle; the Squadron Sinister later, of course, became the Squadron Supreme). Later, such projects as Frank Miller’s The Dark Knight Returns, Mark Waid’s Kingdom Come, and Darwyn Cooke’s DC: The New Frontier would address many of these subjects using flagship characters in speculative settings outside regular continuity, but Squadron Supreme (1985) predates the more critical approach to characterization kicked off by Alan Moore’s Watchmen, and by Miller himself, and the aforementioned projects benefited from the more fluid approach to continuity that became fashionable after the high-water mark of Crisis on Infinite Earths’ obsessive attempt to keep things in fixed positions.

Time is short tonight, so I’ll save a discussion of Watchmen, one of the most prominent and influential reinventions of this type, for next time.

Am I the only one who goes back to read my old comments on online forums?

I’ve been active to varying degrees on a few different websites over the years (no, I’m not saying which ones—those things are pseudonymous for a reason!), and most commenting systems let you look at all of the comments made by an account at once.  A few years ago, I mentioned to a colleague that while commenting online includes being part of a conversation, it is also something like a mirror.  It was difficult to explain what I meant by that, but I think I had the review function in mind: going back (sometimes years, in the case of a few websites I’ve spent way too much time on), I can see a clear picture of who I was, what I was doing, and what my thoughts were.

As I mentioned before, I was once a regular journal-keeper and diarist, recording my thoughts for posterity.  Part of the appeal of journaling is the idea that someone in the future might want to read your writing, perhaps because your thought process and opinions would be worth knowing, or at least because your observations are clear enough to give an accurate picture of the world you live in, for history’s sake.  In that sense it’s just a few drafts away from being a memoir, composed one day at a time.  There’s also the more immediate pleasure of revisiting your own thoughts: very often I’ll encounter a detail in my writing that I had completely forgotten, and the written word will cause a flood of memories.

Reading my comments online can be like that, but very often it’s less like a diary and more like the conversation books left by Beethoven’s visitors late in his life: because of the composer’s deafness, visitors had to write their side of the conversation for him to read, leaving a record of only half the discussion.  It’s one thing to reread a comment that contains a fully-formed opinion and think, “Ah! Yes, that sums it up!” or “I remember that!”  It’s quite another to look at a comment reading “I agree!” (or, God forbid, “LOL”) and not remember what it was in response to, or to reread a comment that was obviously a real zinger in context, part of a very funny comment thread, but that falls flat or simply makes no sense in isolation.  Online interactions may be saved on servers forever, but not all exchanges were meant to be timeless: sometimes you just had to be there.

Taking part in online conversations has also helped me to sharpen and clarify my opinions: one can hardly write anything on the internet without facing disagreement, so writing (and defending) opinions, and accepting that others will see things differently, is an excellent spine-strengthening exercise.  I’ve seen more than one forum poster claim that taking part in the forum helped them to become a better writer, and to the extent that participating helped them solidify their point of view and express it clearly, I believe it.

Of course, all of this assumes a certain level of civility, not always easy to come by online.  I’m not sure the internet has truly lowered the level of discourse, as is sometimes claimed, or if it just allows us to see more of it than we would normally encounter without the flood of information coming to us through Facebook, Twitter, et al.  (And of course, even traditional media outlets now expect that their audience will want to talk back, a development that is mostly positive but which is also an open invitation to kooks everywhere.)  I avoid the comments sections of news sites like I would avoid bad neighborhoods; I resist the quixotic urge to correct every misinformed thinker I encounter online.  In retrospect, there are a few occasions I wish I had spoken up, but mostly I just get worked up and agitated arguing with people I don’t even know, and the well of ignorance sometimes seems bottomless: arguing with people could be a full-time job, and for some people it apparently is. I’ve come to believe that strong moderators are essential to keep lively discussion from descending into flaming and abuse, especially in a forum’s early going; after a forum has been around a while, with a number of regular posters, a tone is established, shaped in general by the content of the site and the guidelines the moderators lay down.  To state the obvious, speech online isn’t that different from everyday speech: you aren’t free to shout “Fire!” in a crowded theater.  The website isn’t “censoring” anyone: as a private enterprise, it is free to set its own standards of conduct.

I’m not as active in forums as I once was, and for some of the same reasons I don’t journal: I don’t have the time, and if I’m going to spend time writing I’d rather put the energy into something more permanent.  Being able to comment online is like having a bar or coffee shop in your home, open twenty-four hours a day, where you can always get into a conversation (or pick a fight).  That’s a strong temptation, and for most websites it comes hand in hand with a continuous flow of new content to spark discussion.  In that sense it’s not that different from the way I used to watch television, but it can feed into the feeling that I need to be entertained every moment, that I can never be alone with my thoughts.  I know I’m not the only one who feels that way (witness the productivity programs whose selling point is the ability to lock you out of your email and social media so you can get some work done); it’s a battle I keep waging, even if I know I’ll be more successful some days than others.

Everybody’s Looking for Some Action

There were big tables covered with comics standing upright in long rows.  A sign hanging from the ceiling said All Comics 5¢.  We began to flip through the comics.  Alan had a list with the titles and numbers of the comics he wanted.  It was slow work.  The only comic he found that was on his list of wants was a copy of Action Comics Number 1—but he didn’t buy it because there was a corner torn off the cover.  He said he only bought comics that were perfect.

That throwaway line, from Daniel Pinkwater’s young adult fantasy Alan Mendelsohn, the Boy from Mars, is played for irony, of course: even in 1979, when Alan Mendelsohn was first published, a copy of the first issue of Action Comics—the comic book in which Superman made his first appearance—was something the average collector could only dream of finding, in any condition.  It also establishes Mendelsohn’s character: exacting to the point of eccentricity, and confident enough to pass up the find of a lifetime because it’s not exactly what he’s looking for.  Later, Mendelsohn sells his comic book collection to finance the greater adventure he and Leonard (the narrator) are on: Mendelsohn remains cool while his buyer goes increasingly crazy for the rare finds Mendelsohn has.  By the end, Mendelsohn has the buyer eating out of his hand, and he and Leonard get the money they need, and then some:

 “The difference between that man and me,” Alan Mendelsohn said, “is that I am a connoisseur, and he is a fanatic.”

Both scenes play into powerful fantasies for young collectors: finding a holy grail—there have been more expensive comic books, but few that are as recognizable as Action Comics No. 1—and being able to leverage our finds down the road, using our connoisseurship to get one over on the drooling fanatics who’ll pay any price for what we have.

Action Comics No. 1

Sadly, for most collectors, the fantasies remain just that.  For the past few days, the comics blogosphere has been chewing over an article in Businessweek pointing out that you’re probably not going to be able to retire on the proceeds from your comic book collection.  As an example, columnist Frank Santoro offers an anecdote that stands in for the general trend:

He recently had to break the bad news to a friend’s uncle, who was convinced his comic collection—about 3,000 books—was worth at least $23,000. “I told him it was probably more like $500,” Santoro says. “And a comic book store would probably only offer him $200.”

When I read this, my first reaction was: “Well, duh.”  While auction prices for “key” Golden Age issues have continued to rise, it should be obvious that there is a big difference between Action Comics No. 1 and Secret Wars II No. 1, and the collections owned by the forty-something men in the article are likely to be more laden with the latter than the former.  I know, because I’m one myself, with a collection of bagged and boarded comics in the basement, and I doubt I’ll ever get much out of it in monetary terms.  Sure, it was disillusioning the first time I realized that selling something wouldn’t bring what I thought it was worth, but I’m willing to settle on a realistic price for anything; it’s just that “realistic” can look very different depending on whether one is buying or selling. I’ve encountered my share of junk shops, garage sales and Craigslist ads run by deluded souls convinced that they’re sitting on a gold mine.  I’ve seen scratched-up Beatles albums for $40 or $50 (and not the rare ones) and highly promoted but far from rare comics from the ’90s with high price tags.  Anyone who has collected anything, or even tried to buy a used car or piece of furniture, could tell similar stories.

The seller may have an inflated idea of the scarcity of their item, or they may have been swept up in the hype of ever-rising prices for collectibles in general; they may even have a price guide to back them up, which only proves that someone was willing to pay a premium for the item (in mint condition, which is usually not the case) at one time.  Still, something is really only “worth” what someone else is willing to pay for it, so I imagine those albums and comics are still sitting on the shelf, or were marked down or put into storage—or found a buyer as gullible as the seller (leading to the description of the back issue market as a “Ponzi scheme” in the Businessweek article).

But clearly, the idea that your old comic book collection would put your kids through college is an old one, an article of faith (or folklore) that predates the speculation boom of the 1990s.  As Alan Mendelsohn shows, it was already alive and well before the 1980s, when I was collecting, and it didn’t come only from the publishers and retailers who had a vested interest in promoting “collectability.”  The belief among collectors that we were stockpiling a monetary investment for the future had a “revenge of the nerds” quality, like the stories we told ourselves that we would all become successful inventors or entrepreneurs, getting the last laugh on the jocks, the bullies, the “normals” who got in our way.  I guess it worked out that way for a few people, but for the majority it was a self-serving myth (and for the record, Alan Mendelsohn got a hundred and eighty-five dollars and a brass potato for his collection; he didn’t become a millionaire).

As an extreme example, consider “Gather Ye Acorns,” a 1986 episode of the anthology series Amazing Stories.  In it, Mark Hamill (in one of his few post-Luke Skywalker, pre-voice acting roles) plays Jonathan Quick, a dreamy young man growing up during the 1930s, obsessed with comic books, pulp magazines and toys.  Pushed by his parents, who urge him to “grow up” and cast off his childish belongings, Jonathan is approached by a mysterious, gnome-like figure, a folkloric wild man (played by David Rappaport), who encourages him to keep the things he loves, to hold onto the magic of childhood.  What the world needs, the troll tells him, “is a few more dreamers.”  Over the years, Jonathan turns down the prospect of a normal life, descending into poverty and eventually living in a pitiful shack with his shabby old car and all his old junk; just when he has lost faith, he encounters (at the “Last Chance” gas station!) a knowledgeable (and wealthy) collector.  It turns out that the world has come around, and his belongings, which for so long were precious only to him, are now highly collectible.  The owner of a comic book shop is shown going through his treasures in disbelief: why, that’s Action No. 1, the first appearance of Superman!  The episode ends with an auction of Jonathan Quick’s collection; now wealthy, he encounters the troll one last time, and while trying to thank him, makes the acquaintance of an attractive lady.  Perhaps it’s time for Jonathan Quick to finally settle down.

In the broadest sense, that of allegory, “Gather Ye Acorns” is a story of holding on to the magic of childhood, of not letting others define you or devalue your passions. But I doubt I was the only viewer who took the story’s moral literally, as a vindication of the collecting lifestyle: “See?  All that stuff had value, and he ended up rich!”  It’s probably also a middle finger to the parents of Baby Boomers (like Steven Spielberg, who produced the series, and whose story Stu Krieger’s teleplay is based upon) who threw out all their kids’ baseball cards and comic books, and who, like the overbearing parents in “Gather Ye Acorns,” probably never saw that junk as anything but a waste of money in the first place.

Looking back at actual history, however, there was more than just the ever-present Generation Gap at work: the social upheaval and increased mobility of the Depression and World War II made the maintenance of big collections an unlikely prospect, even for those who might have been so inclined.  It’s hard for us nowadays to imagine how few possessions most families had compared to the present (and forget about renting extra storage space, as so many of us do now!).  Remembering the adage “every move is like a fire,” it’s also likely that in the migrations of the war years (from the Dust Bowl to the West; from the deep South to the factories of the North; from rural areas to cities, and from cities to the suburbs), preserving ephemera like comics was simply not a high priority for most people.  Finally, the war effort included paper and shellac drives that undoubtedly consumed thousands of comics, magazines, and records.

It’s for all the above reasons that the “key” issues from the Golden Age command such high prices, and why, barring a similar national upheaval, later issues probably never will.  Even as a kid, I could see that if everyone was saving their comics (and baseball cards, and whatever else), they would never become as scarce as material from the 1930s and ’40s (or ’50s, when a great number of comics were destroyed as part of the moral panic that led to the creation of the Comics Code).  In some cases, it was now parents who enabled the preservationist instinct, Baby Boomers themselves who didn’t want to repeat the mistakes of their own parents.  And of course, scarcity is only part of the equation: it has to be something people want in the first place, or low supply will do nothing to drive up demand.  Even if all those variant-cover comics from the ’90s disappeared, it’s unlikely they would ever be as sought-after as historically important issues like the first appearances of Superman or Batman.

Since we don’t know what will be scarce and desirable in the future, should we save everything, just in case? Interestingly, at the midpoint of “Gather Ye Acorns,” when Jonathan Quick is squatting in a shack in the desert, he resembles a figure that has become much more visible since this episode was broadcast: the hoarder.  His anger at the troll, his disgust with the “treasures” he’s spent his life hanging onto, and above all his despair—“I have nothing!” he howls—are the flip side of the usual narrative, and are a frighteningly real moment in a story that otherwise has the broad outlines of a fable.  Even with the happy ending, the story seems to stretch things when it suggests that his years of struggle were worth it, because he lived life on his own terms; this comes awfully close to romanticizing poverty, as if there were no middle ground between his parents’ rigid standards and life as a “bum.”  As writer Noel Murray asked when examining two current portraits of Americans’ relationship with our possessions, the TV shows American Pickers and Hoarders, “So which is it? Are we supposed to hang on to all of our old crap just in case it turns out to be valuable, or is that kind of packrattery the sign of a disordered mind?”

Mark Hamill as you, the reader, on Amazing Stories

Of course, you could still read your comics.  When I was a kid, I had a friend, Jason, who often came around to trade comics.  On the one hand, he was always interesting to talk to, and had a knack for digging up unusual books I’d never seen; on the other hand, he’d drive me crazy by going through my stacks, getting things out of order, and wanting to trade for issues that would break up continuous runs.  Condition didn’t matter much to him; it was the stories that were important.  Jason was a throwback, the kind of comic book reader who had supposedly disappeared with Leave it to Beaver: he didn’t bag his comics—he’d even roll them up to stick in his backpack, to my horror.  (In retrospect, I wasn’t much more careful, but I could be an awful snob.)

Despite my efforts to preserve my comics like a good investor, my best memories of being a comics reader in the 1980s are of getting together with friends to read and discuss comics, and even those marathon trading sessions that left me cleaning up and reordering my collection for the rest of the afternoon.  Similarly, some of the comics I most fondly remember finding at garage sales were reprints, some with the covers missing, of no monetary value at all.  I tried valiantly to be a connoisseur, but I guess I was really a fanatic all along.

Instruments of Death

“The Torture Garden: It’s where the Devil calls the tune . . . to play a concerto of fear!”

–Trailer for Torture Garden, 1967

In honor of Halloween, it’s time to look at the spookier side of musical instruments, specifically the roles some have played in mystery and horror fiction.  On the one hand, the organ has the most sinister reputation of any instrument through its association with the Phantom of the Opera and his fictional descendants: there’s just something about the full organ’s portentous sound and the gloomy atmosphere of the Gothic cathedral that goes hand in hand with cobwebs and candlelight, so expect to hear many renditions of Bach’s Toccata and Fugue in D Minor (or at least the opening bars) during October.  The organ, nicknamed “the king of instruments,” also fits nicely with the popular association of criminal masterminds with classical music: we like our villains to have refined taste, whether played by Vincent Price or Anthony Hopkins.  In the same way, the organist seated at his instrument, surrounded by ranks of keyboards, pedals, and organ stops ready at his command, is a neat visual shorthand for a master manipulator, sitting at the center of a web, controlling everything around him.  (In at least one case, the direct-to-video Disney sequel Beauty and the Beast: The Enchanted Christmas, the organ is the villain, conniving to make others do its will even though it cannot move from its place.)

Lon Chaney, Sr. in the 1925 film The Phantom of the Opera

Brian De Palma’s 1974 update, Phantom of the Paradise

The violin, on the other hand, is often associated with the Devil, as in such pieces of music as Danse Macabre, L’Histoire du Soldat, and “The Devil Went Down to Georgia.”  In folk tales, the Devil enjoys wagers, betting his own gold fiddle against the souls of his opponents.  He may also bestow musical talent in exchange for a soul, a prominent part of the myth surrounding Tartini’s “Devil’s Trill” Sonata. Later, the great Italian virtuoso Niccolò Paganini was the subject of lurid rumors that he had sold his soul, and worse: Theosophy founder Madame Helena Blavatsky included Paganini in her story “The Ensouled Violin,” and graphically embroidered on the notion that the strings of Paganini’s violin were made from human intestine, and that his uncanny ability to mimic the human voice with his playing actually came from a spirit trapped within the instrument.

A similar story is part of the mythology of the Blues: Robert Johnson was supposed to have met the Devil at a crossroads at midnight, where he traded his soul for his legendary guitar-playing ability.  The legend formed the basis of the Ralph Macchio film Crossroads and was parodied on Metalocalypse (in the episode “Bluesklok”).  Interestingly, Elijah Wald, in his book Escaping the Delta, has shown that the same story was originally attributed to a Tommy Johnson and then transferred to Robert when his legend outpaced Tommy’s.  Naturally, the whole thing has roots in folklore: Wald points out, “When Harry Middleton Hyatt collected stories of musicians going to the crossroads to gain supernatural skills, as part of a vast study of Southern folk beliefs in the late 1930s, he reported as many banjo players and violinists as guitarists,” as well as an accordionist.

Why is there such a connection between fiddling and death?  In the Middle Ages, instrumental music was considered both profane and frivolous, closely associated with itinerant, always-suspect actors and minstrels and the drunken singers in taverns.  In depictions of Death (usually as a skeleton, then as now), musical instruments were often a symbol of the sinfulness, vanity, and futility of all human activity, not just music.  (The popular image of Nero “fiddling while Rome burned” probably owes much to this symbolism, as the violin had yet to be invented in Nero’s day; likewise, contrast the supposed indolence of grasshoppers with the industry of ants.)  The image of a grinning skeleton “playing” his victims into the grave may have struck the medieval viewer as cruel irony, as a just punishment, or as a warning.

The medieval dance of death.

According to one author, the connection between the violin and mortality was more than just poetic: in 2006, Rohan Kriwaczek published An Incomplete History of The Art of Funerary Violin.  According to Kriwaczek, there had once been a Guild of Funerary Violinists, whose work, repertoire, and indeed very existence had been suppressed by the Vatican during the Great Funerary Purges of the 1830s and ’40s.  After 1846, the few remaining members of the Guild went underground, and Kriwaczek, eventually entrusted with their legacy, was able to piece together this secret history and bring it to the public. Kriwaczek describes the Funerary Violinist as playing a potent intercessory role:

In his tone the violinist must first convey the deep grief that is present in the gathering, and then transform it into a thing of beauty.  By the time he is finished, a deep and plaintive calm should have descended, and the bereaved should be ready to hear the eulogy. . . . The violinist’s is a position of great responsibility, akin in many ways to that of a priest or shaman, and should not be taken lightly.

Alas, the book was a hoax, supposedly concocted by Kriwaczek to increase his bookings as a violinist at—you guessed it—funerals.  Still, I can’t help but feel that Kriwaczek’s story, with its dueling Funerary Violinists, buried secrets, and cameos from outsized characters including composers, Popes, and virtuosi, would make a smashing TV program, a historical saga with more than a touch of gothic intrigue.

Sometimes the instrument is cursed: in “Oh, Whistle, and I’ll Come to You, My Lad,” a short story by M. R. James, the master of the English ghost story, it’s an ancient bronze whistle (proving that another James title, “A Warning to the Curious,” could apply equally to almost all his stories):

He blew tentatively and stopped suddenly, startled and yet pleased at the note he had elicited. It had a quality of infinite distance in it, and, soft as it was, he somehow felt it must be audible for miles round. It was a sound, too, that seemed to have the power (which many scents possess) of forming pictures in the brain. He saw quite clearly for a moment a vision of a wide, dark expanse at night, with a fresh wind blowing, and in the midst a lonely figure–how employed, he could not tell. Perhaps he would have seen more had not the picture been broken by the sudden surge of a gust of wind against his casement, so sudden that it made him look up, just in time to see the white glint of a sea-bird’s wing somewhere outside the dark panes.

Just as frequently it’s a MacGuffin that activates the plot: a Stradivarius is as valuable as a van Gogh, and serves as well as any other objet d’art as the motivation in a murder mystery.  An example is the three-quarter sized Strad, the Piccolino, at the center of Gerald Elias’ mystery Devil’s Trill, the first of a series centered on violinist-sleuth Daniel Jacobus.  And despite its unusual varnish, the titular instrument of the 1998 film The Red Violin is haunted more by tragedy and human foibles than by any supernatural evil.

The weaponized instrument is an infrequent literary device, but there are a few examples: the murder in Dame Ngaio Marsh’s Overture to Death is accomplished by a revolver hidden inside an upright piano, rigged to fire when the pianist plays the third chord of Sergei Rachmaninoff’s Prelude in C-sharp Minor, a Rube Goldberg arrangement that sounds about as practical in real life as this:

Likewise, it doesn’t seem that it would be that hard to escape the vengeance meted out by the grand piano in “Mr. Steinway,” a section of the 1967 anthology film Torture Garden, based on stories by Robert Bloch.  In the story, the piano in question belongs to a prominent virtuoso, a gift from his mother, and his devotion to it is tested when a young lady (played by Barbara Ewing) enters his life.  The black wing shape of the piano is a looming presence in the film version, always in the background or casting its shadow over the doomed couple, and the Oedipal implications of the pianist’s relationship with his mother, never seen but personified by the piano, are left as unspoken subtext.  So far, so good, but by the time the piano lurches into motion and pushes the intruding girl out the window, we’ve entered the realm of delirious high camp.  The lesson: music is a jealous mistress.

Finally, as a bonus, I present one of the most bizarre (and gratuitous) examples of this trope, from the 1976 film The Town That Dreaded Sundown: death by trombone.  Happy Halloween!

Play (virtual) Ball!

In honor of the World Series—go Cards!—I thought I’d take a look at a few ways the game of baseball has been translated into America’s other national pastime: video games!  There have been so many different adaptations of baseball that I’m limiting myself to a very unscientific survey of a few baseball video games I happened to already have in my collection (what, you expect research?).  If you want more, check out the link at the end of the article.

Intellivision Lives! gave me two opportunities to play that console’s baseball games, but they turned out to be very similar: World Championship Baseball was an upgrade of Intellivision’s original Baseball of 1978, reprogrammed to take advantage of the increased memory on the system’s later cartridges.  The original version only supported two-player games, so I played a few innings with both controllers in hand, pitching, hitting, fielding, and running the bases like a chess player taking both sides of the board.  The later version allowed one-player games, with the computer taking a side; it even allowed zero-player games, so I could watch the computer take on itself, endlessly running its routines, just like in WarGames. 

Intellivision Baseball

The superiority of Intellivision’s version to Atari’s was a cornerstone of Intellivision’s ad campaign, and it does look good: the entire field can be seen, and the players are recognizably humanoid, with little running legs (and a catcher’s squat when at rest).  If anything, the player is given too much control, down to the catcher’s responsibility to return the ball to the pitcher after a play, and with the entire field on screen, the player controlling the pitching team has to be able to switch control quickly from one fielder to another, while the runner has the advantage of a much narrower range of options.  Unsurprisingly, the batting side ran up the score quickly, but at least I could take credit for both sides’ performance.  (Playing with the Intellivision’s controllers surely would have helped: this was one of several games that came with inserts that could be placed over the number pad, so instead of remembering which number to press for a given play, you could press “Bunt” or “Second Base” or whatever was needed.)

From mightygodking.com

Several different baseball games were available for the Atari 2600, from the very primitive (the inspiration for the parody box cover Every Sport Ever in Pong Form) to games that squeezed every last drop of power out of the console. An example of the latter is Pete Rose Baseball (1988, very late in the Atari’s lifespan; okay, a little research went into this) for the Atari by Activision, a company formed by Atari programmers who went off on their own when they became disillusioned with the inequality between Atari’s profits and their own small paychecks and low status within the company.  As soon as they established there was no legal or technical barrier to releasing their own third-party software, they were off (incidentally opening the door to dozens of other companies flooding the market with games).  (I played the Game Boy Advance port, retitled simply Baseball for obvious reasons, found on the Activision Anthology.)

Pete who? Never heard of him.

A challenge for any version of baseball is the independence of the players on the field.  There are routines, and patterns, but not formations: each player has a role and must be able to execute it, and in addition to the challenge of computer A. I., there is the question of giving control of these independent actors to the player.  Activision’s Baseball (programmed by Alex DeMeo: crediting programmers with a byline was both part of the company’s marketing and a contrast to the anonymity of Atari’s practice) gets around this problem by not showing the entire field at once.  Six different “TV-like” views of parts of the field are shown, allowing close-up control of the pitcher and batter and limiting the fielders onscreen to only four at a time.  When fielding, the player can choose which team member to control by aiming the joystick and pressing the fire button: it takes a little getting used to but becomes automatic quickly, and strikes me as a reasonably elegant solution given the limited control scheme of the Atari.  The pitcher and batter have quite a few options, and the player has enough control that skill really matters: the first inning I pitched, the computer scored thirteen runs off me—suddenly I was the Dodgers, and the computer was the Cardinals.  (I should also note the sound effects: the game begins with the last few notes of the national anthem, and a square-wave “organ” periodically plays between at-bats; after a play, the “crowd” roars, actually modulated white noise similar to the countdown to Armageddon heard at the end of every Missile Command game.)  Overall, it’s not a bad effort for a console with well-known programming constraints.
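For the curious, it’s easy to sketch how such a direction-based selection scheme might work.  The following Python fragment is purely my own illustration, not Activision’s actual logic: the fielder positions, the use of the ball as the reference point, and the function names are all invented for the example.  The idea is simply to hand control to the fielder whose bearing best matches the direction the joystick is pushed.

    import math

    # Hypothetical fielder positions on a normalized field (my invention,
    # not taken from the cartridge).
    FIELDERS = {
        "pitcher": (0.0, 1.0),
        "catcher": (0.0, -1.0),
        "first baseman": (1.0, 0.5),
        "third baseman": (-1.0, 0.5),
    }

    def pick_fielder(stick_dx, stick_dy, ball_x, ball_y):
        """Return the fielder whose bearing from the ball best matches the stick."""
        stick_angle = math.atan2(stick_dy, stick_dx)
        def angular_gap(pos):
            bearing = math.atan2(pos[1] - ball_y, pos[0] - ball_x)
            diff = abs(bearing - stick_angle)
            return min(diff, 2 * math.pi - diff)  # wrap around the circle
        return min(FIELDERS, key=lambda name: angular_gap(FIELDERS[name]))

    # Push the stick up and to the right, toward first base:
    print(pick_fielder(1.0, 0.5, 0.0, 0.0))  # "first baseman"

However the real cartridge managed it, the appeal of the scheme is the same: one joystick and one button are enough to address any of nine fielders without a mode switch.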

Neither the Intellivision nor the Atari games offer any variety in the lineup, just interchangeable hitters and fielders.  Jumping ahead a few console generations (I told you this wouldn’t be comprehensive!), the increased power gave players the chance to manage by assembling a dream team and maximizing the effectiveness of the batting order.  I wouldn’t call it realistic, but Mario Superstar Baseball for the GameCube has both character and strategy in spades.  Like all the Mario sports spin-offs, it’s a somewhat simplified version of the real sport with Nintendo’s extensive cast of characters as the players, and with the addition of power-ups and hazards from the Mario platformers. For example, when Mario is pitching, he has the option to throw a fireball; Chain Chomps and Piranha Plants catch unwary outfielders, and so on.  In Exhibition Game mode you’re free to choose your roster from all the available characters, each of whom has their own strengths and weaknesses (as the designated everyman, Mario is the best all-around player; “name” characters have their own personalities and specialties, with Goombas, Shy Guys and other background characters filling out the rosters). In Challenge Mode, you start as one of five team captains (Mario, Peach, Donkey Kong, Yoshi, and Wario) and play the other captains’ teams in succession: beating them gives you the option to recruit players from their lineup, something that is essential to beat Bowser in the championship game. There are six different ball parks, reflecting the personalities of the team captains; only Mario’s could be considered “standard:” Princess Peach’s park includes floating question mark boxes which, if hit by the ball, reveal bonus stars (which in turn can be used for power-ups); Bowser’s park is set inside an active volcano, so outfielders can be thrown off by periodic earthquakes or follow a ball into open lava pits.

Donkey Kong unleashes his “Banana Ball” pitch in Mario Superstar Baseball

Unlike in the other baseball games covered here, the fielding side has the advantage, although the computer A. I. still sometimes sends fielders off in baffling directions.  Just as in the real game, pitching makes a big difference in MSB, so I like to use Peach and Waluigi, the two best pitchers; they’ll even get tired if you keep them in too long, so you have the option to swap positions mid-game.  As you can tell, this is one I come back to frequently; it has a great soundtrack and they get the characters right, and the game is just streamlined enough, but the bottom line, I guess, is that I like my sports simulations to have a touch of fantasy.  Mario Superstar Baseball was followed by Mario Super Sluggers for Wii, and a sequel for the Wii U has been announced.

And if that’s not enough, here’s three and a half minutes of virtual baseball over the years:

Maybe I’ll get to some of them in a follow-up column; in the meantime, share your favorite video baseball in the comments!

The Ups and Downs of the “Fashionable Foghorn:” Orphans of the Orchestra, Part Two

In 1925, violinist Ernest LaPrade wrote a charming children’s book entitled Alice in Orchestralia, in which a young girl travels to a magical land of talking musical instruments.  Although obviously modeled on Alice in Wonderland, the book belongs to the same didactic tradition as Britten’s later Young Person’s Guide to the Orchestra and Prokofiev’s Peter and the Wolf, introducing young readers to the standard instruments of the orchestra and their roles.  (In fact, one could easily imagine Alice in Orchestralia being turned into a narrative concert piece like Peter, with the only drawback being that the book features even less plot than the similar Tubby the Tuba.)

At one point, Alice encounters a lonely outsider camping at a fork in the road between the villages of the woodwinds and brasses:

“Why do [the brass] turn you out?” she asked.

“They claim I’m a wood-wind instrument, because I’ve got a reed like a clarinet, and they say I ought to go and live in Panopolis.”

“Then why don’t you?”

“Oh, I’ve tried to, time and again, but it’s no use.  The wood-wind instruments say I belong in Brassydale, because my body is made of brass.  So at last I got this tent and pitched it here, halfway between the two villages.  It’s damp and rather lonely, but at least they can’t turn me out of it.”

After a little more discussion, the loner reveals that his name is Saxophone.

Alice in Orchestralia, illustration by Carroll C. Snell

Nowadays, there is little dispute that the saxophone is a genuine woodwind instrument (using historian Curt Sachs’ terminology, a “single reed aerophone,” like the clarinet), the method of sound production being more important than the material from which the body is made.  In fact, if the saxophone’s metal construction were truly disqualifying, one would also have to evict the metal clarinets and oboes that have been experimented with over the years, not to mention the flute.

There is also no question that the saxophone (or rather saxophones, in several sizes) has earned its place as a recognizable and easily available instrument, at least in the sizes in common use.  Unlike the ophicleide it is hardly obscure, and unlike the harpsichord it has never really gone away since its invention.

Still, the saxophone’s dual nature has been problematic since Adolphe Sax patented it in Paris in 1846.  Most texts point out that Sax combined the clarinet’s single reed with the oboe’s conical bore, resulting in an easy-to-blow woodwind with a simplified fingering (the clarinet’s cylindrical bore causes it to overblow at the twelfth rather than the octave, resulting in a more complicated fingering pattern); historian Anthony Baines, however, speculates that Sax may have hit upon this combination by attaching a bass clarinet mouthpiece to an ophicleide—both instruments were specialties of his shop—creating a true woodwind-brass hybrid.  Likewise, its brass construction and wide bell give it a powerful tone that blends equally well with brass or woodwinds, so it’s not unreasonable to consider it a bridge between the two groups.
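As an aside for the acoustically inclined, the overblowing difference falls out of idealized pipe models: a cylinder closed at one end (a fair approximation of the clarinet) resonates only at odd harmonics (f, 3f, 5f . . .), so its first register leap is a 3:1 frequency ratio, while a complete cone (the saxophone and oboe) resonates at the full series (f, 2f, 3f . . .) and leaps at 2:1.  A quick back-of-envelope check in Python, my own illustration rather than anything from the sources cited here:

    import math

    def semitones(ratio):
        """Convert a frequency ratio to equal-tempered semitones."""
        return 12 * math.log2(ratio)

    # Closed cylinder: first overblown mode is 3x the fundamental.
    print(round(semitones(3)))  # 19 semitones: an octave plus a fifth, the twelfth
    # Complete cone: first overblown mode is 2x the fundamental.
    print(round(semitones(2)))  # 12 semitones: the octave

Covering nineteen semitones before the fingering pattern can repeat, rather than twelve, is what makes the clarinet’s system the more complicated one.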

The saxophone was initially developed with the military band in mind, and it was quickly adopted by the French authorities for that use.  However, the qualities that made it perfect for bands—its volume, its distinctive timbre—have made it only an occasional visitor to the orchestra as a special color, despite Hector Berlioz’s enthusiastic prediction that it—or rather, an entire section of them—would become a regular part of the orchestra of the future.  (It’s often forgotten, in fact, that Sax’s original design included two families: a group in the “band” keys of B-flat and E-flat, and a group in C and F for orchestral use.  Of the second group, the C “melody” saxophone, a tenor that allowed sax players to read from scores in concert pitch, survived the longest but was out of production by the middle of the twentieth century.)

One could easily be led to believe that the saxophone’s adoption by jazz bands in the early twentieth century led to its increased popularity, but the opposite appears to be true, at least in the earliest days of jazz.  Concurrent with the rise of jazz was a fad for saxophones (and other “novelty” wind instruments) on the Vaudeville stage, led by such groups as the Six Brown Brothers (who were active from about World War I until 1933). The saxophone became (along with the banjo) a symbol of student life, as necessary to depictions of 1920s college students as the raccoon-skin coat and football pennant, and a musical shorthand equivalent to the bongos in the beatnik ’50s or sitar in the psychedelic ’60s.

Bringing Up Father, 1936

Manufacturers responded to the instrument’s popularity with a number of short-lived saxophone variants, some (like the slide saxophone) little more than novelties, others simply straightened-out versions of standard saxes.  Of greater interest is the “Conn-O-Sax,” a straight F alto with a resonating bulb on the bell, and clearly positioned as a single-reed alternative to the English horn.  The Conn-O-Sax was only made in 1929 and 1930, and examples are now very rare and highly collectible, but it has been adopted by some jazz players and by shows like Saxophobia, which specializes in demonstrating a wide variety of old and new saxes.  It is a unique instrument, and it seems that there would be a market for a modern reproduction, or perhaps even a revival by the Conn company.

The Conn-O-Sax

When the saxophone was heard in jazz of the 1920s, it was most frequently a soprano, replacing the clarinet or cornet, or a bass, replacing the tuba or string bass.  There just wasn’t room for the alto or tenor to play in the improvisatory New Orleans style without stepping on either the cornet or trombone line.  (It is for this reason that the tenor saxophone included in much post-World War II “Dixieland” sounds especially inauthentic.) It wasn’t until jazz migrated to Chicago and New York that a fad for oddball instrumental combinations, at least on record (including such eccentricities as the “goofus,” a kind of melodica*, and even “swing harp”—orchestral harp, that is, not harmonica), made room for the saxophone as a lead instrument.

Exceptions include the larger bands fielded by King Oliver and Fletcher Henderson and the “symphonic jazz” of Paul Whiteman’s orchestra, but in those groups written arrangements became necessary to corral the larger number of players.  The differentiation between soloist and accompaniment is clearer in their music, foreshadowing the swing style of the 1930s.  The saxophone’s presence thus became a dividing line between “hot” and “sweet” players, and between New Orleans purists and fans of the coming swing era: some of the harshest criticism comes from jazz historian Rudi Blesh, who as late as 1946 bemoans the replacement of the trombone with the saxophone in the Chicago style in his New Orleans-centric history Shining Trumpets: “For even an inferior trombone breathes new life into the music which the fashionable foghorn, the saxophone, had murdered.”

The saxophone fad eventually gave way, as all fads must, but not before the association between the saxophone and jazz had become permanent.  Even before its versatility and technical fluency made it a natural vehicle for such giants as Coleman Hawkins, Charlie Parker, and John Coltrane, the saxophone became an internationally recognized symbol, embraced by energetic youth and reviled by totalitarian governments.  It’s no wonder: the saxophone may have started out with the body of an ophicleide or bass clarinet, but its shape is unique, and in profile it makes a perfect logo.  As Czech novelist Josef Škvorecký writes in his essay “Red Music,” both the Nazis and the Soviets sought to root out the saxophone (replacing it with the cello in most cases), but for opposite reasons: to the Nazis the saxophone’s association with an African-American musical form made it suspect (even before that, Germany had been one of the few nations to exclude the sax from its military bands); to the Soviets the hybrid nature of the instrument was somehow “bourgeois,” not of the people.  Ultimately the saxophone has outlasted both of them.

* A favorite solo instrument of Adrian Rollini, who was also known as the “Wizard of the Bass Sax.” Rollini was truly a renaissance man of offbeat instruments.

Orphans of the Orchestra, Part One

An ophicleide

Pictured above is an ophicleide, an obsolete wind instrument from the early nineteenth century.  It was played with a cup-shaped mouthpiece like a modern brass instrument, its length comparable to that of a trombone or euphonium, but instead of valves it had fingerholes and mechanical keys like a woodwind.  The ophicleide was just one of several instruments built along these lines, including the keyed bugle and the picturesque serpent (which predated the ophicleide as the bass member of the family: “ophicleide” literally means “keyed serpent”).  They filled the need for loud brass instruments that could play chromatic pitches instead of the limited set of notes available to “natural” brass like the bugle or hunting horn, especially in outdoor settings.  Before the invention of valves in the nineteenth century, only the trombone had such a capability.  The keyed brass filled that niche, but imperfectly: when the side-holes were opened, the acoustics of the instrument were compromised, and the sound was something like a tuba springing a leak.  Once valves were perfected and widely manufactured, it was all over for the keyed brass: the ophicleide gave way to the tuba, the keyed bugle to the cornet.
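To make the limitation concrete: an idealized natural brass instrument can sound only the harmonic series of its tube (a sketch, with f₁ the fundamental of the full tube length):

\[ f_n = n f_1, \qquad n = 2, 3, 4, \ldots \]

Adjacent harmonics lie a fifth apart between the second and third, a fourth between the third and fourth, and so on, so the gaps close up only in the high register; that is why Baroque trumpeters had to climb into the treacherous clarino range to play stepwise melodies, and why keys or valves were needed to fill in the gaps lower down.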

The nineteenth century was a period of great upheaval in instrument design.  The era was dominated by both outright invention and improvement of existing instruments, often a matter of updating historical designs to meet the demands of new music and the giant concert halls in which it was performed.  Violins dating from the seventeenth century were frequently rebuilt with longer necks and fingerboards to increase the string tension (and thus volume); bridges were raised; the modern concave bow replaced the old convex bow, again in the name of greater focus and projection; gut strings were replaced with more reliable metal-wound strings.  Changes like these were largely invisible if one only examined scores; the advance of musical technique on the players’ part would be obvious, but it was still possible to play the music of Bach or Corelli on the updated strings.  In the case of Bach, his music had been largely unknown until its revival by Felix Mendelssohn and others in the early nineteenth century, so there was little concern over whether modern performances sounded as they had in his day.  In any case, it was common to rationalize that Bach would have taken advantage of modern developments had they been available to him: it wasn’t called the century of progress for nothing.

Still, as tempting as it was (and often still is) to think of music in evolutionary terms, “survival of the fittest” didn’t always mean what its proponents thought it did.  Technological superiority didn’t always lead to success in the marketplace or long-term artistic change.  We often describe the sections of the orchestra as instrumental families, and a historical chart of instruments’ development very much resembles a family or evolutionary tree.  In the case of music, however, the “environment” to which technological innovations respond includes cultural attitudes, aesthetics, and in some cases the whims of artists.  It can take years for new inventions to find a foothold, and some never do at all.  As with any other technology, the history of musical instruments is one of invention and innovation colliding with social use and craft tradition.  Change is often slow, and the repertoire composed for an instrument may be enough to keep it in use despite acknowledged difficulties.  Just as some argue that Betamax was superior to VHS, or that QWERTY wasn’t necessarily the best arrangement for typewriter keyboards, instruments are adopted and thrive for reasons that sometimes go beyond their utility.

The double chromatic harp, a design that failed to catch on. Source: Metropolitan Museum of Art

This is especially true in the orchestra.  New instrumental technology is sometimes rejected for being too radical; I won’t generalize about the conservatism of musicians, but suffice it to say that most classical musicians have a deep, lifelong investment in the traditions of their instrument, as well as in the literature and institutions of concert music.  Changes in the way those instruments are played do occur, but only after long and careful evaluation, sometimes over generations, and they frequently divide performers over the worth of competing methods.

More importantly, styles change, and sounds that are valued in one era become tiresome or obnoxious to the next.  During the Middle Ages in Europe, for example, double-reed instruments and bagpipes were very prominent.  Trumpets, their bells decorated to look like dragons or other beasts, often had tongues soldered into the bell that would vibrate when played, giving an extra buzz to the sound.  Some of the prominence of double reeds is due to their relative volume—even into the classical period they were among the loudest instruments available, especially for outdoor performance—but there was clearly an aesthetic that favored the bright and nasal, and the use of sympathetic vibration fit well with simple drone-based harmonies.

It’s unwise to count an instrument out too soon: by the end of the nineteenth century, the harpsichord was considered dead, replaced by the piano, and there was nothing unusual about performing the music of J. S. Bach on a modern concert grand.  Gradually, the harpsichord returned to prominence as the “early music” movement took hold, and not only as a vehicle for historically informed performance: new works were composed for it that took advantage of its dry, tinkling sound (a sound which, not coincidentally, now fit the reigning neoclassical aesthetic better than it had the sumptuous and overpowering orchestration of the Romantic era).  Even so, the earliest proponents of the harpsichord carried with them assumptions born of the nineteenth century.  Wanda Landowska, a vocal proponent of original intent (“You play Bach your way, and I will play it Bach’s way,” she once said), performed on an iron-framed harpsichord built for her by the piano manufacturer Pleyel, and the sound is correspondingly huge, fit for the kind of large concert halls that Bach never knew, but which were standard by the beginning of the twentieth century.

In the end, one of the few composers to use the ophicleide extensively was Hector Berlioz, who included it in his Symphonie Fantastique and other scores.  (Berlioz was an early adopter, enthusiastically seizing on new and improved instruments to expand his orchestral palette; perhaps tellingly, he was one of the few Romantic composers who was not himself a virtuoso with a strong investment in the established order; like Wagner, he made the entire orchestra his instrument.)  Today the parts are generally played on tubas without sacrificing much of Berlioz’s vision.  However, hearing the Dies Irae section of the Symphonie played on ophicleides, as in this recording made by John Eliot Gardiner with his Orchestre Révolutionnaire et Romantique, makes it clear that there is still a difference.  Such instruments may be historical curiosities, but they need not be forgotten entirely.

In my next installment, I’ll take a look at an instrument that exemplifies many of my above points about invention and tradition: the saxophone.

Piano-Playing Pair Provides Powerful Performance: I review the Wichita Symphony

Ravel: Daphnis and Chloe Suite No. 2; Poulenc: Concerto for Two Pianos; Stravinsky: The Rite of Spring

I’m happy to say that there were quite a few younger people at Saturday’s Wichita Symphony Orchestra concert, though there were enough empty seats that there is room for more.  Perhaps they were persuaded by the WSO’s aggressive new ad campaign (I’m particularly taken by the suggestion that activities like jai alai will help audiences prepare for the heart-pounding excitement of a symphony concert); I saw several take advantage of the WSO’s $5 student rush tickets, one of the best-kept entertainment secrets in town.

Either way, I’m not inclined to blame the newcomers for the repeated interruptions from cell phones during the concert; in my experience, older concertgoers are equally likely to forget to silence them.  I don’t believe the concert hall should be a mausoleum: Century II has already made the decision to allow food and drink in the hall during performances, probably in the interest of creating a more welcoming environment, and I’m sure it helps the bottom line.  Even so, one could sense the audience’s frustration when Maestro Daniel Hege waited for the ringtones to stop before beginning The Rite of Spring (the concert’s second half), and one still started chirping during the lightly scored woodwind introduction.  At best it’s an annoyance to other patrons; at worst it can interfere with the performance itself.  The problem isn’t a lack of education or an influx of newcomers to the Symphony; it’s a matter of simple mindfulness: if the Warren Theatres can police cell phone use at the movies for the sake of a better experience, surely a live music venue can do the same.

Anyway, here’s what I wrote for The Wichita Eagle.