Monday, August 31, 2015

a few links for perusing, mainly on the A&E side of things

These days we’re afflicted with not a scarcity but a glut of biographical information about musicians

I know that the firewood cutting scene in Age of Ultron leads to Civil War because I know what Civil War is about. I know because Feige got on a stage and told us. And even though the Avengers sequel didn’t actually show us any real strife in the superhero community, we’ve been told that it exists because the next movie is about a superhero Civil War. We also know that the ramifications of this event will be extremely temporary, because we’ve also been told that on May 4, 2018 Thanos finally shows up to wreck everyone in Avengers: Infinity War. He’s going to wreck everyone by combining all of the Infinity Stones into a gauntlet, even though we haven’t been shown the extent of this plan in the movies. We were told that it would happen in 2008 when Marvel put the Infinity Gauntlet prop on the floor of Comic Con. (HT DZ)

Ever wonder if it was possible for a Broadway show to assimilate hip-hop and celebrate the life of Alexander Hamilton? Read on ...

Sticking with arts & entertainment, Wes Craven is dead

Steve Hendricks at the New Republic on why you shouldn't be in too big a rush to assume the worst about names on that Ashley Madison list; someone else could have put you there without your knowing it


The chief question my wife and I have is: How did I end up in Ashley Madison’s dump pile at all? We have yet to find out, but we have several theories. (Happily, my wife did not for a minute think me unfaithful, just as I would not have doubted her; it’s that kind of marriage.) One possibility is that identity thieves put me in the database. It’s well known, of course, that many of the female members’ profiles on Ashley Madison were fabricated, because the site’s users were disproportionately male. But it is less widely reported that some of the email addresses attached to those accounts may well be the email addresses of real people; addresses can be bought in bulk for around 20 cents each from marketing companies. It is not even necessary for the appropriated addresses to have a woman’s name in them—a man’s name will do just as well—because the addresses attached to the fake profile can’t be seen by anyone but the account holder. Some men (and their spouses) have reported their emails were used in just this way.

There's more where that came from, of course, but it is true that contact information can be bought in bulk fairly cheaply.  Hendricks also mentioned spite-listing, but you'd have to have ticked off a lot of people for them to take revenge by adding you to an AM list.

Hendricks mentions that at this point finding out if you're even on that list would require you to get access to stolen data, which you may want to avoid having on your conscience.

Miya Tokumitsu on the illusion of control through curating our online consumer decisions.
... The feeling of control that self-proclaimed curating can provide is in direct contrast to the loss of control unleashed by the very neoliberal policies introduced in the last decades. Flat wages, dwindling public services, and a relatively weak labor market have left many people disempowered and politically alienated. For all the significance placed on “picking stuff” in the age of curation, one thing people are resolutely not picking is political candidates. In last year’s election, voter turnout was 36.4%, a 72-year low. On the other hand, re-arranging “curated” compilations, be they stock portfolios or mood boards, can provide a much craved sense of power, excitement, and importantly—comfort— that comes from self-determination.

per the friends at Mockingbird, this is definitely one of those "illusion of control" pieces.

Jed Perl piece at the New Republic on how liberals are paradoxically killing the arts by requiring that they be political

... The trouble with the reasonableness of the liberal imagination is that it threatens to explain away what it cannot explain. Nowhere in the past seventy-five years has this tendency to bring art’s unruly power into line with some more general system of social, political, and moral values been more pronounced than in the efforts of scholars, critics, and the public to reconcile their admiration for the experimental adventures of twentieth-century literature with the authoritarian, fascist, and anti-Semitic views of some of the greatest modern writers. Let me again emphasize that I believe there is no question that many of the views of W.B. Yeats, T.S. Eliot, and Ezra Pound are repugnant and ought to be regarded as repugnant; and in the case of Pound, his actions during World War II, when he broadcast on behalf of Mussolini, surely rise to the level of treason. What interests me here is the insistence, when treating these admittedly extreme cases, on some fundamental link between artistic and political or social expression. I know why that link is emphasized. The rational mind, with its desire for logical equations, is upset by the idea that a great artist can be a bad person, and would perhaps prefer that the art also look bad, or at least be tainted. And behind this desire for a logical equation is the liberal imagination’s refusal to believe that art can lay claim to some irreducible mystery and magic.

That Eliot and Pound were able to articulate the debt all contemporary artists have to the past yet were royalist/fascist in their overt sympathies is a reminder that being avant garde in the arts is hardly any assurance of being progressive in any other capacity.  Nor, by turns, is being a traditionalist in politics or even religion necessarily a consistent indicator that a person will be traditionalist in the arts.  Olivier Messiaen was a fairly conventional Catholic in his faith and practice but the music he wrote was an inspiration to the post-World War II avant garde.

But the temptation to insist that a great artist must also be a good person is likely to persist.

over at The American Conservative, a proposal for how the religious right and the libertarian wings splintered from the GOP mainstream

Over at TAC a theory is proposed that reminds Wenatchee The Hatchet of a theory floated by D. G. Hart in his book on the history of American evangelicalism from Billy Graham to Sarah Palin--the Reagan Coalition was a one-off, non-replicable phase in which traditional conservatives, libertarians, social conservatives and what became neoconservatives had a symbolic person to rally around.  But once Reagan was out of office that coalition began to fracture rapidly.

Meanwhile, on the blue state side of things a comparable fracturing seems to have been happening between the centrist and progressive wings in the Democratic party, but perhaps not in the same way or to the same degree (that's not from the TAC piece, more an impression WtH has rightly or wrongly).

Meanwhile, the Religious Right, it's proposed, has sunk the odds of Republicans winning the Oval Office by splitting what could have been a more unified voting bloc.
...There were plenty of blue-state Republicans in the days of Goldwater and Reagan, of course, and even back then the party had distinct factions of conservatives and liberals—“Rockefeller Republicans,” as they were called. Why, then, did conservatives succeed in 1964 and 1980 but never again?
The answer lies in a development that appeared for the first time in 1988: the emergence of a distinct religious right or social-conservative candidate. That was Pat Robertson, who carried four states and won a little over 9 percent of the overall primary vote—behind Bob Dole’s nearly 20 percent and George H.W. Bush’s 68 percent. Robertson’s modest campaign, however, was like a hairline crack in the foundations of the political right. Since then in every election there has been a strong social-conservative contender in the Republican contest: Pat Buchanan in 1992 and 1996, Mike Huckabee in 2008, Rick Santorum in 2012.
The development of the religious right or social conservatives as a bloc discrete from conservatives generally proved to be the undoing of the right in Republican presidential primaries. But this differentiation into two distinct strands of conservatism, represented most of the time by competing avatars in GOP primaries, was not the result of hubris or short-sightedness on the part of religious conservatives. On the contrary, it represents a real philosophical divide that can be seen in the different emphases, attitudes, and even positions taken by social-conservative champions vis-à-vis other conservatives.
It’s not a coincidence that this ideological and political differentiation expressed itself immediately once the Reagan era had reached its end: before Reagan, an all-purpose conservative represented to the religious right—whether organized or nascent—a candidate who might give them the kind of country they wanted. Goldwater’s defeat avoided the disillusionment that victory would have brought. Reagan, however, showed that a general-purpose conservative once elected could only go so far: he appointed Anthony Kennedy and Sandra Day O’Connor to the Supreme Court, after all. Reagan himself did not come in for much blame, but the spiritually diffident conservatism that he and Goldwater represented—neither was more than nominally religious—was no longer enough.

Christian conservatives may no longer be the only ones who have this problem. Libertarians have had cause to celebrate in recent elections, as they too seem to have emerged as a distinct force in the GOP, with presidential standard-bearers of their own in the form of Ron Paul and Rand Paul. But here again, what this differentiation suggests is that libertarian Republicans have a vision distinct from and to some degree incompatible with—unsubstitutable for—that of other conservative Republicans. When religious conservatives came to this awareness, the results proved ruinous as far as winning the GOP presidential nomination went, for themselves and for the older Goldwater-Reagan conservatives. Will libertarians avoid the same trap?

In an irony that might be worth blogging about later, Jonathan Haidt wrote that he began to explore moral intuitions and social reasoning because he thought the liberal/progressive side kept losing by failing to sufficiently motivate its voting base. 

Sunday, August 30, 2015

more music blogging on the schedule, getting back to Ferdinand Rebay, new album out from Eudora Records, review eventually on the way

We're slowly getting back to things animation and music here at Wenatchee The Hatchet and soon, perhaps, we'll get back to blogging about the music of Ferdinand Rebay again. There are seven guitar sonatas to blog about as time, energy and resources permit.

There's also a new release from Eudora Records we'll be hoping to review here before too long.

huh ... Vin Diesel teases "don't be surprised when you hear WB announce the sequel" for ... the Iron Giant?

No disagreement with Vin Diesel that one of his earliest and favorite characters to play was the Iron Giant from The Iron Giant.  Still ... a sequel?  What was there in the film that was going to catalyze a sequel? 

Still, I admit I'd probably go see a sequel out of curiosity, just like I'm going to see Incredibles 2.

Not all sequels must automatically be construed as jaded cash grabs.  Still ... one wonders ...

Trivia--someone who worked on the cel side of things for that film was Lauren Faust, who would later go on to work on The Powerpuff Girls, work with Craig McCracken again on Foster's Home For Imaginary Friends, and later helm a reboot of My Little Pony. I've respected Faust's work and collaborative activity enough that I watch MLP from time to time.  Favorite is easily Luna.

Ted Gioia--are we all mistuning our instruments and can we blame the Nazis?
Is it really possible that musicians have been tuning their instruments incorrectly during my entire lifetime? Has my piano tuner (perhaps a member of the Illuminati) been duping me all these years? Is the tuning app on my smartphone a kind of cultural malware designed to destroy music as we know it?
According to true believers, music would generate positive healing energy if A were tuned to 432 Hz. This tuning, they claim, is more aligned with the cosmos and the natural world. “The number 432 is also reflected in ratios of the Sun, Earth, and the moon as well as the precession of the equinoxes, the Great Pyramid of Egypt, Stonehenge, the Sri Yantra among many other sacred sites,” explains author Elina St-Onge. And who do you want to bet on: Stonehenge and the Great Pyramid or Goebbels and the Nazis?

These conspiracy theorists aren’t entirely batty. The tuning of instruments has always been filled with compromises and influenced by competing paradigms. Listeners take for granted the conventional “well tempered” tuning of modern instruments, but this itself was a controversial innovation in its day—it represented a rejection of the Pythagorean heritage and Renaissance thinking on music. But it also made possible the chromatically-rich compositions of Bach and his successors.
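The difference the 432 Hz crowd is arguing over is easy to quantify. A quick back-of-the-envelope sketch (my own illustration, not anything from Gioia's piece): in equal temperament each semitone multiplies frequency by the twelfth root of two, so retuning the A reference shifts every pitch by the same factor of 432/440, roughly a third of a semitone flat.

```python
import math

# Equal temperament: each semitone multiplies frequency by 2**(1/12),
# so a pitch n semitones from the reference A4 is A4 * 2**(n/12).
def pitch_hz(semitones_from_a4: int, a4: float = 440.0) -> float:
    return a4 * 2 ** (semitones_from_a4 / 12)

# Middle C sits 9 semitones below A4.
for a4 in (440.0, 432.0):
    print(f"A4 = {a4:.0f} Hz -> middle C = {pitch_hz(-9, a4):.2f} Hz")
# A4 = 440 Hz -> middle C = 261.63 Hz
# A4 = 432 Hz -> middle C = 256.87 Hz

# The whole shift, in cents (100 cents = one equal-tempered semitone):
cents = 1200 * math.log2(432 / 440)
print(f"{cents:.0f} cents")  # about -32 cents
```

The "well tempered" compromises Gioia mentions are a separate matter from where A sits: equal temperament keeps every key usable by making every interval except the octave slightly impure, whereas the older Pythagorean approach stacked pure 3:2 fifths and let the accumulated error pile up in one place.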

Andrew Durkin's self-defeating manifesto

Durkin read the review of Decomposition here at Wenatchee The Hatchet and left a few comments. 
So firstly, thanks for coming by and commenting.  It's been great for the blog to get comments that aren't about the usual set of topics the blog's gotten a reputation for.  It's also especially fun to spark any interaction about music instead of ... the usual set of topics.

It probably comes as no surprise to Durkin that we come from different perspectives.  He's mentioned not having the desire to engage in a lengthy interaction, which is fine.  Whether or not Durkin read all the posts tagged with "decomposition" is less clear.  However, he shared a few things in comments that cleared up a couple of things about his book, in a way that makes it easier to understand why I have come to consider his entire enterprise sententious and self-defeating.

We can quote a few sentences from the start and end of his comments that may suffice to distill his philosophical approach.
Here’s what I think you’re missing in much of your review: I don’t believe concepts can exist outside of the ways we write or speak about them.
Sorry, but I don’t believe in artistic greatness. I believe that all we can do is love the art we love.

It seems necessary to clarify what I began to have doubts about regarding Durkin's book.  He can subscribe to the idea that concepts don't exist outside the ways we write or speak about them, but I'm coming more from the side of playing with the idea that how we even think about music is circumscribed by the linguistic parameters through which we learned about music or literature.  We can be hamstrung by what we are or aren't able to think through based on the language through which we learn to think about music.  Which may be a tolerable transition to questions raised about Durkin's engagement with the secondary literature known as musicology.

Durkin did mention not having read a couple of the Meyer books I mentioned. For those not eager to read the length of his comments, he mentioned which books he read and which ideas he referenced.  As mentioned before, I agree with Madison Heying that Durkin seems not to have read widely or deeply enough in musicology to have made some of the points he tried to make.  I'd go a step further and suggest that Durkin wasn't widely or deeply enough read in the last 40 to 80 years of musicology to have made some of his points.

Durkin is more than free to insist that concepts don't exist apart from the ways we write and speak about them.  But the arguments in Decomposition don't derive much force from a premise like that.  If anything, a philosophical premise like that highlights that his book would have benefited from formal and analytic musicology.

When I wrote earlier that if Durkin was going to take aim at demythologizing touchstones in the Western canon he needed to show he knew his work, Durkin may have taken that as a sign he needed to write his book.

Well, the problem in the Ellington/Beethoven dyad earlier in Durkin's book is that Durkin clearly demonstrates direct familiarity with Ellington's music and writings about Ellington's music. Then we get to Beethoven and Durkin shifts to summarizing DeNora on what other people said about Beethoven.  By the time I finished reading Durkin's book, everything he said about Beethoven seemed like it could have been written by someone who had consulted a wide variety of the secondary literature ... without having bothered to study or play a single piece by Beethoven.  Or listen to one, for that matter. It's not just that I and others have doubted the depth and breadth of Durkin's readings in musicology; over the last year of considering the book's arguments and content I've come to doubt that Durkin even knows the classical repertoire of Beethoven or Bach.

Take the chapter where Durkin talked about "The Riff".  It's shooting fish in a barrel to explore how Russian folk songs got worked into Beethoven's string quartets for his Russian patrons. It's easily known by anyone who's immersed themselves in the Goldberg variations that German folk songs got worked into the Quodlibet.  For that matter, for people already familiar with the music, it's not even that difficult to draw a line from the 9th century Pentecost chant Veni Creator Spiritus through Lutheran hymnody for Pentecost to the subject of the fugue in Bach's C major violin sonata.  Had Durkin done for Beethoven (or Bach) what he'd done with Ellington, discussing where the riffs came from and how a composer or arranger can use a variety of riffs, not all of which the composer originated, to create vital and interesting music, he could have drawn a historic path from the 9th century to the 18th century to show how the remixing kept going on across nearly a millennium.  That is, if Andrew Durkin were actually familiar at any level with the concert literature of "so-called classical music".

Musicology and formal analysis are precisely the bodies of literature in which scholars of music point out that Matiegka appropriated the fast finale of a B minor piano sonata by Haydn as the first movement of a solo guitar sonata.  It's how Kyle Gann can show that Mozart made use of materials he'd heard from Clementi.

Durkin had plenty of time, both in his book and in comments, to explore and explain the lineage of what he calls decomposition across the classical canon.  He never really did that in his book, and over at his blog Ugly Rug, eleven years' worth of blogging has not revealed any particular familiarity with Beethoven's work, or Bach's.  This doesn't mean Durkin doesn't know the music, but eleven years of blogging and a published book are plenty of time to demonstrate a working knowledge of the classical side of things.  A person could invoke the B natural debate about Beethoven's Hammerklavier without so much as having heard the piece.  Reading the secondary literature gets that knowledge taken care of.

So Decomposition comes off as a book written by someone who has written about what other people have written and said about people like Beethoven without revealing, at any point, any direct familiarity with Beethoven's work.  If Durkin wanted to show that Beethoven got ideas from folk music he could have.  If Durkin wanted to highlight the insoluble debate over which ending for the Op. 130 string quartet is the preferable of the two authentically composed-by-Beethoven endings, he could have done that.

Classical music has a centuries-old pattern of composers appropriating the ideas of other people, saying whose idea it was, and running with it.  Martin Luther would adapt Gregorian chants into vernacular hymns. Bach would employ isometric adaptations of sixteenth-century hymns shifted from their modal form into a major/minor key system. Haydn could take Polish folk songs and work them into the trios of his string quartets.  Matiegka wrote a set of variations on a lied by Haydn in his Grand Sonata II. Brahms wrote a set of variations on a theme by Haydn (or Handel, or Paganini).  Ferdinand Rebay adapted a variation form composed by Brahms into a slow movement in one of his sonatas for solo guitar.  From century to century musicology is able to demonstrate in historical terms what Durkin has described as "decomposition", and yet when it comes to the classical side of his point-making Durkin not only seems to come up short on the secondary literature, he at times makes it hard to know whether he even knows the primary literature.

Let's take the Ellington/Beethoven dyad again.   Demythologizing Beethoven is shooting fish in a barrel for anyone who knows the primary works.  The problem is that Beethoven's not necessarily the best case study as a contrast to Ellington.  Durkin is bothered by the mythologizing tendencies of language about the singular genius and the authentic instantiation of a musical work. Durkin could have consulted Richard Taruskin's tree-killing two-volume survey of the armies of folk tunes Stravinsky appropriated for his work, and of how Stravinsky made a career of self-mythologizing.  What makes the mythologizing tendencies in the "lone genius" and the "authentic" parts of music-making pernicious is that the people most apt to deploy this language are the creative people themselves. Of course, since we live in an era of popular culture that is practically excluded from the public domain, people will feel it's dangerous to admit artistic debts in pop music.  The funny thing here is that in classical music these debts are admitted so readily and casually that, had Durkin known any of the classical music warhorses and the lineage of those works, he could have suggested that the different eras of music have unique things to share.

Durkin tends to focus on the ways that copyright regimes create problems (and those are considerable in a number of ways), but Durkin read Teachout's biography of Ellington, and so he could have highlighted that creativity can involve working AROUND restrictions, the way Ellington urged his son Mercer and Billy Strayhorn to create reams of music during the 1941 radio boycott of ASCAP music.  Ellington was part of ASCAP, if memory serves, and found himself in a sticky spot when it came to composing music.  But his son and Strayhorn could work within and around those restrictions, and this was an integral element of the prodigious output of the Blanton-Webster period. Duke would suggest chord progressions and then encourage Mercer and Strayhorn to develop the rest.

I think a contrast between Ellington and Stravinsky would be more instructive, because both composers occupied the same century of creative activity and in both cases the gap between myth-making and reality has become easy to document. As it stands, Durkin may or may not have any familiarity with Taruskin's fantastic survey demythologizing Stravinsky. We have plenty of documentation by now of how, as I put it in an earlier post, even the most apparently solitary artistic person is creating art in a way that is fundamentally a social activity.

If Durkin had the musicological background and interest, demonstrating how riffs get reshaped across a millennium of musical activity would be pretty easy to do.  It could have demonstrated the potential vitality and applicability of "decomposition". As it is, "decomposition" is more apt to become Durkin's "that is so fetch".

Now Durkin's certainly able to insist that concepts don't exist apart from the ways we write or speak about them.  This is a point at which Durkin might have benefited from reading some of the work done by social scientists and biologists on heuristics, cognitive development and the like. What if, for instance, concepts are not "just" in written or spoken expression but are circumscribed by the language(s) we learn from childhood?  It may be true that a concept doesn't exist apart from what you write or say, but before you can say or write a sentence the thoughts are in your head. Does a concept only take shape once it is written or spoken, or can a concept be formulated by a thinking and perceiving mind before it is articulated?

To give an example, babies don't have the language with which to communicate conceptually, but just because a baby doesn't have mastery of a language yet, does that mean the baby is incapable of formulating a concept that we adults could describe as "I'm hungry"? Is the baby not hungry because it can't speak the words "I'm hungry" or write anything?  Now maybe a person hearing the baby crying might have to rely on senses other than hearing (like smell) to determine that maybe that baby isn't hungry but has had a diaper blowout. The question at hand is whether the baby somehow doesn't have a concept of being hungry or of having a diaper blowout for lack of linguistic categories with which to express the concept or, to some degree, even perceive what hunger is.  Does hunger not exist for the baby because he or she lacks a linguistic framework from which to identify the sensation we tend to call hunger?

It's possible that the language in which we think circumscribes the range of concepts we can write or talk about, just as it's possible that concepts don't exist apart from the way we write or speak about them.  This would suggest that, if anything, Durkin would have a reason to immerse himself deeply and broadly in musicology and formal analysis.  Yet as Heying has noted, Durkin's attitude toward musicology and formal analysis seems to be dismissive.  If so, that's a shame, because if concepts don't exist apart from the way they are dealt with in language via writing and speaking, then that would elevate the significance of formal analysis; in that sense, there might be no music apart from the reified expression of music in a fixed form.  And if there is a "music of my mind" that can exist independent of formal expression, then Durkin has devoted chunks of his book to doubting the legitimacy of the ad hoc languages and lexicons developed over millennia to talk about and get to performing music, for reasons that are never particularly clear.

In the long run it's difficult to tell whether Andrew Durkin has much interest in defending the ideas in his book in connection with his eleven years of blogging, either.  In a comment Durkin mentioned the following:
Sorry, but I don’t believe in artistic greatness. I believe that all we can do is love the art we love.

This sententious assertion needs to be read alongside what Durkin's blogged over the last eleven years, which includes a fairly standard-issue bromide such as ...
...  I don’t need to be convinced of Ellington’s greatness; I already know his music saved my life. ...

This gets to the self-defeating core of Durkin's approach.  Durkin insists that all we can do is love the art we love, having cast doubt on the legitimacy and utility of the linguistic categories we humans have used since ... always to express such love, while using exactly that kind of hyperbolic language in his own expression of love for the music he loves.


In the history of humans certain quests and claims keep coming up.  The opportunity Durkin had for a manifesto (or the start of one) would have been to propose that the boundaries across the styles we so often think are distinct are ultimately permeable.  The music you love may have more in common with the music you think you hate than you might first realize.  I thought I didn't and couldn't like any country music in my teens, and then I began to listen to country and learned that the boundaries between jazz and blues (which I did like) and country (which I thought I didn't like) were more permeable than I had imagined them to be. I read interviews with Bob Dylan where he said to not just listen to him but to listen to who inspired him, and to listen to who inspired them. So I went back to Robert Johnson and went back to Ellington and Mahalia Jackson. Then I went back to Scott Joplin and learned he was familiar with Beethoven, Brahms and Bach. So I got into Beethoven, Brahms and Bach and learned they were respectively inspired by Haydn and Telemann and Schutz and Buxtehude.  I kept going further back and stopped around Leonin and Perotin and Ockeghem.  Also got around to music from China and Thailand and Japan along the way.

Durkin focuses on what he seems to think are the negatives of reification. There are, however, positives. The beauty of an age in which music is reified is that we have an opportunity to understand how permeable the boundaries across musical styles and forms are.  Where Durkin seems obsessed with the limitations of notational systems and certainties about whatever a "master" version is, that's all inconsequential to me.  Rather than lament reification, what if we take it as a given and see what is possible in light of it?  One opportunity is to take a computational approach to the recurrence of riffs across multiple styles in different eras. There's no reason a person can't hybridize the invertible contrapuntal idiom of the 18th century with the vocabulary of American blues players.  To borrow a bit from Yoda: no, is not different, only different in your mind. If we're to have music beyond category we may not enjoy formal analysis at first, but the beauty of formal analysis is that it can, despite its drudgery, lead us to discover how permeable the boundaries are between early Romantic 19th century guitar sonatas in Spain and ragtime in the early 20th century.
Take, for instance, this complaint Durkin has made about modern pop:
My friends complain about modern pop all the time. I wish I could evaluate it in aesthetic terms. But I feel like I can’t even hear it. It sounds like money to me. I hear the money that went into the production. I hear the money that went into the promotion. I hear the money that is being exchanged every time it is performed. I hear the money that is expected as a kind of birthright. Lord help me, I can't get past the money.

I dislike contemporary Christian music because when I hear it I feel like I'm looking at a grainy black-and-white photocopy of a postage stamp reproduction of a Thomas Kinkade painting.  But if Durkin's going to take his critique of authorship and authenticity seriously, he'd have to say that there's no arguing this music is inauthentic, or that it's bad, based on any question of authorship.  Dismantling the ideologies of authenticity and authorship should have led Durkin to a point where he could celebrate at least some modern pop music regardless of the money issue.

It's not enough to question the viability of "authenticity" and "authorship" as ways of praising the music we love, it's far more important to dismantle them as categories through which we vent about music we hate.  It's when Durkin complains that he can't hear modern pop music as music that he reveals maybe he can't even completely commit to the ideals he has tried to formulate in his own manifesto.  And if he can't go that distance himself, should the rest of us bother to follow?

on riffs that keep returning, the mutation of a Pentecost hymn over the centuries

J. S. Bach fans already know this (probably), but one of the fascinating things about the Baroque era was how isometric rhythm changed the asymmetrical phrase into the symmetrical phrase; the odd rhythm into an even one; and the modal chant into a major or minor melody.

What's interesting about the fugue from the C major violin sonata is that it's a "church" fugue.  The subject of the fugue is, as Bach scholars can tell ya for free online, derived from a 16th century Lutheran hymn for Pentecost.

What's also interesting about that melodic contour is that as, er, Pentecostal tunes go there's a case to be made that the Lutheran hymn has a melodic profile that is an outworking of the old 9th century Gregorian chant Veni Creator Spiritus.

There are differences, of course, but that tune that starts by oscillating around what today would sound like the fifth of a chord is still there (not that there were major or minor keys in the 9th or 15th century; tonality as we've come to think of it wouldn't emerge for centuries).

Thursday, August 27, 2015

Kyle Gann's proposals about how and why graduate school studies transform people into bad writers
If the purpose of American grad school, as I’ve long maintained, is to teach young people to write badly, then the function of intellectuals in American life is to paralyze discourse.

Of course, the Swift Boat Graduates always have a point: a lot of complex things go on in the brain in response to a Satie Gymnopedie, and ultimately the Encyclopedia Britannica is just a record of billions of subjective impressions upon which doubt could be cast. Those are interesting, important issues to ponder, but they are rather divorced from everyday life, and few of us can afford to leave everyday life for long. Subjective, objective, complex, simple, are all comparative terms whose absolute endpoints lie outside human experience; and if you’re going to swallow up those words into their intellectually derived absolutes, then we still need other words for the everyday meanings those words hold in conversation. What’s wrong with the Swift Boat Graduates is that they sometimes wax fascistic about disallowing naive uses of their pet words, as though once you’ve discovered a more sophisticated concept for the word, what the naive use once referred to disappears. This tendency threatens to bring musical discourse down to a grad-school level. Part of intellectual maturity is knowing when the exalted meaning is appropriate and when the quotidian meaning is just fine.
ACADEMIA TRAINS YOUNG PEOPLE TO WRITE TURGIDLY AND VAGUELY. And not only young people. Readers of this blog sometimes get upset with me that I seem so anti-academic, that I am always denigrating university culture. I love certain aspects of college life, and I am extremely pro-education, but it has to be acknowledged that academia, as it stands, has a default tendency toward inculcating pomposity in writing and, most of all, a bureaucratic avoidance of personal responsibility

Alex Ross on the long twilight of the symphony
In 1849, Richard Wagner declared, with his usual assurance, that “the last symphony has already been written.” Beethoven’s Ninth, with its eruption of voices in the finale, had, in Wagner’s view, exhausted the form and inaugurated a new age of music drama. The pronouncement went unheeded. In the decades that followed, Brahms wrote four symphonies, Tchaikovsky six, Dvořák nine. After 1900, the idea that nine symphonies represented an outer limit—“He who wants to go beyond it must die,” Schoenberg said, speaking of Mahler’s unfinished Tenth—fell away. Shostakovich produced fifteen symphonies, Havergal Brian thirty-two, Alan Hovhaness sixty-seven. As of this writing, the Finnish composer-conductor Leif Segerstam has generated two hundred and eighty-six (having passed Papa Haydn more than a decade ago, with his Symphony No. 105, “Pa-Pá, Pá-Pa-Passing . . . ”). Composers have also exceeded the seventy or eighty minutes’ duration that was long considered the maximum. Brian’s “Gothic” Symphony lasts almost two hours; Kaikhosru Sorabji’s “Jami” Symphony, which has yet to be performed, would go on for four and a half hours; Dimitrie Cuclin’s Twelfth, also patiently awaiting its première, might devour six.

Ross notes that the symphonic cycle with the best overall case for admission to the symphonic canon has been (surprise, not) the Shostakovich cycle.

And certainly I love some Shostakovich, but as far as symphonic music composed since about 1974 that people actually listen to goes, we could probably reach a small consensus that John Williams' Star Wars soundtracks have gotten a lot of play time.

So in a sense Wagner's declaration did come to pass: symphonic music eventually shifted to being a type of narrative drama ... in a way.

The Patrologist on how academic entities and companies de-public-domain ancient documents and how closed-access peer review "is run for the profit of publishers"
It’s critical editions that are the sticking point. If I read 5 manuscripts and then decide which variants to include in an edition, the current default hypothesis is that I have somehow acquired a copyright over this work. This is the practice of various monopolising bodies, whom you know well, and the overpriced and underutilised editions of ancient works they release. This is, in my view, a fairly insidious example of ‘enclosing’ the public domain. Of taking what belongs to all, and putting a fence around it and re-privatising it.

I also suspect that if put to the test, it might well fail in court. Because while there is certainly work in assembling a critical edition, and more than that, there is skilled and detailed work, there is no creative work. Nothing is added to the work, nothing remixed, nothing generated. There is no new work done. Under many countries’ copyright regimes this does not pass the standard tests for acquiring a copyright to a work. It’s about on the same level as organising word lists or printing phone books. Sheer volume of labour does not copyright make.

Which could be read as another way the academic culture of the United States and other Western post-industrial powers has continued an over-priced racket that pretends to educate more than it actually does.  That's ... possibly strongly implied when taken together with this:
... I would summarise Skinner’s concerns in the second post that in a democratised (and that’s probably not the right word) sphere, everyone feels the right to have an equal opinion, and it’s difficult to give expert opinions their due weight. The remedy is (and I’m not saying this is Skinner’s view), traditionally, to point to the process of peer-review. Publishing is the sifting and sorting process that lends publications their authoritative weight. It’s why academia is a closed shop, it’s what the PhD is for: proving you’re ready to take a seat at the secret-society of peers who know about such and such a field.
Peer-reviewed closed-access publishing is run for the profit of publishers, and it’s paid for by the unpaid labour of academics. Is rigorous peer-review a great thing? Undoubtedly. Ought it be the gate-keeper to the conversation? Probably not. We do live in a more democratised world, and although everyone probably would admit theoretically that the only guarantee that you’re reading something worthy of critical acceptance is to read it critically for yourself with the pre-requisite knowledge to evaluate it, we’re all lazy and would much rather see the imprimatur of authority and say, ‘good enough for them, good enough for me’. But the result of that is richer publishers, elitism in academia, and a circle of bias that diminishes the value of peer-review to zero guarantee of truth or quality.

Monday, August 24, 2015

riffing a bit on some ideas implied by Phoenix Preacher on the doctrines that are neglected, a musing on atonement theories

In the last few years Wenatchee The Hatchet has seen some ludicrous assertions made about which theological point Mark Driscoll endorsed over the years was supposedly the sure proof he went into bad territory. That's nonsense. The problem was where his character went.  He wasn't a Calvinist in the earlier years and although he said he was a Calvinist for many a year he may, for all we do and don't know, have only been as Reformed as he thought he needed to be in order to secure financial backing from people in that camp.

Since David Nicholas isn't alive to bear witness to why his public relationship to Mark Driscoll withered on the vine, and Mark Driscoll probably cannot be trusted to give either an accurate or honest answer about that, the question of why they parted ways may never be satisfactorily answered.  But it may be irrelevant in the long run because, as Driscoll pops up on the charismatic conference circuit, it seems a return as a charismatic without a seatbelt is where he may be heading if he decides to stage a comeback.

If Mark Driscoll announced later this year that he's an Arminian egalitarian charismatic do you think that would make him more fit for formal ministry than when he was a Calvinist complementarian charismatic with a seatbelt?  You're part of the problem if you do.

See, one of the problems with those who have attempted to deal with Driscoll just on doctrinal terms is they don't always necessarily understand the doctrines they're talking about.  Maybe they do, but sometimes they latch on to some pet doctrine they're already into and make that the reason for Driscoll's demise.  Anyone who seriously proposes that Mark Driscoll went astray for not properly adhering to the regulative principle is just a regulative principle junkie who all too often has no meaningful firsthand knowledge of what the culture was like, let alone has ever met Mark Driscoll.  For folks who are hung up about Mark Driscoll's "limited unlimited atonement" (conventionally identifiable as Amyraldianism to theology wonks), that, too, isn't really an indicator.  Prophets, priests and kings the problem?  Mark Driscoll transforming those categories into a stupid neo-Calvinist MBTI profiling tool isn't the same thing as the Westminster Confession outlining how, because Christ fulfilled the roles of prophet, priest and king, He is our perfect savior who is God and man.

Some folks might decide that if they don't endorse the same atonement theories as a guy like Driscoll that's the difference.  It's not.  Back in the 2005 atonement series Driscoll praised a dozen explanations of the atonement and said they are all necessary and vital for a fully-orbed Christian walk.  It's one of the very few things Driscoll's preached from the pulpit Wenatchee The Hatchet would agree with now. 

Of course there are basically three broad categories of atonement--the ransom/champion model; the satisfaction model (from which penal substitutionary atonement is a derivative); and the moral influence or christus exemplar model.  These are just metaphors that are gateways into reflecting on the life and work of Christ.

That "life" part cannot be overlooked.  See, some lazy and ignorant Christians who get tired of hearing PSA junkies beat that drum might be tempted to say,"Oh, well, let's reflect on the Incarnation".  Right, because why God the Son would choose to live as a human among us isn't organically tied to walking the road to Jerusalem and to Golgatha.  At the risk of getting into what might be uncomfortable territory for some Christian readers, the metaphors known as atonement theory have shortcomings.  It's why if you embrace one metaphor and reject the others what you "might" have a problem with is not the metaphor but the atonement itself, though mileage varies.

Jeffrey Burton Russell wrote five fantastic books on the history of thought about the devil.  Along the way he addressed the basic atonement theories as theodicies.  After all, an atonement theory in its photonegative form can be construed as a theory about how God dealt with the consequences of evil. The early popular theories were that Christ was our ransom and that Christ conquered Satan, sin and death.  So far, so good.  But over time Christian theologians began to wonder, what was it about defeating Satan, sin and death that necessitated the Incarnation to begin with?  Here we can begin to see, with help from Russell, that it's not ultimately possible to create a distinction between the Incarnation and the Cross.  What theologians began to worry about was that it sure seemed like Satan and death had a ton of power if Jesus had to come as a man and die.  If God were all-powerful and all-wise it would seem possible to defeat Satan, sin and death without having to go through the Incarnation or go to the Cross, at least in theory.

Now here's some layperson speculation on this riff: it's interesting to guess that the early atonement metaphors were popular in cultural and historical settings in which Christianity was not the establishment religion. It's not just that the metaphor of ransom and the metaphor of Christ as victorious warrior against our enemies would be appealing to a minority sect in the Roman empire; it's that we can get a sense that this metaphor also informed the apocalyptic literature of the time and the nature of the ethical instruction in the epistles.  As we await the final subjugation of the enemies of God we learn to live as aliens and strangers in the world in which we live.

After Constantine and after Christendom emerged a shift took place.  Thank (or blame) Anselm for formulating satisfaction theory.  The proposal was that since God, being all-powerful and wise, could conceivably have defeated evil and the devil and sin any way He wished, it was to satisfy God's own sense of justice that Jesus came in the flesh and lived and died and rose again for our benefit. This could later evolve into penal substitutionary atonement.

Thing is, if in the earlier metaphor the shortcoming was how powerful evil seemed to be that it necessitated the life as well as death of Christ, in this new variation there's a different shortcoming.  If Christ is the perfect substitute whose death satisfies the justice of God for our sake, couldn't Jesus have been stillborn or even aborted?  Why did Jesus live among us and then choose to go to the cross?

Thus we get to the moral influence/christus exemplar metaphor, that Christ came to live and die as the example to inspire us, the exemplary life for us to consider the paradigm for true humanity.  That, too, is a powerful and potent metaphor ... it's just that it's pretty obvious nobody can be that perfect. 

Then again, if we bear in mind these are metaphors formulated by humans trying to understand something about Christ, we can consider that no metaphor is all-encompassing. 

Having grown up in the kind of church setting where the moral influence metaphor was basically never used, I got to hear of it from United Methodists and it seemed rather unsavory.  The reason was that too many American Christians who insist on sounding off on atonement theories act as though you have to pick one and the others are lame.  It's more like you accept that Christ atoned for us in both life and death and that you have to make room in your heart and mind for all of the metaphors.  Show me the metaphor of atonement you ignore or reject and I'll have a suggestion as to which metaphor for the atonement you might benefit most from considering. 

Let's play the broadbrush game a little here.  Take the Reformed: they're way into substitutionary atonement/satisfaction theory.  What if they spent time on the ransom/victor metaphor?  Or the moral influence/Christ our example metaphor? On paper who would deny that Christ is our example for those who are Christians?  But at a practical level, if your chief understanding of who Christ is is as the sacrificial substitute, you can focus on what you've been saved from in a way that forgets what you're saved to.

For folks of a more liberal bent or a social justice bent, the exemplar paradigm seems great. But the trouble is that liberal Protestantism (and the other kinds), particularly the post-millennialist sort that moved swiftly along the American continent, was able to justify a whole lot of bad stuff.  The American Civil War, too.  The problem is that if we over-emphasize Christ as the example and are too confident in our capacity to go and do likewise we may be blind to the need of atonement for that kind of thing we still say and do called sin. Having previously been one to be skeptical of christus exemplar because only liberal Methodists ever seemed to talk about it, Wenatchee The Hatchet considers it a necessary understanding, a metaphor that is inextricably linked to the other metaphors that deal with the atoning life and work of Christ.

If there's an understanding of the atonement that you, as a professing Christian, reject or find unappealing you might as well say "This is the thing that Christ did not need to do for me or for anyone else."

a ten-year anniversary for The 40-Year-Old Virgin inspires someone to ruminate on the dilemma of how to figure out when an adult is really an adult (it's no longer when a physically adult person is capable of bringing offspring into the world, apparently)
...As A.O. Scott claimed last year, “Nobody knows how to be a grown-up anymore. Adulthood as we have known it has become conceptually untenable.” Which is proximate to another argument, one that was made many times before Judd Apatow came along: that adulthood hasn’t so much passed away as it’s been flattened and dispersed. Young people—via a hypersexualized media culture, via the varying pressures toward economic and social and academic achievement—have been forced to grow up prematurely. Adulthood, meanwhile, has been youth-enized by people in their 20s and 30s choosing work/friends/Netflix/financial self-sufficiency over traditional markers of grown-up-ness: marriage, kids, home-ownership, etc.
It’s a kind of widespread lament that the rituals we used to take for granted, across religions and countries and cultures—bar and bat mitzvahs, quinceañeras, weddings, the loss of virginity—have been denuded of practical meaning, leaving everyone in a state of perpetual youth.

It sometimes seems as though Americans are fantastic at agitating for liberties without taking any interest in confronting opportunity costs.  If you choose path A then path B may forever be closed to you.  It often feels like a mid-life crisis is what happens when a person sees the opportunity costs, at last, and wishes that it was possible to backtrack and go get some of whatever it was that was on the road not taken. We can all tell ourselves we took the one less traveled by but as for that the passing there had worn them really about the same.

Of course a lot of people would rather that Robert Frost poem be a sentimental ode to the triumph of individual perseverance rather than a subtle exploration of self-delusion and stubbornness. But then that's the beauty of Frost's poems, that they're ambiguous enough to invite both modes of interpretation.

The thing is, this flaw of wanting a liberty without paying its price may be one shared by many officially adult folks in the United States.  Something Wenatchee The Hatchet has blogged about over the years is that in the wake of the 2008 housing bubble the median age of first marriage may have gone up, but it's about where it was during The Great Depression.  When economic times and production opportunities improve, it sure seems as though the median age of first marriage drops down to where social conservatives think it "should" be.

Atlantic Monthly on the role of religion on both sides of the Civil War and the innovation of total war as aristocratic war conventions gave way to populist war causes
Above all, it was a time when Christianity allied itself, in the most unambiguous and unconditional fashion, to the actual waging of a war. In 1775, American soldiers sang Yankee Doodle; in 1861, it was Glory, glory, hallelujah! As Stout argues, the Civil War “would require not only a war of troops and armaments … it would have to be augmented by moral and spiritual arguments that could steel millions of men to the bloody business of killing one another...” Stout concentrates on describing how Northerners, in particular, were bloated with this certainty. By “presenting the Union in absolutist moral terms,” Northerners gave themselves permission to wage a war of holy devastation. “Southerners must be made to feel that this was a real war,” explained Colonel James Montgomery, a one-time ally of John Brown, “and that they were to be swept away by the hand of God, like the Jews of old.” Or at least offered no alternative but unconditional surrender. “The Southern States,” declared Henry Ward Beecher shortly after Abraham Lincoln’s election to the presidency, “have organized society around a rotten core,—slavery,” while the “north has organized society about a vital heart, —liberty.” Across that divide, “God is calling to the nations.” And he is telling the American nation in particular that, “compromise is a most pernicious sham.”
But Southern preachers and theologians chimed in with fully as much fervor, in claiming that God was on their side. A writer for the Southern quarterly, DeBow’s Review, insisted that since “the institution of slavery accords with the injunctions and morality of the Bible,” the Confederate nation could therefore expect a divine blessing “in this great struggle.” The aged Episcopal bishop of Virginia, Richard Meade, gave Robert E. Lee his dying blessing: “You are engaged in a holy cause.”

When, by 1864, defeat was looking the Confederacy in the eyes, the arms of the pious dropped nervelessly to their sides, and they concluded that God was deserting them, if not over slavery, then for Southern unbelief. “Can we believe in the justice of Providence,” lamented Josiah Gorgas, the Confederacy’s chief of ordnance, “or must we conclude we are after all wrong?” Or even worse, wailed one despairing Louisianan, “I fear the subjugation of the South will make an infidel of me. I cannot see how a just God can allow people who have battled so heroically for their rights to be overthrown.”

Total war, as Yale law professor James Whitman has recently written, was the result of politics, and particularly by the movement of governments in the 19th century away from monarchy and toward popular democracy. So long as government had been the private preserve of kings, then wars had been the sport of monarchs, and were fought as though they were princely trials by combat or a species of civil litigation. The only class of people likely to suffer severely by them was the nobility. The scope of war was limited simply because war was understood to be the prerogative of kings. [emphasis added]

But once democratic governments began to shoulder the monarchs aside—once governments became “of the people, by the people, for the people” and involved the entire people of a nation and not just a handful of aristocrats—war became the instrument of entire populations. No solitary monarch could now call them off; no gentlemen’s agreement could limit their scope. Wars became wars of nations against nations, waged for principles abstract enough to command everyone’s assent, and therefore all the more impossible to win short of the annihilation—not just the defeat—of an enemy. Not religion, but democracy made it necessary to invoke “millennial nationalism,” in order to recruit sufficient mass resources for new mass wars. Theories about justice in war or debates about the proportionality with which war could be waged would only serve as obstacles in the path to unconditional victory.

Appeals to divine authority at the beginning of the Civil War fragmented in deadlock and contradiction, and ever since then, it has been difficult for deeply rooted religious conviction to assert a genuinely shaping influence over American public life.

In exposing the shortcomings of religious absolutism, the Civil War made it impossible for religious absolutism to address problems in American life—especially economic and racial ones—where religious absolutism would in fact have done a very large measure of good. Some leaders, Martin Luther King prominent among them, have since invoked Biblical sanction for a political movement, but that has mostly been tolerated by the larger, sympathetic environment of secular liberalism as a harmless eccentricity which can go in one ear and out the other.

More and more, it can seem that King's appeal to Christian heritage and theological reasoning has been swept under the rug in popular presentations of who he was.
One of the observations Mark Noll made in The Civil War as a Theological Crisis was that the majority of folks in the United States were evangelical Protestant but the slavery issue didn't get clearly settled because the question about race wasn't settled.  Although both sides could insist that they were proving their respective positions from the Bible there was an impasse.  Noll noted that although there were theological traditions that had arrived at the conclusion that slavery in general could be permitted but that race-based slavery as it was practiced in the United States was immoral, these views were sidelined.  Why?  Because Protestants in that time weren't going to take the theological advice of Catholics and Jews. On the other hand, many nominally Protestant groups that had landed at a comparable conclusion were considered outside the pale of traditional Trinitarian orthodoxy. 
As Noll noted in his substantially longer book America's God, even though both evangelicals and Catholics viewed republicanism with deep skepticism, Americans embraced it and were convinced the flaw of uneducated mob rule was not ultimately going to be a big problem.  For European theologians the aim was "just" to avoid the arbitrary dictates of kings and aristocrats on the one hand, and mob dynamics on the other.  So long as you didn't end up in either of those ditches just about anything in the middle was probably okay. Americans found a way ...
Then again, reverse-engineering the kind of theology or ideology you want in order to justify what you've already committed to doing is pretty much human nature.

for those with a case of the Mondays, Atlantic author suggests ways it could get worse in light of this year's collected hacks, "As long as your information eventually winds up on a computer connected to the internet, you could be in trouble."

Between the attacks on Ashley Madison and the U.S. government, what we’re seeing play out, in public, is an erosion of the possibility of trust in institutions. No secrets—whether financial, personal, or intimate—that have been confided to an organization that uses servers can be considered quite safe any more. You don’t even have to submit your data online: As long as your information eventually winds up on a computer connected to the Internet, you could be in trouble.
These hacks, and the ones we don’t know about yet, require a quasi-multidisciplinary interpretation. If the IRS, OPM, or USPS hacks seem worrisome, imagine personal information from those attacks counter-indexed against the Ashley Madison database. Wired is already reporting that about 15,000 of the email addresses in the Madison dump are from .gov or .mil domains. An attacker looking to blackmail the FBI agent whose background check data they now hold—or, at a smaller scale, a suburban dad whose tax return wound up in the wrong hands—knows just which database to check first. No hack happens alone.

HT Jim West, Patrologist on Academia as an Honor/Shame Society

Since Wenatchee The Hatchet has linked to Kyle Gann's generally enjoyable rants about the problems of identity politics issues stifling the publication timeline he hoped to have for his forthcoming book on Charles Ives' Concord Sonata, it's interesting to read complaints here and there about the academic publishing racket, er, business.

Sunday, August 23, 2015

Leonard Meyer's books on musicology helped Wenatchee The Hatchet realize why he can't stand most Romantic music

In Style and Music: Theory, History and Ideology, Meyer has managed to articulate a pretty clear explanation of why I can't stand most Romantic music.  On the one hand Romanticism was an ideology that rejected convention in favor of whatever was "natural" and ideally the product of naïve genius. The problem, however, was that thanks to the consolidation of the tonal musical language that took place in the 19th century a thorough irony emerged--Romantic ideologues declared it was far better to invent a new process or idiom than to refine an existing one, and yet the Romantics spent the better part of a century finding new ways to play with a harmonic/melodic set of conventions inherited from the Baroque era on the one hand and the formal/procedural tools inherited from the Classic era on the other.  While lacking the truly revolutionary approaches to music that would emerge in the late Romantic/post-Romantic era, let alone the early modern period, the Romantics were already doomed to refine the ideals and options that were developed before them.

So even though Romanticism prized what Meyer called organicism (and that concept in the arts is really great, the ideal that a seed grows into a mature plant, a way of saying that ideas should be able to be expressed in the arts according to the nature of the idea), the movement was stuck.  In his giant book mentioned at the top of the post, Meyer explained that what the Romantics ended up doing, until Berlioz and Wagner introduced innovations with the fixed idea (idée fixe) and the leitmotif, was basically disguising the conventions they relied upon rather than abandoning the tonal idiom they inherited from the 18th century innovators.  

A crude way of saying what this meant is that the Romantic era composers couldn't improve upon French fries and wanted to not be like the people who came before them, but their chief innovation was not garlic fries or Cajun fries or any of that so much as that they just biggie-sized everything.

Meyer explained that the ideal for the Romantic was self-realization but the full realization of the self was impossible.  A Christian teleological explanation for why this would be is that absent the eschaton, nobody's complete anyway.  Romantic ideology may have simply transmuted progressive sanctification into what Meyer describes as the ideal of Becoming.

This would explain why so many Romantics created bloated pieces of music in which it seemed they never knew when to quit.

Now alert long-time readers could say "But Wenatchee, you like the early 19th century guitarist composers enough to write about them.  And you seem to like some Romantics."  Yep, fair points.  Wenatchee likes some Romantic music.  Guitarists were never able to engage in the biggie-size approach the way pianists and symphonists could during the 19th century. Twenty minutes might be just a warm-up for some Romantic and post-Romantic composers, whereas for a guitarist a twenty-minute work might as well be what a Mahler symphony is for a violinist. So there's that.

And when Wenatchee considers the Romantics he does like the names that tend to come up are Mendelssohn and Brahms.  No surprise the sorts of Romantics with the most classicist tendencies show up, eh?

Meyer, over the years, pointed out that as the Romantics attenuated the syntax and forms they inherited from the Classic era they didn't just stretch things out, they also began to rely on what Meyer called a "statistical climax".  For those of you not used to the jargon of musicology, Wenatchee The Hatchet is going to borrow an analogy from another genre.  If the Classic era had a Haydn finger-picking an acoustic guitar, the Romantic era came to be dominated by a Pete Townshend pinwheeling chords from his Gibson thundering through Marshall amps.  That gives you an idea what a euphemism such as "statistical climax" entails at a practical level. 

This has helped me get why I find a lot of Romantic music (not all of it) so insufferable.  The Baroque and Classic era masters developed and consolidated forms and idioms within music that the Romantics by and large couldn't replace, even though they had an ideology that more or less required them to repudiate the conventions they could not replace or particularly improve upon.  So this explains why I love music from the Renaissance up to the high Classic period, and I love stuff from the post-Romantic Impressionists and the early 20th century avant garde.  It also explains a little why I dig American popular music (which, though it is thoroughly indebted to the harmonic vocabulary developed by the Romantics a century earlier, has had the good sense to know when to put down the guitars or usher in the fade-out).  From this perhaps peculiar perspective, the Romantic era was less an era of actual innovation than a lame cul-de-sac in which composers substituted ramped-up size for anything particularly daring in the way we think about music as an art form.

Fortunately folks like Debussy and Stravinsky found ways to shake free of the conventions that the majority of Romantics didn't manage to transcend. 

looking back on the celebrity letter about celebrity vs pastor more than a year later, considering the priorities of Mark Driscoll as to whose judgment he considers first

... Second, in recent years, some have used the language of “celebrity pastor” to describe me and some other Christian leaders. In my experience, celebrity pastors eventually get enough speaking and writing opportunities outside the church that their focus on the church is compromised, until eventually they decide to leave and go do other things. [emphasis added] Without judging any of those who have done this, let me be clear that my desires are exactly the opposite. I want to be under pastoral authority, in community, and a Bible-teaching pastor who grows as a loving spiritual father at home and in our church home for years to come. I don’t see how I can be both a celebrity and a pastor, and so I am happy to give up the former so that I can focus on the latter. [emphasis added]

Back in 2014 there weren't stories about God giving Mark and Grace Driscoll audible permission to quit, which they did.  The Driscollian stories of "God said Mark could quit" only seemed to show up in 2015 on the conference circuit.

However, the March 2014 letter featured language that, certainly in the wake of the resignation, may be worth revisiting.
To be clear, these are decisions I have come to with our Senior Pastor Jesus Christ. I believe this is what He is asking of me, and so I want to obey Him. The first person I discussed this with was our first, and still best, church member, Grace. Her loving agreement and wise counsel only confirmed this wonderful opportunity to reset some aspects of our life. [emphasis added] I want to publicly thank her, as it was 26 years ago this week that we had our first date. She is the greatest friend and biggest blessing in my life after Jesus. When we recently discussed this plan to reset our life together, late at night on the couch, she started crying tears of joy. She did not know how to make our life more sustainable, and did not want to discourage me, but had been praying that God would reveal to me a way to reset our life. Her prayer was answered, and for that we are both relieved at what a sustainable, joyful, and fruitful future could be. As an anniversary present, I want to give her more of her best friend.

I have also submitted these decisions to the Board of Advisors and Accountability. They have approved of this direction and are 100 percent supportive of these changes. It’s a wonderful thing to have true accountability and not be an independent decision maker regarding my ministry and, most importantly, our church. [emphasis added] ...

Not be an independent decision maker regarding his ministry and the fate of Mars Hill Church?

So it would appear that once Mark Driscoll decided what he really wanted to do it also meant he had talked it over with senior pastor Jesus.  Next up, the wife.  Of course she totally agreed with what her husband wanted to do after he'd talked with Jesus about things. Then once they had settled what they wanted to do Mark Driscoll also submitted the decisions to the BoAA which, of course, approved the direction 100%. 

The words said "no" but the resignation said "yes". Let's keep in mind something about Mark Driscoll's stories about what God told him to do.  "Marry Grace" nearly always comes first in all variations of the story.  They were already sexually active prior to marriage, and in a large number of stories shared about how he met Grace, Mark Driscoll indicated he pretty much wanted to marry her when he met her, before he was even a Christian. So Mark Driscoll has said God told him to marry Grace Martin, but it would appear Mark already wanted to do that anyway before there was any need of a divine directive. 

As for planting churches, teaching the Bible, and training young men: Joe Driscoll talked about his son's achievements in high school in the fundraising film God's Work, Our Witness back in 2011. Most likely to succeed, high school debate team, student body president. He was already an alpha dude in high school in various forms of leadership.  It's not that difficult to propose that God telling Mark Driscoll to lead people, with the imprimatur of a story of direct divine commission, isn't a "new" direction for Mark Driscoll compared to his pre-conversion stories; it's more of the same. Now if God had told Mark Driscoll the four things he was to do were to 1) break up with Grace, 2) devote himself to a life of celibacy, 3) return to the Catholic church, and 4) study to be a priest, would Mark Driscoll have done any of those things?

One of the things Adolf Schlatter wrote was that it is a lie arising from covetousness to remake God into one's own image and to make your own lust to be God's will.  It seems worth asking whether the stuff Mark Driscoll has kept saying God told him to do hasn't been precisely the things he already pretty much wanted to do anyway. 

Saturday, August 22, 2015

on the Ashley Madison hack, information disclosure and ethics in watchblogging

But while it’s impossible to know exactly why so many signed up for Ashley Madison accounts—with their work emails, no less—one can imagine that there was an extent to which the website’s mere existence, its promise of a sheltering and complicit community, soothed many consciences.

Because that’s what Ashley Madison did: it organized and fostered a community around cheating. We speak of the importance of private associations, their ability to inculcate habits of virtue. But here, we see the opposite: we see an association fostering and even facilitating vice. And this is the dark side of community that we forget about: we forget that peer support and approval will motivate us to do things we may otherwise have avoided—or at least felt guilty about.

For numerous, obvious reasons, the fact that someone’s name appears in the Ashley Madison database does not mean they have engaged in marital infidelity. To begin with, it is easy to enter someone else’s name and email address, as happened to The Intercept’s Farai Chideya. [emphasis added] Beyond that, there are all sorts of reasons someone may use this website without having cheated on their spouse. Some may use the site as pornography because it titillates them, or because they are tempted to cheat but are resisting the urge, or because they’re married but in a relationship where monogamy is not demanded, or because they’re researchers or journalists observing this precinct of online interaction, or countless other reasons. This permanent, highly public shaming of these “adulterers” is not only puritanical but reckless in the extreme, since many who end up branded with the scarlet “A” may have done absolutely nothing wrong.

So ... let's just assume everybody knows about the Ashley Madison hack and the basics of what just happened.

Wenatchee The Hatchet is not particularly pleased.  That isn't a plea of sympathy for men who cheat on their wives; the cheating is no good, but there's no reason to cheer the hack as poetic justice either.

No, the problem is that cheering the hack suggests we might want to consider what standards we have about information disclosure.  When and why do we consider info-dumping of information that was intended to be kept private to be a heroic act?  When and why do we consider info-dumping of information that was meant to be secret despicable?

It's not always clear that we, as a society, exactly know or care.

By now it would be difficult to escape the idea that the only thing Wenatchee The Hatchet is known for is as some kind of watchdog blog.  It's lame to have been so thoroughly typecast but so it goes.

Give or take some scholarly debates about provenance and textual transmission, the story of how David arranged for the death of Uriah the Hittite in battle can be described as a biblical story in which we have been given, so to speak, a "hack".

Jacob Wright touches on this aspect of the biblical narrative in his book David, King of Israel, and Caleb in Biblical Memory. He states that the correspondence is most likely a literary license taken by the authors whose work became part of the canonical narrative.  However, for the point of this blog post, it suffices to say that a "leak" or "hack" of royal correspondence made it into a biblical text. Even the staunchest unbeliever will easily appreciate why that literary/historical "hack" happened: nobody in any age, not even the most patriarchal bronze age, thought there was anything remotely ethical or civilized or appropriate about a regional warlord using royal status, glossed with divine endorsement, to arrange for the death of a loyal soldier so that the sexual use of the soldier's wife would not be discovered. 

As a lengthy aside, there are several things about David's conspiracy against Uriah the Hittite that evangelicals seem to perpetually overlook. The first and most basic one is that had David not already engaged in war against the Ammonites he would not have sent people off to war while staying home. Jacob Wright laid out an interesting and persuasive case that because this war was the first David undertook not for the benefit of Israel but from a personal sense of honor, the biblical author(s) may imply that David was already on morally shaky ground in what he was using royal power and resources for, and why, even before he spotted Bathsheba one fateful day.

Secondly, when Nathan confronts David about the murder he doesn't condemn David's polygamy. Had the wives not been enough, Nathan explained, God would have provided more.  Contemporary evangelical commentary to the effect that David was an adulterer misses the boat if it goes beyond Bathsheba, unless Abigail counts on the presumption that David murdered Nabal (which seems to be where Joel Baden has gone on the matter but this blog post isn't about his book).

Third, although in a number of places in the canon Gad the seer gets described as David's personal seer there's no sign of Gad in the narrative where Nathan appears (Nathan will later appear in Kings as a lobbyist for Solomon, more or less, opening up the possibility that today's principled prophet can still end up next year's mercenary backdoor schemer even if the cause is a nominally correct one). Speculative as this theory is, it seems that Nathan felt obliged to speak up and confront David for his crimes because Gad, the king's official prophet, was nowhere to be found. Either the prophet was not around to begin with or Gad was around and potentially looked the other way or didn't know what had happened because David's secrecy was solid.

So even in the Bible, we could say, there's a precedent for a "hack" in which correspondence meant to be secret not only sees the light of day but literally ends up as part of a canonized document.

So that might read as if it were a defense of a hack for those readers who haven't yet read between the lines.

This is not a defense of hacks or leaks.  How does this connect to the title that mentions watchblogging?

We're just getting to that.

The vast majority of material published here about the history of Mark Driscoll and Mars Hill was stuff that was published in social and broadcast media in various ways over the course of sixteen years.  Yes, including all the William Wallace II stuff. Hacking was never used, it was never necessary.  People sent stuff along and for many years Wenatchee The Hatchet was simply given a ton of content or had access to a ton of content by virtue of being a member of Mars Hill and an occasional ministry volunteer.  It was just a matter of the providence of keeping stuff rather than deleting it, and then other stuff got volunteered.

There's a substantial and obvious difference between people voluntarily sending potentially sensitive information and finding mountains of information has been dumped onto social and broadcast media by the parties themselves, and recent hacks. Like it or not Mars Hill leaders have had to face the reality that even if the Result Source deal had not been leaked to World Magazine there was a super-majority of content already made available to the public at large by Mars Hill itself.  When the plagiarism controversy erupted it was only possible precisely because the published works of Mark Driscoll were mass market products.  Wenatchee The Hatchet didn't get any of the RSI stuff.

So what about all those years of leaks from The City?  Stuff was sent.  Simple as that. People trusted Wenatchee The Hatchet with insider communication. So when Driscoll was claiming Mars Hill was somehow not a wealthy church in spite of a roughly $30 million dollar annual budget, people who were still part of the Mars Hill community at some level conveyed this absurd assertion on the part of Mark Driscoll via The City to Wenatchee The Hatchet. The same went for the resignations of Bill Clem and a number of other Mars Hill staff.  Some of the staff transitions were easy to document simply because names started vanishing from the websites.

Back when Wenatchee The Hatchet was a young journalism student one of the bits of advice WtH received was to avoid relying on anonymous sources.  You can't be sure they aren't lying, you can't be sure they haven't gotten the cold shoulder because of their own ethical lapses that may have gotten them fired from a job.  The risks of someone opting to become a source out of retaliation can be too high.  And in many cases what you think you may need a secret source for you don't need that secret source for; a remarkable amount of stuff is sitting in plain sight if you're just patient enough to look in the right places.

Take the million-dollar house in Woodway the Driscolls bought during the "season" when non-negotiable layoffs were happening.  That real estate was found in a roughly ten-second online search based on a select pile of informational statements that led directly to county real estate records.  Wenatchee The Hatchet found that stuff quickly but without setting out to find it, not specifically.  There may still be those out there who think it was terrible Wenatchee The Hatchet found information that's been a matter of public record for years on county websites but it's tough to stay angry at the vicissitudes of providence.  And because that Woodway house purchase is a matter of public record it raised a simple and blunt question: how does a megachurch pastor afford to buy a million-dollar piece of real estate in Woodway, Washington?  If Mars Hill was not a wealthy church where were the Driscolls getting the money to buy a house in Woodway? 

What was publicly available naturally led to questions about what was not disclosed.  You can't just go buying real estate that expensive if you can't afford it, and if you can afford it, how affordable is it?

Sources over the years indicated that one of the biggest and most opaque mysteries within Mars Hill was how much Driscoll got paid and how he got paid.  Within the culture of Mars Hill this was one of those mysteries that couldn't be worked out. Gone were the days when the books at Mars Hill were open for anybody to go read.

Now Mark Driscoll and the leaders of Mars Hill had spent years telling the members and lower level staff to keep sacrificing and keep giving.  They also conveyed that in tough seasons some people had to be let go and that this was part of the mission.  The sheer number of people cut loose in the 2011-2013 period was given witness by the BoAA when it mentioned more than 100 people got transitioned off staff at Mars Hill in that fateful two-year stretch.  It wasn't just that Mark Driscoll and the leadership culture kept urging sacrificial giving, they kept doing so in a corporate culture in which layoffs were legion and the leaders were increasingly evasive about what compensation the top dogs got.

Meanwhile, Driscoll's public career kept picking up steam.  In 2012 he was confident enough to go on a pre-emptive character attack adventure against Justin Brierley.  Driscoll evaded any controversy that didn't make his personality front and center, so he never said a word on record about the disciplinary case involving Andrew Lamb.  Driscoll was so busy promoting Real Marriage in early 2012 some one-time disciplinary case involving some young horny dude at the Ballard campus wasn't worth thinking about, perhaps. Besides, there was that meet-and-greet with T. D. Jakes to look forward to, not that you'll see a whole ton of reference to that even from Driscoll these days. 

For those at Mars Hill who saw the progression Mark Driscoll made from denouncing preachers like Jakes in 2007 to shaking hands with Jakes in 2011; for those who saw how the finances became more opaque even as the requests to sacrifice stayed strong; a few people here and there across a variety of campuses began to share stuff with Wenatchee The Hatchet.  Wenatchee The Hatchet also took the time to document real estate acquisitions and subsequent leadership appointments. 

Wenatchee had no idea that such a thing as Result Source existed.  For years it seemed probable there was some remarkably lazy attribution and sourcing in Driscoll books.  A plagiarism scandal seemed possible, but it wasn't until Wenatchee read Real Marriage that such a scandal began to look likely.

Mark Driscoll ruefully explained via video last year that because of the kind of celebrity he had attained he did not have the same standing from which to plead for privacy that other individuals have.  That's soft-pedaling it, still, because Mark Driscoll spent decades questing for a level of celebrity that made him a public figure and a public moralist to boot.

So when Driscoll ever sounded off on the wrongness of people plagiarizing the works of others in sermons or books it became a matter of public record and a matter of public service to document any applicable cases in which Mark Driscoll may have not only failed to live up to the standards he judged others by, but flagrantly contradicted the ideals he said people should live by. It's important to keep in mind just what a mind-bending amount of content Team Driscoll and Mars Hill put out there over the years. 

So it mattered a great deal that in Real Marriage it turned out the Driscolls did not acknowledge the work and influence of Dan Allender in the first print edition. Even if somebody were to reject the legitimacy of copyright and intellectual property (which some Christians do) the point is not lost: it's still hypocritical for the leadership of Mars Hill to have lamented in late 2011, in the wake of a trademark and logo scandal, that some people copied Mars Hill content without attribution when a great deal of Mark Driscoll's published work would turn out to have made use of the ideas of others without adequate attribution.  For the folks who don't remember the Mars Hill 2011 trademark/logo scandal ...
Sadly, in addition to giving things away, we’ve also had things taken. We’ve had churches cut and paste our logo, take our website code and copy it completely, had ministry leaders cut and paste documents of ours, put their name on them to then post online as if it were their content, and even seen other pastors fired for preaching our sermons verbatim.

We’re not the only church called Mars Hill, and occasionally there arises confusion between us and other churches that share the “Mars Hill” name, particularly as we now have our churches in four states. This was the case recently when one of our members called us to find out if we had planted Mars Hill churches in the Sacramento, California area. We had not, but when we went to these churches’ websites, it was obvious to us how people could be confused. Each of these three connected churches in the Sacramento region—planted in 2006, 2007, and 2010—bore the “Mars Hill” name and their logo was substantially similar to the logo we’ve used since 1996.

When cases like this arise in the business world, it’s customary for a law office to send a notice asking the other organization to adjust their branding to differentiate it. This is commonly referred to as a cease and desist letter. On September 27, 2011, our legal counsel sent such a letter to these three Mars Hill churches requesting that they change their logo and name. In hindsight, we realize now that the way we went about raising our concerns, while acceptable in the business world, is not the way we should deal with fellow Christians. On Friday we spoke with the pastor of Mars Hill in Sacramento to apologize for the way we went about this. We had a very productive conversation and look forward to continuing that conversation in the days and weeks ahead.

It didn't matter if a person rejected the idea of intellectual property in this case, the flagrant hypocrisy alone was damning.  If you complain in public that other people crib your stuff you better make sure you're not guilty of doing the same thing. There was a time when Mars Hill leaders said copyright was outmoded and not the way of the future.  There could be some story about how THAT changed but that's not the point of this already sprawling post.

By the time Janet Mefferd confronted Driscoll on air about plagiarism Mark Driscoll and the leaders of Mars Hill had already established Driscoll as a public figure willing to sound off on how bad it was that people cribbed from his sermons.  That turned out to not be the only problematic thing about Driscoll's published work.

It mattered that Mark Driscoll said from the pulpit in 2000-2001 that guys shouldn't cheat, that they shouldn't take the shortcut to getting what they want, because a decade later, when Mars Hill contracted with Result Source to rig a #1 spot for Mark Driscoll on the NYT bestseller list, it was revealed that the Mark Driscoll who told guys not to take the shortcut to their goals was willing to take one himself.  If your book didn't make it to the top of the New York Times bestseller list without a little help did it deserve to be there? 

Whoever leaked the Result Source contract to the press was leaking something that had been hidden not only from the public but from a probable majority of Mars Hill members and even leaders.

That was information that needed to be made known.  Why?  Because it didn't just open up for public discussion that Mars Hill arranged to rig the New York Times bestseller list to promote a book that turned out to have "citation errors" in it, it revealed that this scheme was employed by other Christian authors within the Christian publishing scene.  The company that did this had already been a matter of news, however. 

Leaks of sensitive information to the press, formal or informal, aren't like hacking.  A history of leaked content dealing with public figures occupies a different ethical space than a hack.  A hack cannot, as Greenwald has noted, account for the reality that many people who would use Ashley Madison might never use their real names or contact information.

Information is the foundation of our entire economic system.  We use fiat currency.  Sure, we have paper and metal money but in the normal business day goods and services can be exchanged with a series of 0's and 1's. Who you even are, in this economic system, is a sequence of information.  You're flesh and blood, too, of course, but that's not how you are mediated in an information economy. 

Why is it that people who might object to NSA surveillance might have no problem with the hack? Why is a lack of informed consent a prerequisite people care about in one case but potentially not in another?  If information wants to be free then why wouldn't the NSA have every ground to keep up a surveillance program?  It's not like the military and the industries of national defense didn't, you know, kind of invent the internet.

Hacking or not, when you have access to information that can potentially permanently destroy the livelihood of someone you have to weigh the significance of whether it is worth it to disclose that information.  In the history of blogging about Mars Hill there are occasionally folks who seem to think any and every form of hardball is acceptable. The Ashley Madison hack has opened up the floodgates to a process that will destroy the lives of people who are not public figures or who are not public figures in any sense that could even theoretically merit the disclosure of the information.  That Duggar guy was already in the crosshairs for other things besides what has lately been reported about his name appearing in a list of hacked names.  Even assuming that email/contact information is connected to a real Duggar (and that should not be assumed, per Greenwald's recent writing) the disclosure of the information in itself was not exactly "necessary".

Over the years Wenatchee The Hatchet has repeatedly deleted occasional comments from former Mars Hill attenders with axes to grind against some former employees of Mars Hill. Some allegations were occasionally made that, if they had substance behind them, would be better transformed into actual litigation than some kind of passionate cyber-justice.  A watchblog can preserve for the public record information that has, for the most part, been a matter of public record already.  In exceptional cases information disclosed to thousands of insiders in an organization has been disclosed for public benefit--given how cordoned off the Mars Hill campuses were from each other, disclosing resignations via leaked City content was a way to make visible what had already been communicated to a thousand or more people, and the eventual and inevitable disappearance of the names from the public roster was happening anyway. 

But Wenatchee The Hatchet has not disclosed everything discovered over the course of five or six years of blogging about Mars Hill.  If "that" were done some people would never be able to move past things that happened to them while they were at Mars Hill.  Although there is no right in America "to be forgotten" there's such a thing as pity and compassion.  The majority of people, even in the leadership culture of what is the dissolving Mars Hill, don't qualify as public figures.  Let's be clear here, Mark Driscoll's arrogance and incompetence brought him down.  He destroyed his own reputation through the way he spoke and published in the public sphere.  There's not much reason to feel bad for him.  Other people who got fired and laid off were not public figures seeking the spotlight.

You have to assess whether the story warrants permanently damaging someone's ability to work in a given field before you run with something.  In the case of a Mark Driscoll, who has at length betrayed so many of the doctrinal and ethical norms he spent decades espousing from the pulpit, that seems clearcut to Wenatchee The Hatchet. The sheer number of points at which Mark Driscoll's conduct has by now contradicted the precepts he espoused as a public figure precludes his legitimacy in ministry as far as Wenatchee The Hatchet is concerned.  It doesn't mean the guy isn't a professing Christian, it just means that, dude, the guy destroyed his own credibility to a point where it can't be repaired.  Now Mark Driscoll has to live with the reality that one day his kids could stumble upon the William Wallace II rants; after all, Mark Driscoll spent decades telling young guys, yelling at them, to think of their legacy and live accordingly.  He also has to live with the reality that in 2008 he compared having a women's ministry to the prospect of juggling knives.
Whatever different feelings he has now compared to 2000, as he told Brian Houston earlier this year, the substance of what Driscoll has said remains. 

There's a world of difference between preserving what Mark Driscoll dumped onto social and broadcast media for his entire public career and a hack. The differences are obvious but people who cheerlead the results of the hack may not be thinking through the implications of those differences.  The ethics of information access and distribution are always going to be issues. Our technology has evolved faster than our applicable laws, which means, for instance, that we don't actually have clearly applicable laws for dealing with data leaks.  This year there's the Ashley Madison hack but a while back there were the nude photo leaks of celebrities, some of whom took to using intellectual property law as a basis for taking legal action against content that was never intended to be formally published.

Even if this were just a matter of who doesn't want what published, it wouldn't stay simple, because our entire economy is based on information. Our identities in this economic system are the information about us. It's possible to buy and sell that information without our knowledge or consent in some cases.  Even if you've never logged onto the internet a single day in your life in the United States that doesn't mean basic information about you isn't on the internet.  It doesn't mean a person couldn't, with the right price or the right effort, find out things about you. For those who have fretted about the NSA, worry not, you may have voluntarily dumped more information about yourself through technology use than the NSA would ever even care to know.  There's a case to be made that if you're going to be online at all you shouldn't be doing anything you wouldn't be willing to have become a matter of public knowledge but that's pretty clearly not how things work.  People pay their bills online; should the water bill for your home be a matter of public record?  Isn't it already, some might suggest.  If "information wants to be free" which information are we really talking about? Social security numbers?  If you have children and they have medical conditions how many people need to know about that?  Everyone?  Does that information want to be free?

It's not as though there don't come times when things that have been concealed are shouted from rooftops.  The golden rule doesn't become irrelevant in cases like this, does it?  What about yourself would you be willing to have disclosed, even at the cost of ruining your life, if it were for some greater good?  Reputation is a zero-sum game; you have a good one or a bad one.  What's worth destroying a person's reputation for?  In a day as ostensibly individualistic as ours it's remarkable how much we have retained a notion of class-based guilt. Not that you asked, dear reader, but Wenatchee The Hatchet got information from people in the community known as Mars Hill in part by not presuming upon a class-based guilt.  Was Obadiah evil for being in Ahab's court?  For saving the lives of prophets whose lives were sought by Jezebel?  Today's categorical approach to guilt can overlook things.  This isn't to suggest there are "heroes" who use Ashley Madison; the question the hack renews for us is about the ethics of information access and disclosure. 

Jonathan Haidt has written that our moral intuitions come first and the explanatory reasoning comes later.  We decide we're okay with something being done and THEN we rationalize it. What a hack introduces, particularly when the information is dumped online for public consideration, is a declaration, through action, that consent is not necessary for finding out something that was not already disclosed to the public. There's no need for any relational context in which to seek or obtain the information.  If there's a common thread in the philosophical questions about hacking, content piracy, and government surveillance, it's the moral question of when and why it's okay to ignore that you haven't been given consent to gain access to the information you're after. 

For those whose moral intuition settles that it's okay to gain information of some form without the consent of the one from whom the information is obtained, what's the rationale?  In the case of Wenatchee The Hatchet everything given to Wenatchee The Hatchet was given, everything was volunteered, whether directly or by way of information published to mass and social media.  It's difficult to think of a single thing published at Wenatchee The Hatchet in which informed consent was not the basis of the content received; the people who voluntarily sent in information WANTED to send it. The histories of real estate acquisition and leadership appointments were things Mark Driscoll bragged about from the pulpit. It was simply a matter of preserving things for the record. For those who share content that was not volunteered, what's the incentive?  What is the good for which consent is not only not required but for which consent is irrelevant? 

Of course it's not just in information exchange that informed consent is a crucial ethical question, informed consent is also the crucial ethical question in the exchange of bodily fluids.