Saturday, December 02, 2017

two links for the weekend, The Coming Demographic Crisis for the First World and The New Yorker features anti-natalist author David Benatar

Ecclesiastes 4:1-4 (NIV)
Again I looked and saw all the oppression that was taking place under the sun:

I saw the tears of the oppressed—
and they have no comforter;
power was on the side of their oppressors—
and they have no comforter.
And I declared that the dead,
who had already died,
are happier than the living,
who are still alive.
But better than both
is the one who has never been born,
who has not seen the evil
that is done under the sun.

And I saw that all toil and all achievement spring from one person’s envy of another. This too is meaningless, a chasing after the wind.

For two centuries, overpopulation has haunted the imagination of the modern world. According to Thomas Malthus, writing in 1798, human population growth would always surpass agricultural production, meaning “gigantic inevitable famine” would “with one mighty blow level the population with the food of the world.”

Later, eugenicists like Margaret Sanger in the 1920s fretted over the wrong people reproducing too much, creating what she called “human weeds,” a “dead weight of human waste” to inherit the earth. In 1968, Paul Ehrlich predicted that in the 1970s, “hundreds of millions of people are going to starve to death” because of the “population bomb.” These days, environmentalists worry that too many people will overload the natural world’s resources and destroy the planet with excessive consumption and pollution, leading to catastrophic global warming. 

A strain of anti-humanism has always run through population paranoia, a notion that human beings are a problem rather than a resource. But as Jonathan Last documents in his new book What to Expect When No One’s Expecting, it is not overpopulation that threatens the well-being of the human race, it is under-population. As Last writes, “Throughout recorded human history, declining populations have always been followed by Very Bad Things.” Particularly for our modern, high-tech, capitalist world of consumers who buy, entrepreneurs who create wealth and jobs, and workers whose taxes fund social welfare entitlements, people are an even more critical resource.

Last, a senior writer for the Weekly Standard and father of three, provides a reader-friendly but thorough analysis of the demographic crisis afflicting the West and the “Very Bad Things” that will follow population decline. Clearly argued and entertainingly written, Last covers the how and why of our refusal to reproduce, and the consequences that will follow.

The facts of population decline are dramatic. Women must average a total fertility rate (TFR) of 2.1 children apiece for populations to remain stable. But across the developed world, and increasingly everywhere else, fertility is quickly declining below this number: “All First World countries are already below the 2.1 line,” Last writes, and the rates of decline among Third World countries “are, in most cases, even steeper than in the First World.”

Japan and Italy, for example, have a 1.4 TFR, a “mathematical tipping point” at which the population will decline by 50 percent in 45 years. As for the rest of Europe, by 2050 only three countries in the E.U., which today has an average rate of 1.5 TFR, will not be experiencing population declines. Those countries are France, Luxembourg, and Ireland.

Immigration from the Third World will not provide a long-term solution, as fertility rates are declining there as well. The average fertility rate for Latin America was six children per woman in the 1960s; by 2005, it had dropped to 2.5. At that rate of decline, within a few decades, Latin American countries will likely have a fertility rate lower than that of the United States.

Compared to Singapore’s 1.1 TFR, or Germany’s 1.36, the U.S.’s 2.0 (an average of varying rates ranging from 1.93 to 2.18) looks pretty good. But, in Last’s analysis, the negative trends do not bode well for the future. The large number of Hispanic immigrants reached 50.5 million in 2010, compared to 22.3 million in 1990, a doubling of their population in 20 years. Hispanic women are outpacing the U.S. fertility rate with their 2.35 TFR. But that number represents a decline from 2.96 in 1990, plunging nearly 10 percent just between 2007 and 2009.

Last warns, “Our population profile is so dependent on Hispanic fertility that if this group continues falling toward the national average––and everything about American history suggests that it will––then our 1.93 fertility rate will take a nosedive.” The United States should not count on a population surge via Mexico, where 60 percent of the Hispanic immigrants into this country come from. Mexico’s fertility rate has fallen from 6.72 in 1970 to 2.07 in 2009, a trend that points to further decline. In addition, labor shortages in Latin America will likely lead to diminished emigration.

The dire economic and social effects of plummeting birthrates remind us that marriage and childbirth are not just private lifestyle choices. A country with fewer children becomes, on average, increasingly older. Cities and towns begin to empty, while the cost of caring for retirees and elderly sick people skyrockets. Old people spend less and invest less, shrinking capital pools for the new businesses that create new jobs. Entrepreneurs do not come from among the aged: countries with a higher median age have a lower percentage of entrepreneurs.

Most important, a shrinking labor force means fewer workers contributing the payroll taxes that finance old-age care. The Social Security program is already beginning to be impacted by the decline in the worker-to-retiree ratio. In 1940, there were 160 workers for each retiree. By 2010, there were just 2.9. Once some 80 million Baby Boomers retire, the number will plummet to 2.1. This means taxes will have to increase and benefits be cut substantially to keep the program solvent. Medicare is similarly threatened by declining fertility. Both programs will cost more but have fewer workers footing the bill.

Finally, foreign policy will increasingly be impacted by the global decline in fertility. Those who fear China as a future superpower threat to our interests should remember that by 2050, China’s population will be declining by 20 million every five years, and one out of four people will be over the age of 65. China’s public pension system covers only 365 million people and is underfunded by 150 percent of GDP. What we need to prepare for “is not a shooting war with an expansionist China,” Last writes, “but a declining superpower with a rapidly contracting economic base and an unstable political structure. It’s not clear which scenario is more worrisome.”

The author of the above is associated with the Hoover Institution, for those who don't just go and follow links.  If the prospect of demographic decline is that every nation in what we now know as the First World is going to stop breeding and die off, then that's a handy transition into a feature in The New Yorker about an anti-natalist, someone who argues that nobody should have children because, on the whole, it's better never to have been born than to live with the inevitable suffering life brings with it. Someone like David Benatar may simply be like the author of Ecclesiastes.

David Benatar may be the world’s most pessimistic philosopher. An “anti-natalist,” he believes that life is so bad, so painful, that human beings should stop having children for reasons of compassion. “While good people go to great lengths to spare their children from suffering, few of them seem to notice that the one (and only) guaranteed way to prevent all the suffering of their children is not to bring those children into existence in the first place,” he writes, in a 2006 book called “Better Never to Have Been: The Harm of Coming Into Existence.” In Benatar’s view, reproducing is intrinsically cruel and irresponsible—not just because a horrible fate can befall anyone, but because life itself is “permeated by badness.” In part for this reason, he thinks that the world would be a better place if sentient life disappeared altogether.

For a work of academic philosophy, “Better Never to Have Been” has found an unusually wide audience. It has 3.9 stars on GoodReads, where one reviewer calls it “required reading for folks who believe that procreation is justified.” A few years ago, Nic Pizzolatto, the screenwriter behind “True Detective,” read the book and made Rust Cohle, Matthew McConaughey’s character, a nihilistic anti-natalist. (“I think human consciousness is a tragic misstep in evolution,” Cohle says.) When Pizzolatto mentioned the book to the press, Benatar, who sees his own views as more thoughtful and humane than Cohle’s, emerged from an otherwise reclusive life to clarify them in interviews. Now he has published “The Human Predicament: A Candid Guide to Life’s Biggest Questions,” a refinement, expansion, and contextualization of his anti-natalist thinking. The book begins with an epigraph from T. S. Eliot’s “Four Quartets”—“Humankind cannot bear very much reality”—and promises to provide “grim” answers to questions such as “Do our lives have meaning?,” and “Would it be better if we could live forever?”
The knee-jerk response to observations like these is, “If life is so bad, why don’t you just kill yourself?” Benatar devotes a forty-three-page chapter to proving that death only exacerbates our problems. “Life is bad, but so is death,” he concludes. “Of course, life is not bad in every way. Neither is death bad in every way. However, both life and death are, in crucial respects, awful. Together, they constitute an existential vise—the wretched grip that enforces our predicament.” It’s better, he argues, not to enter into the predicament in the first place. People sometimes ask themselves whether life is worth living. Benatar thinks that it’s better to ask sub-questions: Is life worth continuing? (Yes, because death is bad.) Is life worth starting? (No.)
Benatar is far from the only anti-natalist. Books such as Sarah Perry’s “Every Cradle Is a Grave” and Thomas Ligotti’s “The Conspiracy Against the Human Race” have also found audiences. There are many “misanthropic anti-natalists”: the Voluntary Human Extinction Movement, for example, has thousands of members who believe that, for environmental reasons, human beings should cease to exist. For misanthropic anti-natalists, the problem isn’t life—it’s us. Benatar, by contrast, is a “compassionate anti-natalist.” His thinking parallels that of the philosopher Thomas Metzinger, who studies consciousness and artificial intelligence; Metzinger espouses digital anti-natalism, arguing that it would be wrong to create artificially conscious computer programs because doing so would increase the amount of suffering in the world. The same argument could apply to human beings.
Like a boxer who has practiced his counters, Benatar has anticipated a range of objections. Many people suggest that the best experiences in life—love, beauty, discovery, and so on—make up for the bad ones. To this, Benatar replies that pain is worse than pleasure is good. Pain lasts longer: “There’s such a thing as chronic pain, but there’s no such thing as chronic pleasure,” he said. It’s also more powerful: would you trade five minutes of the worst pain imaginable for five minutes of the greatest pleasure? Moreover, there’s an abstract sense in which missing out on good experiences isn’t as bad as having bad ones. “For an existing person, the presence of bad things is bad and the presence of good things is good,” Benatar explained. “But compare that with a scenario in which that person never existed—then, the absence of the bad would be good, but the absence of the good wouldn’t be bad, because there’d be nobody to be deprived of those good things.” This asymmetry “completely stacks the deck against existence,” he continued, because it suggests that “all the unpleasantness and all the misery and all the suffering could be over, without any real cost.”
Some people argue that talk of pain and pleasure misses the point: even if life isn’t good, it’s meaningful. Benatar replies that, in fact, human life is cosmically meaningless: we exist in an indifferent universe, perhaps even a “multiverse,” and are subject to blind and purposeless natural forces. In the absence of cosmic meaning, only “terrestrial” meaning remains—and, he writes, there’s “something circular about arguing that the purpose of humanity’s existence is that individual humans should help one another.” Benatar also rejects the argument that struggle and suffering, in themselves, can lend meaning to existence. “I don’t believe that suffering gives meaning,” Benatar said. “I think that people try to find meaning in suffering because the suffering is otherwise so gratuitous and unbearable.” It’s true, he said, that “Nelson Mandela generated meaning through the way he responded to suffering—but that’s not to defend the conditions in which he lived.”
He doesn’t imagine that anti-natalism could ever be widely adopted: “It runs counter to too many biological drives.” Still, for him, it’s a source of hope. “The madness of the world as a whole—what can you or I do about that?” he said, while we walked. “But every couple, or every person, can decide not to have a child. That’s an immense amount of suffering that’s avoided, which is all to the good.” When friends have children, he must watch his words. “I’m torn,” he said. Having a child is “pretty horrible, given the predicament in which it will find itself”; on the other hand, “optimism makes life more bearable.” Some years ago, when a fellow-philosopher told him that she was pregnant, his response was muted. Come on, she insisted—you have to be happy for me. Benatar consulted his conscience, then said, “I am happy—for you.”

On the whole there's nothing here that wasn't summarized in four verses of Ecclesiastes quite a few centuries ago, by an author who warned us there was nothing new under the sun and that if something did seem new it had already existed from ages long ago.  It's hard not to wonder why the author of an anti-natalist profile in The New Yorker didn't lead with a suitable quote from Ecclesiastes, though.

Now, having excerpted so much, one of the interesting things in Martin Shields' book The End of Wisdom, a monograph on Ecclesiastes, was his point that, as traditional as it is to ascribe the book to Solomon, "son of David" came to refer to a great many descendants of the Davidic line, and that while many things described in Ecclesiastes were associated with Solomon, the identification isn't automatic.  If Solomonic authorship were really the point of the book, the author's resolute refusal to identify himself with Solomon is odd; the traditional association doesn't necessarily follow.

But Shields had another, more compelling proposal about the book: the traditional interpretation of Ecclesiastes as a literary process of repentance doesn't seem to hold up under an exegetical analysis of the text.  If this were Solomon returning to the Lord, why are there no references to the Mosaic law?  Why are even the allusions to the creation account of Genesis 1-3 mainly by way of sidelong doubts that humans have souls or natures that distinguish them from animals?

Shields' proposal, which, like anything proposed in scholarly work about Ecclesiastes, is bound to be controversial, is that the epilogue of Ecclesiastes was written by someone else, and that a short prologue and epilogue bracket out the book as a kind of, for want of a better analogy, Pentagon Papers on the dangers of the Israelite wisdom movement and where it ends up.  Qoheleth, in this interpretation, never came back to love or serve the Lord, which could mean, if Solomonic ascription is a given, that there's no evidence Solomon ever turned back to follow Yahweh even at the very end of his life.  Or, if the Israelite king did return, there's no indication within the majority of Ecclesiastes itself, independent of traditional interpretations of it, that the king did so.

There's pretty much no way to square Ecclesiastes 4:1-4 with the Cultural Mandate and all the stuff about "be fruitful and multiply" if Qoheleth is so certain that better than anything is to have never been born, and that everyone is motivated by envy of their neighbor.

In theory I could write more about Martin Shields' book but I don't feel like doing so just now. There are a few tagged posts on the subject, though, if you want to indirectly read Shields' case that the Hebrew construction that could be read as "had been king" invites the possibility that the author of Ecclesiastes was a king who at some point abdicated.  It's an interesting case, but it's been years since I've read the monograph.

Overall the proposal that we should not assume the author of Ecclesiastes ever came back to a godly position seems worth considering, even if there could be compelling arguments against it.  Shields' book was pretty helpful, particularly its overview of the generally dim view of the sages, a caste of elites advising often corrupt royal families, as depicted by just about everyone in Old Testament literature except the wisdom literature itself.  If all the wisdom of the sages didn't keep the Israelite and Judean kings from sinning so badly that God's people got cast into exile, why would the sages necessarily help the rest of us?

As for where all of this might apply to anti-natalist concerns: the anti-natalists don't have much to worry about.  It looks like a demographic implosion in the First World is already going to happen on its own.
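Out of curiosity, the "mathematical tipping point" arithmetic quoted above does check out. A quick sketch in Python, assuming each generation's birth cohort scales by the ratio of fertility to the 2.1 replacement rate; the 26-year generation length is my own assumption, not a figure from the review:

```python
import math

REPLACEMENT_TFR = 2.1    # children per woman needed for a stable population
GENERATION_YEARS = 26    # assumed average generation length (my assumption)

def years_to_halve(tfr):
    """Rough years for a population to fall by half at a constant sub-replacement TFR.

    Each generation the birth cohort scales by tfr / 2.1, so we solve
    (tfr / 2.1) ** n = 0.5 for n generations, then convert to years.
    """
    ratio = tfr / REPLACEMENT_TFR
    n_generations = math.log(0.5) / math.log(ratio)
    return n_generations * GENERATION_YEARS

print(round(years_to_halve(1.4)))   # Japan and Italy's 1.4 TFR: roughly 45 years
```

With these assumptions a 1.4 TFR halves a population in about 44-45 years, close to the figure Last cites, and Germany's 1.36 comes out only a few years faster.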

The Art Newspaper reports "Artists are getting poorer"

The majority of artists in the UK earn less than £5,000 a year after tax—and below $10,000 in the US—according to a survey of 1,533 practitioners, conducted this month by the online artist-to-market website Artfinder.
Dubbed the “Artist Income Project”, this first survey of anonymous data across several platforms, including Artfinder, finds “artists are still not paid fairly. Or at all.” All the surveyed artists are described as “independent”, meaning they sell their art directly, rather than through a gallery.
In the UK survey of 823 artists, 55.1% say they earn between £1,000 and £5,000 net per year while 17.7% earn between £5,000 and £10,000. At the raw end, 9.3% of UK artists state their income as zero. This combined figure of 82.1% is worse than the findings of a previous survey of 1,061 artists, conducted by a-n, an artist data company, which in 2013 found that 72% of artists earned under £10,000.
Of the US respondents, 75.2% make less than $10,000, with the majority (48.7%) in the $1,000 to $5,000 bracket; 5.1% in the US stated their income as nothing. Based on the 98% of artists who stated their gender, female artists across both geographies fared marginally worse than their male counterparts: 83.6% earned under £10,000 (versus 77% men).
This survey may only be a small slice of the whole, but its findings ring true, says Rob Pepper, the recently appointed principal of London’s The Art Academy school—and a practicing artist. “I used to believe that winning prizes was a mark of success for an artist but am now of the opinion that simply living as an artist is the greatest achievement,” he says.


This would be, it seems, applicable to artists whose work would end up in a gallery, not commercial art/advertising or commercial art/comics/animation as such.

I mention that because animation is an art form I've loved my whole life and yet as academic and critical establishment consensus goes, almost nothing is taken less seriously as legitimate art than animation unless we're talking about Christian (TM) film. 

But the thing is you can chart the path of a name like Lauren Faust, who did some work on Brad Bird's The Iron Giant, went on to The Powerpuff Girls and Foster's Home for Imaginary Friends, and eventually became executive producer of My Little Pony: Friendship is Magic.  If mainstream arts journalism cared at all about stories of girl power and accomplishment by women, I would think the story of Lauren Faust and her work would be inspiring.  The only reason I even bothered to watch any of MLP: FiM is precisely because I heard Faust was helming the project, and the first couple of seasons I saw were actually pretty good.

But, yes, we're talking about a cartoon associated with toys sold by Hasbro.  But when television critics deign to talk about animated shows at all they reliably gravitate toward a more Comedy Central/Adult Swim vibe of the sort you get with South Park, Archer, Family Guy or Rick & Morty.
I can find positive things to say about three of those four shows, but that's not necessarily the point.  The point is that the American prejudice within the film and entertainment industry and its associated critical apparatus is that animation is kid stuff, and that even when it's not kid stuff it's somehow more juvenile than kid stuff, and therefore not really worth considering, even though just about every parent in the entirety of the United States has probably gotten songs stuck in their heads from movies they felt obliged to take their kids to see.

That's the kind of art I'm most curious to consider, because that's the stuff that shapes the moral imagination of a generation before they decide they're too "grown up" to keep taking it seriously, without realizing that their generational moral compass has been shaped by the kid stuff they think they can just leave behind.

If it were that easy to leave behind that kind of moralizing approach, all that Transformers and G. I. Joe and so on, would we have people behaving on Twitter the way they do?  I wonder about that lately.

If artists keep getting poorer, the prospect of art school seems more and more foolhardy.  I'm not saying that as someone who's anti-art.  I love animation as a medium, and it's mainly because my eyes have been so bad that I never stuck with visual media myself.  I might even eventually write something or other about Satoshi Kon.  All that said, the Western liberal arts art religion seems more and more ghastly.  I never much liked it even as a teenager, and I consider it more and more disastrous as I day-job my way through middle age.  If the return on investment of an arts education is as disastrously bad as is continually being reported, maybe we should tell people to stop investing in formal education in the liberal arts.

Are there alternatives?  Sure.  Are there monetized alternatives replete with social capital and rock star level compensation?  Doubtful.  Of course as a Christian (not always or even very often a good Christian, I'm afraid) I can say that what inspires me to keep working in the arts is that it's an expression of religious belief.  Even the matter of attempting to engineer a musical/linguistic idiom in which the boundaries between the vocabulary of pop and art music can be collapsed, what you might call the class distinctions within musical idioms, is motivated by religious reflection.  If in Christ there is no Jew or Greek, slave or free, male or female, then in Christ there is no high or low, indie or mainstream, art or pop.  These categories keep existing in the real world, of course, but they do not have to be regarded as having impermeable boundaries.  If Christ reconciles all things to and through Himself why should musical styles be exempt from this work of reconciliation?

If the Cold War showed us anything, it was that secularist ideologies along the lines of communism and neoliberal capitalism were insufficient to bridge some of the pressing social and ethnic divides of the Cold War era.  The Balkans fairly literally exploded with conflicts that neither generations of Soviet repression nor Western liberalism could adequately tamp down, tensions which included religious differences among others.  One of the things that writers such as Leonard B. Meyer and George Rochberg (also a composer) said was characteristic of the 20th century was a crisis of how to deal with pluralism in society and pluralism in the arts.  Rochberg was, at the risk of keeping things too simple, part of a generation who saw the end of the Romantic arts culture as a kind of fixed end.  He wasn't postmodern as we'd understand it in ideological terms, but he was postmodern in the sense that he turned back to Beethoven and Mahler once he felt that the post-Webern path of serialism had proved to be an emotional and spiritual dead end.

Meyer concluded that there was likely to be no truly revolutionary new style or idiom in the arts but we'd see a panoply of experiments with stylistic fusions.  Because Meyer's interest in the arts could span not just Western modernism but also Egyptian and Indian art he could see that trajectories of innovation and revolution taken for granted in Western art were not really normative in the majority of Western history itself, let alone the world over.  He'd end up writing a giant and fascinating book on Romanticism I might have to write about later.  To keep a long story short, Meyer could see that what others like Rochberg or Adorno might have seen as a dead end for post-Romantic Western art was not necessarily a dead end, we just can't really know in advance what cycle of artistic consolidation might come next.  Arguably, to go by recent sales announcements, we're living in the musical era of hip hop. 

But the ideal of an art that has a teleological newness and can't be pinned down to a past hasn't died off just yet.  Even in an era in which sampling is increasingly routine, and in which our collective understanding is that at a historical and cognitive level everything we do is in some sense a "remix," the cult of the Romantic Byronic art hero hasn't died off quite yet.  If something could, God willing, die off for a while in the post-Weinstein moment, that cult of the art-priest/art-god would be a good candidate; I'd be glad to see it die within my lifetime.  Restoring a relationship between art and craft would be nice.  It's always been there, of course, but not all aspects of our Western mythologies about what art supposedly does allow for this.

When Adorno wrote half a century ago that art was in crisis, and that the very legitimacy of the reason art should exist at all was part of that crisis, it's pretty clear almost nobody bought the idea.  Adorno's rather long, opaque and highly argumentative Aesthetic Theory proposed, among many, many other things, that once art became completely autonomous from religious custom, cult and practice, though this autonomy was necessary for art to become art, it was a status attained at the cost of any certainty about the legitimacy of its own existence.  Though an art work is always a thing in itself to be analyzed based on what it unites and explores, the art work was always also a social creation made for an other.

I might have to write more about that later, but that will take time.  Adorno's writing is anything but breezy.  Adorno, too, was trapped in the ideologies and mythologies about the arts characteristic of both the Enlightenment and Romanticism, but I don't have the time or energy to get to the trilogy of books David Roberts wrote on that topic, one of which he co-authored with Peter Murphy.  The short version is that Adorno was trapped inside the dialectic of enlightenment/romanticism as the dual legacy of European modernism, and that this led him to reject the entirety of Western civilization at a level that showed he was still trapped in Romantic ideologies.  An irony is that Igor Stravinsky's embrace of "obsolete" earlier styles made Stravinsky one of the first postmodernists in a stylistic sense, even if Stravinsky's religious convictions were old-school Russian Orthodox.

Now I've camped out a little, mainly by way of allusion and implication, in some posts on how Christians who don't have to feel beholden to either of the Cold War ideologies have an opportunity to embrace a kind of stylistic pluralism that the two respective sides of the Cold War in many respects failed to attain.  We may be on the cusp of a polystylistic morass in between cycles of stabilized styles in the arts; the last time we had roughly two centuries of that, it was colloquially known in Western art history as the Baroque era.  The shame of folks like Adorno or Schaeffer, on either side of the left/right divide, is that they fixated on the later Baroque era as a synecdoche for the whole tumultuous, transition-filled period, a period in which Renaissance practice collapsed and new forms of pitch organization emerged alongside innovations in temperament and tuning.  If Christians wonder what Christians can do in this era, we could try looking at the early and middle Baroque periods.  Rather than take the pessimistic, reactionary mythologizing of a lost ars perfecta period such as the "long 19th century" in musical art or literature, which was arguably the weakness of both Francis Schaeffer and Theodor Adorno in different ways, we could play with the idea, contrary to old-left elitism and reactionary-right nostalgia alike, that the Christian scriptural message that every knee shall bow and every tongue confess can and should be construed as an invitation to polystylistic appreciation of the love and work of Christ across all artistic styles and media.

You can still work in the arts, of course, but the art religion we've gotten in the West has been pretty determined to replace the religious beliefs that used to inspire the arts.  As noted earlier, given the lower and lower return on investment for learning the ways of that art religion in Western arts education, it's hard to see how or why that's entirely worth the time these days.

at LARB Jacqui Shine on millennials and the "kids these days" complaint, proposing that millennials will be the first generation materially poorer than their parents ... Scott Timberg would protest that Gen X got to that space first

Nearly 20 years later, as the youngest millennials reach adulthood, we’re stuck with the stereotypes Strauss and Howe proffered, and not much else. Millennials are said to be a generation of tech-obsessed narcissists whose failure to match, much less exceed, our parents’ economic success is evidence of poor moral fiber. We think we’re special; we’re too sheltered; we’re too conventional; and we certainly aren’t achieving enough to warrant our wild overconfidence. (Full disclosure: I’m a millennial.) If that’s so, says Malcolm Harris, it certainly didn’t happen by accident. In his new book Kids These Days: Human Capital and the Making of Millennials, he warns that we ought to take the historical formation of this cohort seriously, because it represents a single point of failure for a society veering toward oligarchy and/or dystopia. We will either become “the first generation of true American fascists” or “the first generation of successful American revolutionaries.”
No pressure, though.
As a member of Generation X who was once part of Mars Hill I'd venture the admittedly pessimistic proposal that we're going to get dystopia and oligarchy all in one go.  Why does it have to be either/or? It's the easiest thing in the world for revolutionaries to also turn out to be fascists.  Adorno's condemnation of the New Left he saw taking shape in the United States was that, in sum, he regarded them as no less totalitarian than the fascists he fled from in Europe in the first half of the 20th century.  The identitarian politicking of American college students and professors we're seeing isn't so far removed from a neo-pagan religion of blood and soil as to be altogether different.  But let's get back to the article.
In November 2011, just as the Occupy Wall Street protests were winding down, two progressive think tanks jointly published a study called “The Economic State of Young America,” which reported that millennials were likely to be the first generation of Americans who were less economically successful than their parents had been. This news unleashed something between a moral panic and a national identity crisis, one that’s only sort of about the material conditions of millennials’ daily lives or the documented effects of growing wealth inequality on the health of our democracy. Someone or something, it seems, had killed the American dream: the idea not only that hard work will be rewarded with social mobility and economic prosperity, but also that justly earned wealth will grow exponentially across generations.
But who was to blame? Was the problem that millennials have failed to live up to the economic challenges that previous generations of Americans had always met, or was it that their parents and grandparents had failed to deliver them a world in which success was possible? Harris, for his part, thinks the answer is clear: “Every authority from moms to presidents told Millennials to accumulate as much human capital as we could and we did, but the market hasn’t held up its side of the bargain. What gives? And why did we make this bargain in the first place?” (Human capital, in Harris’s usage, refers to “the present value of a person’s future earnings”; “the ‘capital’ part of ‘human capital’ means that, when we use this term, we’re thinking of people as tools in a larger production process.”) 
Harris’s thesis is simple: young people are doing more and getting less in a society that has incentivized their labor with the promise of a fair shake, and older generations are profiting handsomely from the breach of contract. He doesn’t express it this clearly, though, in part because he is hamstrung by the book’s framing, which is detrimental to his argument (for reasons I’ll explain later). But it also makes Kids These Days an interesting artifact in its own right. It reveals something about how badly we want to believe that we all belong to a bigger American story, and about how essential that belief is to the maintenance of a capitalist regime that maximizes our labor and diminishes our lives.
Well, that can certainly be one of the mythologies, but another is surely that humanity can be redeemed by art religion, or by the meta-religion within art religion that is arts criticism, of the sort we can read from A. O. Scott in Better Living Through Criticism ... or maybe also stuff published at the LA Review of Books. ;)
Some of the analysis in Kids These Days is pretty impressive. In the book’s first two chapters, Harris maps the effects of a hyper-capitalized youth control complex that formed, he argues, in the last two decades of the 20th century. At every level, Harris thinks, the American education system is either a workplace or a profit machine. The highlight of the book is its admirably lucid précis of higher education, the student debt crisis, and the institutional wealth accumulation it fuels. Harris makes clear that higher ed has become a debt machine that profits everyone except students. While outsourcing and labor casualization have cut expenses, price tags at four-year schools have jumped 200 percent or more, and administrators seem to have multiplied like gerbils: an increase wildly out of proportion to the rollback of public funding over the last 30 years. That’s where student loans come in. They represent over $100 billion a year in government funding to schools and, over time, huge returns for the feds. The $140 billion in federal student loans issued in 2014, Harris says, will eventually net a $25 billion profit.
The strength of this argument is that Harris doesn’t try to frame the analysis in the context of the millennial generation. While he briefly discusses the federal government’s failure to offer meaningful relief for loans in repayment — including the observation that the Obama administration’s vaunted reforms amounted to very little — he doesn’t say all that much about the experiences of millennial student debtors, or how they’re distinct from those of Generation X or other cohorts. One can certainly imagine how that piece comes into play without his indulging in the generational grandstanding that otherwise appears throughout the book.
Scott Timberg has been blogging his frustration at how he sees Generation X as having been the first generation to be materially worse off than the generation that birthed it, but the way Generation X has been let down by the "promise" has been in such a quotidian and tedious fashion that it's simply not newsworthy.  
Ultimately, though, the most frustrating thing about Kids These Days is how Harris keeps coming back to that broken promise framing, encapsulated in those blunt rhetorical questions quoted above: “[T]he market hasn’t held up its side of the bargain. What gives? And why did we make this bargain in the first place?” As a millennial might say, great questions. But the answer to the first has very little to do with millennials per se and everything to do with a set of historical and economic forces that lurched into operation long before 1980. The game was rigged from the start, and the prize was never real. The answer to the second taps into a much, much bigger and more important problem about how the deceptive rhetoric of the American dream fuels our exploitation, and prompts a third question: why are we so surprised we got scammed?
Millennials aren't necessarily old enough yet to look back on how "we" did or did not get scammed.  However, Generation X has certainly become old enough to consider it. Looking back on how and why I at one point believed that Mars Hill Church was attempting to be a positive influence in the Puget Sound area, the sales pitch was always pretty plain: legacy.  Guys like Driscoll make legacy the pitch.  Not just guys; men and women who want to change things for the better use legacy as a sales pitch to invite people to be part of something that makes the world or the city or the region better than it was before. 
The scam, so to speak, is not the legacy stuff itself; it's the checklist of hoops you have to jump through to gain a legacy, and how you have to jump to get through them.  If we want to articulate the nature of the scam, it's there.  One of my friends from college told me he felt like there was this promise that if you just went and got the advanced degree a great well-paying job would be waiting for you, and it turned out to be a sham.  This is an intra-Generation X dialogue, so experience and mileage will vary, but I'd venture to say my friend put it well.  Generation X may be the first to be confronted with the reality that the "dream" was a scam ... but a guy like Driscoll might be a case study of someone who was determined to chase the realization of that dream and got it to "work".  Maybe a guy like Driscoll really believes sheer work and gumption account for his current position, but they don't.  Driscoll name-dropped just enough names that, whether it was a Ken Hutcherson or a Gregg Kappas or a David Nicholas or a Mike Gunn or a Leif Moi or a Jon Phelps, there were a variety of men who were older and further along who believed in him and invested in him, and that is how he was able to rise to prominence.  Which is a way of saying that people like Driscoll managed to rise, while the sales pitch of the American mythology has it that it's your effort, and not the generosity of a patronage system, that gets you where you've gotten. 
What's different these days for millennials?  Well, at the risk of being a jerk about it, earlier generations were probably more steeped in a residual Social Gospel in blue state areas, and in a more conventional evangelicalism, than in the Word of Faith prosperity gospel teaching or the post-Ayn Rand objectivist mentality.  As Sherman Alexie's complaint has it, this newer generation of billionaires may be less racist and less prone to killing Indians or owning slaves, but they're also less evidently concerned with philanthropy. 
Then again, to go by what's been burbling up to the surface in Hollywood and other industries, one of the creepy things we're discovering is that a lot of old-dude philanthropy that looks altruistic at first may have come with strings attached. 
Let me take a shot at that one. I read with incredulity Harris’s suggestion that “[a] look at the evidence shows that the curve we’re on is not the one we’ve been told about, the one that bends toward justice.” He’s referring, of course, to Martin Luther King Jr.’s much-quoted maxim about the arc of the moral universe, which apparently conflicts with some unspecified “evidence” demonstrating that millennials have been denied their rightful economic and cultural inheritance. But conflating matters of moral justice with rising material success implies a frankly impoverished vision of, and for, American life. Nor does this strike me as a book that’s especially concerned with economic justice in more than a facile way, unless you’re of the “rising tide lifts all boats” school.
To be fair, it’s not like Harris is alone here. “We were promised something and we didn’t get it” is not just a millennial refrain: it’s a shared American delusion. But a dream is not an entitlement. The idea that entry into a stable middle class is some sort of American national birthright is ahistorical; that it ever seemed possible may prove to be epiphenomenal. The American middle class to which we were supposed to aspire was vanishingly short-lived, and it was certainly never uniformly accessible.
... and particularly not really accessible if you weren't white.
But it's at this point that I wonder whether or not writers on the left need to get more settled on the question of whether the middle class is enemy or victim in the class warfare narrative.  If it's bad that we're losing the middle class because of rising income inequality which middle class are we berating in the condemnation of bourgeois culture?  The upper 20% perhaps?  That would make sense, but at another level the art guilds seem determined to blast the dread conformity of the suburbs among white cultures even though one of the legitimate complaints that has arisen from the black community has been that they're stuck in the ghetto.  If the suburbs are so deadly in their stifling conformity who wants to live there?  What if the art guild condemnation of the conformity of suburban life itself is another mythology?  I heard it attributed to Robert Fripp that he liked having a mundane and ordinary life so he could be adventurous in his music. 

One of the things Jake Meador brought up in some blogging he did at Mere Orthodoxy is that it seems a generation was promised something that hasn't been delivered.  One of many responses to that blogging was to say that the market didn't really promise anything.  That may be to willfully misunderstand the point Generation X and onward have been making about the prescribed script for "making it" in American society.  The script is that you get through high school and then get more and more advanced degrees, and your level of success, the level of capital, social and actual, that you accumulate along the way, is supposed to be a direct reflection of the amount of time and money you invested in your education once you've gotten past the routine of public high school, etc. 

But it's been turning out that even if you jump through all those hoops you've been told you need to jump through, assuming you can even jump through them all, the corresponding stability in job and social life isn't forthcoming. 

The early Mars Hill Church was a place where "life together" and other ideals had associations with a proposal of coming up with a different way of arriving at social cohesion and working life than what is colloquially known as the American dream.  The goals didn't all have to be arrived at in precisely the same way as formally prescribed.  Driscoll played himself against the parents of Generation X pretty astutely, as was noted in an article mentioning him published in Mother Jones almost twenty years ago. The idea was that not everyone had sold out to the American Dream.

That was twenty years ago, and by about 2007 more than just a handful of guys in leadership at Mars Hill were probably sold out to the American Dream.  As men in the leadership culture of Mars Hill became better able to check off the items on the "attained the American Dream" list, it was easier to shift the understanding of how to get "there" away from the alternatives embraced in the earlier Mars Hill period (such as communal living, which is hardly a new thing) and toward a checklist of being "grown up".  This became more and more emphatic and patently obvious in Mark Driscoll's teaching and preaching as time went on.  It was always one of his fixations, but it became more apparent that how you got all the bullet points was at least as important as getting to all of them.  A community in which men and women simply didn't bother with attaining those goals, because for them it wasn't realistically attainable, probably stopped being important.

So, yeah, I think I can confidently say Generation X has had plenty of time to figure out that the American Dream was a kind of cultural shell game.  A lot of what passes for "kids these days" complaint can come from a generation of men (and also some women) who may not realize the extent to which the patronage between generations that was taken for granted in earlier epochs isn't necessarily available in the same way. 

Conversely, the generation that raised or helped raise millennials and regards them as full of narcissistic apathy might not be that eager to concede that somebody raised that generation in a way that bears at least some culpability for those generational vices.  Think about it this way: what did we think we might get from a generation thirty years on from Star Wars meeting Joseph Campbell's Hero's Journey of self-actualization?  What did we think might be the outcome of generations raised on Disney princesses who are Jane Galts, stopping the motor of the world by not bothering to jump through the hoops of what's expected of them?  What were we thinking would be the outcome if the baseline of American pop culture mythology is that the rules bind everyone but you?  It's hard to really find fault with millennials for their vices when I survey the last few generations of pop culture.  They got the message that was sold to them and their parents, didn't they? 

Friday, December 01, 2017

a piece about academic precarity proposing that contemporary academia in the United States is a shell game, with some sideways mulling over any connection this might have to celebrity Christians puffing up their academic credentials or the credibility of ideas they claim to have

By way of a brief prelude, I've got a relative who couldn't complete an undergraduate degree because the required final courses turned out to be reserved for underclassmen, so that upperclassmen couldn't even sign up for them; graduation then wasn't feasible because the outstanding courses couldn't be taken at the level at which students were expected to take them.  So, in a phrase, a relative of mine couldn't graduate on account of, among other things, a bureaucratic double bind. 
Said relative has been more or less a lifelong fan of military history, and not just the sexy stuff about this or that battle but also the vital role logistics and food distribution can play in how and why some battles are won or lost. If the saying goes that only one or two of ten soldiers is in the actual shooting war, my relative has an interest in what the other eight have to do.  That's a bit of context for a proposal: the military industrial complex is the obvious complex to worry about, but said relative has begun to muse that we're too apt to hear people complain about the military industrial complex when there's a prison industrial complex and an educational industrial complex, too.
There are folks coming out the other side of attempted lives in academia who are saying that, on the other side of it, it's hard not to see the whole thing in its current form as a racket.  It will be easy enough for academics with a certain amount of job stability to blame our contemporary moment on anti-intellectualism, but if the people inclined to say that have anything close to an Ivy League education it might be wise not to cast that blame too readily, because, after all, riffs on how getting X education isn't proof of Y level of intellect go back in American letters at least to F. Scott Fitzgerald, don't they?   So with all that preliminary ...
Nov 8 2017
I am writing this from a place of mourning. I am writing this from a place of anger.
After four years of working academic precarity, I have left behind a career that I thought embodied my life’s work. Right now, I do hold tremendous regrets.
The specifics of my story illuminate the ways that academic precarity destroys careers, compromises the integrity of higher education and research, and reaps critical resources from those of us committed to this work; as I now stand, I have no idea how I’ll bounce back from what I experienced during my four years as a Visiting Assistant Professor at two R-1 state schools. I’ve moved four times in the past five years and need to reestablish community before I can heal. I need to establish myself and to locate an identity that makes sense to me. There were plenty of good people at these institutions; this piece makes no claims otherwise, nor does it render any specific claims against identifiable people or institutions. This is about me, my voice, and my process in the context of the neoliberal institution of academic precarity. I've let the other stuff go.
Academic precarity is the year-to-year or class-to-class, contingent, underpaid and labor-intensive employment status most Ph.D.s now have to navigate while seeking a protected tenure-track position. After, say, eight years of graduate school, this tacks on another two to four to ten years at a $20-$40,000/year salary. We have crossed over into our thirties and forties in sustained poverty, now separated from our graduate communities and parceled into departments and towns in which we have no belonging or protection. All the while, we must stay on the academic job market, an extremely demanding labor that costs up to 800 unpaid hours a year and expensive attendance at conferences and interviews. These jobs are unprotected, shorter-term, and often require moving to far-flung college towns from year to year. The precariat is charged with developing entire new courses on short notice (I developed two dozen brand new courses in four years, most of which I would only teach once), teaching large classes of students whom they'll never see again, and biting their nails in hopes this will be the year they are going to get chosen.
They are more than thirsty: they have been drawn into the academic shell game long enough and far enough to have, semester by semester, staked their financial, physical, familial and mental health on it. Yet with every passing day, they see that the career they had invested immense loans and a decade of work to build is hostile, empty, and dangerous to the most vulnerable in the Wild West of rapid defunding and administrative power grabs. This is not because they are suckers; this is because higher education is in undeniable crisis. It is imploding, suddenly, leaving us scrambling to understand the circumstances in which our lives are unfolding.
We have been trying to tell you. We see it in sharper focus than anyone. In order to survive, we have had to become anthropologists, understanding the deeper tides and features of contemporary Higher Ed while so many faculty and admins bury their heads in the sand. Our students are suffering terribly. A year or two ago, we were those very students.
And, I often feel, the worst of all: the theft of my original work by men who knew that I would be stuck in limbo long enough for them to spoon lovingly-wrought phrases and theories and keywords out of my dissertation and into their manuscripts, to dial up the artists with whom I had collaborated for years to inject them into their own projects, or the pilfered syllabus and grant proposal that rendered my own contribution to my department immaterial. The theft of my labor when I was chosen by my senior colleagues to pour dozens or hundreds of hours into the panel, conference, or issue “because it was good for my CV.” [emphases added] I consider this time stolen from my kid.
What was my recourse? I was on the market. Again. And I should have been grateful for the opportunity to shine, again. All I needed to do was push through in time to get my work out there. It would speak for itself. It would have spoken for itself. But I didn’t get the jobs: not the ones I was promised outright as a verbal component of my hire, nor the ones the list of merits on my CV told me I deserved.

The stuff about appropriation of material reminds me that back when Mark Driscoll's books were the subject of controversy regarding intellectual property and plagiarism, a few evangelicals I knew of were seriously trying to argue that intellectual property isn't even really a Christian concept and shouldn't exist.  Well, if that's so, then here's hoping those sorts of Christians don't have any interest in being taken seriously for the life of the mind at all, ever again, in the present or future. 
Back when I was in my twenties I wanted to get into academics and the more time goes by the more grateful I am that I couldn't and, as I hope regular readers of this blog will appreciate, that's got nothing to do with a lack of affection for learning or study.  Elements of Sonata Theory was one of the best books I've read in the last three years and Jacques Ellul's Propaganda is a book I regard as a must-read for anyone who would attempt to grapple with the history and implications of the rise and fall of Mars Hill Church.  I'm still working through Adorno's Aesthetic Theory and I have more than a few objections to a few things he wrote in that but it's still interesting to read what he had to say. I'm throwing in the newer translation of Philosophy of New Music to boot.  Even though I think Schoenberg's music has been ignored by a super-majority of the human population with good cause I do think Adorno was right to highlight how many music theorists and musicians misunderstand the nature of Bach's counterpoint.

Besides, the extent to which mainstream American journalists name-drop the Frankfurt School while pretending that Adorno's later writings didn't happen might be a red flag.  If Adorno's writings often come across as more than faintly elitist and racist, and obsessed with 19th century culture as a kind of ars perfecta, it's because he really does come off that way to a 21st century American, even a moderately conservative one. 

So while I can't say I'm really on board with Adorno's Marxism, and he still comes off like an elitist, even more than vaguely racist about black music in America, too many journalists covering the arts scene in the United States have name-dropped the Frankfurt School in the last five years for me to just ignore these writers.  It's not too difficult to see why more people prefer Walter Benjamin, though a handful of folks really, really believe Adorno was one of the great minds of the 20th century.  Maybe he was, but I'll still take any track by Thelonious Monk over most Schoenberg any day.  I do love some work by Webern and Berg, though. 

Were Adorno around I have this impression he'd regard what Western academia has become as itself being just another extension of the culture industry.  Not to say real education can't or doesn't happen in colleges these days, but the more time goes by the more it seems Adorno was right to regard a lot about the New Left as being as ultimately totalitarian as the fascists.  And yet, weirdly, I can't recall any American journalists mentioning this about Adorno, though a few British journalists and writers have made a point of it. 

Kinda reminded me of this.
We cannot blame this professional anemia on scarce funding. The largest adjunct-faculty increases have taken place during periods of economic growth, and high university endowments do not diminish adjunctification. Harvard has steadily increased its adjunct faculty over the past four decades, and its endowment is $35.7 billion. This is larger than the GDP of a majority of the world’s countries.

The truth is that teaching is a diminishing priority in universities. Years of AAUP reports indicate that budgets for instruction are proportionally shrinking. Universities now devote less than one-third of their expenditures to instruction. Meanwhile, administrative positions have increased at more than 10 times the rate of tenured faculty positions. Sports and amenities are much more fun.

But the problem goes deeper than administration as well. It’s systemic. The key feature of adjunctification is a form of labor-market polarization. The desirability of elite faculty positions doesn’t just correlate with worsening adjunct conditions; it helps create the worsening conditions. The prospect of intellectual freedom, job security, and a life devoted to literature, combined with the urge to recoup a doctoral degree’s investment of time, gives young scholars a strong incentive to continue pursuing tenure-track jobs while selling their plasma on Tuesdays and Thursdays.

This incentive generates a labor surplus that depresses wages. Yet academia is uniquely culpable. Unlike the typical labor surplus created by demographic shifts or technological changes, the humanities almost unilaterally controls its own labor market. [emphasis added] New faculty come from a pool of candidates that the academy itself creates, and that pool is overflowing. According to the most recent MLA jobs report, there were only 361 assistant professor tenure-track job openings in all fields of English literature in 2014-15. The number of Ph.D. recipients in English that year was 1,183. Many rejected candidates return to the job market year after year and compound the surplus.

It gets worse. From 2008 to 2014, tenure-track English-department jobs declined 43 percent. This year there are, by my count, only 173 entry-level tenure-track job openings — fewer than half of the opportunities just two years ago. If history is any guide, there will be about nine times as many new Ph.D.s this year as there are jobs. [emphasis added] One might think that the years-long plunge in employment would compel doctoral programs to reduce their numbers of candidates, but the opposite is happening. From the Great Recession to 2014, U.S. universities awarded 10 percent more English Ph.D.s. In the humanities as a whole, doctorates are up 12 percent.

Why? Why are professional humanists so indifferent to these people? Why do our nation’s English departments consistently accept several times as many graduate students as their bespoke job market can sustain? English departments are the only employers demanding the credentials that English doctoral programs produce. So why do we invite young scholars to spend an average of nearly 10 years grading papers, teaching classes, writing dissertations, and training for jobs that don’t actually exist? English departments do this because graduate students are the most important element of the academy’s polarized labor market. They confer departmental prestige. They justify the continuation of tenure lines, and they guarantee a labor surplus that provides the cheap, flexible labor that universities want. [emphasis added]
Then there was this from a blogger I read moderately regularly.

The adjunct crisis exists because too many departments have too many PhD students. The only cure is for departments to offer PhD’s for the number of jobs there actually are.
Creating 500 PhD holders when there are only 30 positions suitable for those PhD’s is not only immoral, it is driven purely by economic considerations on the part of the University.

The other thing that comes to mind is that a few years ago I wrote about a book called Decomposition that I found disappointing.  It was billed as a manifesto about music but it wasn't really a manifesto or exactly a philosophy of music.  It was one guy in Portland who wrote a dissertation for a PhD in English, if memory serves.  I appreciated getting an introduction to the player piano etudes of Conlon Nancarrow, but a lot of the book was a morass of academic jargon that isn't even characteristic of what the author writes at his blog.  It was as though the scholar-author hijacked the body of the musician-author and made something in boilerplate academic jargon that didn't need to be.  I'm not against jargon when it needs to be used.  There's a reason to say someone is an Amyraldian on soteriology and get into what that entails and why Reformed people get up in arms about it, but only when it's necessary.  Coining a bit of jargon called "contextual collaboration" when "voluntarily pandering to an eager audience" says more or less the same thing more clearly is odd, but it makes a comment Kyle Gann made at his blog a few years ago seem salient: grad school has as a secondary aim transforming people into bad writers who use tortured prose to make arguments they could otherwise have made in ordinary English.

Education doesn't have to be a shell game, but more and more people within education seem to be concerned that it is one.  Giving accreditation to trade schools (proposed by a commenter here) sounds like a good idea to me.  Charles Ives seemed to be of a mind that those who work in music might benefit from having more real-world jobs every so often.  Were he alive today he might suggest that those who wish to be professional musicians should have normal day jobs, working as an electrician or a salesperson or a clerk or something, whatever the workaday work might be.  As I get older I find it's hard not to sympathize with the idea that musicians should retain some kind of affinity for working class people.  It's true Ives wrote a lot of music that was deliberately challenging and kind of crazy, but he peppered his music with recognizable hymns, popular songs, folk tunes and works from the classical music canon. 

What's weird to think about is that in the era of the internet more and more information is available online and in libraries, and more literature and art and music has passed into the public domain for public access than at any point in the recorded history of humanity.  Yet as the reactions of white liberals to the "Blurred Lines" verdict showed a few years back, we live in an era in which a majority of popular contemporary culture is licensed and copyrighted.  Rather than dig back into the sea of remarkable public domain material, it seems people can prefer to just object to intellectual property on what they regard as moral grounds. 

When I've heard self-described conservative Christians and libertarians argue that intellectual property is claiming ownership of ideas I tell them these are idiotic, dishonest and incoherent arguments.  Copyright isn't even really a claim of "ownership" of ideas, it's a legally recognizable testimony regarding the product of labor and the parameters of its use on the market.  Now I've seen Marxist arguments for why contemporary copyright and patent and licensing laws are exploitive and draconian but I haven't seen any coherent libertarian arguments yet for why this legal concept of intellectual property shouldn't exist. 

The sorts of people who make those kinds of arguments very often seem never to have stopped to consider that if you take away the one legal concept by which musicians, writers and artists can make a legitimate legal claim, within the current economic system, to ever being paid at all, you take away the incentive to do anything.  Tech companies faced with the prospect of having anything they do instantly become public domain might not have as big an incentive to innovate if it's possible to reverse-engineer a product and immediately put more of it on the market.  This isn't to say there aren't problems with the current regime; it's to say that the people I've come across who think it makes more sense to argue against the legitimacy of the legal concept altogether than to bother with reforms are reminders of why I regard the libertarian anthropology as completely idiotic from its foundation on up. 

But to get to a practical example that caught my attention recently.

...  The severely diminished role of record labels as gatekeepers for new music means that just about anyone with the performing skills, time, money, and marketing savvy needed to produce and promote an album can do so with a much smaller number of middlemen than before. Of course, there is a “flip side:” while it is perhaps easier than ever to produce a professional quality album, it is harder than ever to make a reasonable amount of money from doing so. Fewer listeners than ever before actually buy whole albums, much less albums distributed on hard copies like CDs, and royalties paid even from legal download sites and streaming services are extremely small, on the order of a fraction of a cent per play. Artists who once made thousands of dollars per night selling recordings after concerts now make practically nothing in that way; solo recordings have become little more than expensive business cards. I don’t even have to look to other people’s examples to illustrate this. My own solo recording, which cost well over $10,000 to make, has yielded less than $300 in royalties for me since it was released in 2015. Of course, my reasons for creating an album were as much academic as artistic—the target audience was teachers and students, and my university provided the lion’s share of funding for the project. I never expected to make money. Had I needed to make a profit—or even break even—for the recording to be considered a success, I would have never done it in the first place. If there ever was much money to be made in recording brass music, there isn’t any longer.

$10,000-plus in costs against less than $300 in royalties hardly seems like a monetizable deal.
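The quoted figures invite a quick back-of-envelope check. The cost and royalty numbers below come from the quoted post; the per-play rate is only an illustrative assumption on my part, pegged to the "fraction of a cent per play" the author mentions, not a reported figure.

```python
# Back-of-envelope: how many streams would it take to recoup recording costs?
# The $10,000 cost and ~$300 royalty figures come from the quoted post;
# the per-play royalty rate is an assumed, illustrative number.

recording_cost = 10_000      # dollars spent producing the album (reported)
royalties_to_date = 300      # dollars earned since the 2015 release (reported, "less than")
per_play_royalty = 0.004     # dollars per stream (assumption: "a fraction of a cent")

plays_so_far = royalties_to_date / per_play_royalty
plays_to_break_even = recording_cost / per_play_royalty

print(f"plays so far: ~{plays_so_far:,.0f}")
print(f"plays needed to break even: ~{plays_to_break_even:,.0f}")
```

At that assumed rate the album would have logged on the order of 75,000 plays so far and would need roughly 2.5 million plays just to break even, which is the point the quoted author is making: a solo recording is an expensive business card, not a revenue stream.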
In classical guitar music the recently departed Matanya Ophee used to complain (with cause) that even a lot of good music written for the guitar was so played to death it was transformed into what he called lollipops, Adorno's word would have been kitsch.  Given the animosity expressed by other concert music practitioners against the guitar a lot of classical guitar music that would have been regarded as unmusical garbage beneath contempt a century ago can get considered more serious art than, say, whatever Justin Bieber's up to lately. 

Seeing numbers like the above, I'm reminded that among the music students I used to hang out with it never seemed to be in question whether they'd find work or whether it would be worthwhile to make recordings of music.  For the artistic satisfaction of doing so, of course, there's plenty of reason to perform and record music.  But more and more the prospect of "making a living" doing that stuff seems low.  Charles Ives, of course, was an insurance salesman rather than what's known as a vocational artist.  Ives also did all his composing in an era before corporate juggernauts roughly the size of the Walt Disney Company were able to campaign to change copyright laws and practices to the point where someone like Ives couldn't have quoted as voraciously from existing music as he did without getting sued.  And yet it seems more popular with a stratum of self-identifying libertarians and conservative Christians to just argue that copyright shouldn't really exist at all.  Yes, I dimly recall leadership at Mars Hill saying copyright was outmoded and outdated back when none of those elders had book deals or books published.  It's easy to claim people should take a more flexible approach to intellectual property when you haven't made any.  For those of us who want the writers and film-makers and musicians whose work we admire to actually get paid, there's something else going on ...

Reading the first blog post linked in this post was a reminder that, based on what's being described about contemporary academia, it's almost like there are swaths of it that have a kind of work-for-hire ethos, except that in the comics industry the authors and artists actually get credited and paid, whereas academic appropriation by those who have tenure of the work of those who don't doesn't even provide a byline.  If there are men and women in academia who think nothing of taking credit for the labor of those who work under their authority then, well, that's patently exploitive, isn't it? 

We seem to be living through an age in which the credibility of just about every institutional dynamic that in earlier eras could be, if not taken for granted, at least hoped for, seems more and more like a sham.  "Better is the poor who walks in integrity than the one who is corrupt though rich" seems not even to register with many people now, if it ever did.

If official academia is so much a shell game, then in a way it hardly seems like more than a blip on the radar if a celebrity Christian turns out to have substantially inflated his academic credentials.  As more tales of adjuncts and students having their work appropriated burble to the surface, the long con of pretending to academic achievement and thought a person doesn't necessarily have sounds like it's being perpetrated from within the academic scene by people who have, on paper, officially earned their credentials.  This isn't to excuse what seems to have been egregious on the part of Ravi Zacharias but to ask: how much less appropriate should it be for academics to exploit the ideas that are the labor of those who never get credited for the work they did? 

But then, of course, those sorts of people who say that intellectual property shouldn't even exist might disagree. 

Wednesday, November 29, 2017

things simmering in the slow cooker

I still want to get around to writing about contrapuntal music written for the guitar. 

I've been meaning for years to vent some spleen about how disastrous Legend of Korra was as a sequel series to The Last Airbender, and the more American studios and film-makers attempt to adapt anime the more this seems like something I'll have to get around to.  If the American remake of Ghost in the Shell willfully misread Oshii's whole filmography (an indictment of cyber-utopianism) as cyber-utopianism, then a J. J. Abrams-headed remake of Your Name (which I only managed to see in the last week) will probably be some sci-fi Freaky Friday comedy that has nothing to do with the Japanese film, in which body-swapping experiences and time travel are both intrinsically linked to Shintoism.

It's been a while since I've written about the problems of American attempts to appropriate the inspired narrative conceits of anime story lines while divesting them of Asian spiritual traditions ... actually, I suspect I've never actually blogged about that but I might have to get around to it. 

I did read Justin Dean's PR book and it has stuff I want to get around to writing about.

Driscoll's been sharing enough stuff via enough platforms about the governance battle that that will eventually need to get some attention, too.  It's just it's the holiday season and there's only so much stuff I feel like tackling all at once.  I'm still slogging through Adorno for that Francis Schaeffer project and while I think a case can be made that both Adorno and Schaeffer had polemics suffused with nostalgic middle-class elitist white flight dread it's going to take some time to unpack that.  Schaeffer has not come in for this kind of scrutiny from within evangelicalism the way Adorno has within the left.  Plenty of new left/post-Marxist arguments against the racist and elitist nature of Adorno's condemnation of pop culture are out there.  Adorno's enough of a pariah in arts stuff these days that it almost seems weird to feel like it's necessary to discuss him but for any author at The New Yorker invoking Adorno and the Frankfurt school as predicting Trump there's always the scrupulous backing away from anything Adorno actually argued for and about in his writings. 

As I've been going through Schaeffer and Adorno in toggling reading I'd say they both actually had the same core problem, a nostalgia for a  Euro-centric or Anglo-centric middle class cultural understanding in which Western culture of the 19th century variety was the pinnacle.  Schaeffer wanted to get that back somehow and Adorno felt it had to all be subjected to abjection in Marxist historicist terms so that art could highlight how terrible society was.  The trouble with that was that Adorno's condemnation of jazz comes off as racist and elitist because ... it was racist and elitist. 

In the Baroque era the earlier Renaissance style was not cast off but retained.  I think of the 20th century as the emergence of a new cycle of early Baroque-era polystylistic and polymoral activity.  Adorno and Schaeffer were trapped in the mentality of seeing the old seem to be exhausted without sensing that it could be renewed and changed.  Adorno wanted to cast off and reject the past that could no longer be "legitimately" used, while Schaeffer wanted it retained in some way.  Walter Benjamin, at least, seemed open to playing with the idea that idioms exhausted within 19th century practice could be reinvigorated and reborn.

And I want to write about the guitar sonatas of Dusan Bogdanovic and also some of the guitar sonatas of Angelo Gilardino. I'm pretty certain that has to wait until 2018! 

The review of the John Borstlap book was supposed to have been written by now, but the truth is I've been reading so many books in music history and musicology that I feel are pertinent to a discussion of Borstlap's book that I want to wait a bit on that, too.  He raises issues I'm sympathetic to, but in ways that invite vitriolic responses because, well, he's better at highlighting what he regards as the problems in contemporary classical music than he is at providing anything like a clear solution.  One thing that concerns me about his approach is that in his rejection or abjection of any bridge between high art music and popular or vernacular music he ignores, on the one hand, the obvious debt that composers as famous as Villa-Lobos and Haydn had to popular/vernacular idioms and, on the other, in refusing to concede that popular music or entertainment is art, Borstlap is ironically on precisely the same team as Adorno in that area of aesthetic judgment!  Yet in the book The Classical Revolution Borstlap regards Adorno as one of the great villains in music history in terms of his influence on academic musicology and arts funding in Europe.  That may be so, but here in the United States Adorno is regarded more often than not as a racist, elitist, bourgeois asshole.  Still, he's important enough as a writer that I've felt obliged to slog through him.  When he has good ideas the ideas are actually great, even if he seems suffused with a bunch of annoying double binds created by his particular fixation on dialecticism.  I think that ultimately Adorno and Schaeffer can both be seen as reactionary in their aesthetic positions, even if Adorno was notorious for championing Schoenberg and atonality. 

So that might help to unpack for any regular readers that there's a lot I'd like to write about but it's not stuff I can just dash off.  I don't really do well writing "this just in, connected to the news cycle of the last three hours" kind of stuff when the subject isn't a former megachurch brand.  It's not even really something I can do for that brand these days, either. 

Sunday, November 26, 2017

juxtaposing a jab from the Baffler at new atheism's idiot heirs to a short excerpt from a Christian celebrity about the technical definition of a eunuch

There was a piece at The Baffler not too long ago in which sexuality was invoked as part of an ad hominem--it wasn't the sort of ad hominem that people might stop to think about much, since it's a riff on post-new atheist internet trolling and some kind of vaguely alt-right something.  I'll get to the pertinent quote so you can literally see for yourself.

The idea that the humans most truly and acceptably socialized into functional adulthood must all have had sex can be so taken for granted that it scampers into polemics that would otherwise have no real need for the axiom.  Let's take a piece by Alex Nichols at The Baffler called "New Atheism's Idiot Heirs".


This dynamic played out again and again. In 2012, the popular atheist vlogger Thunderf00t (real name Phil Mason) aimed his sights at Watson in a video titled “Why ‘Feminism’ is poisoning Atheism,” thereby reigniting the previous year’s controversy. This time it took off, leading him to create several follow-up videos accusing women of destroying the paradise that was New Atheism for their own gain. In 2013, Mason inaugurated his “FEMINISM vs. FACTS” series of videos, which attacked Anita Sarkeesian, a feminist video game critic who was then receiving an onslaught of harassment and violent threats for daring to analyze Super Mario Bros. This sort of idiocy, combined, again, with the growing popularity of jibes associating outspoken atheists with fedoras, neckbeards, and virginity, led to an exodus of liberals and leftists from the “atheist” tent. Those who remained for the most part lacked in social skills and self-awareness, and the results were disastrous. [emphasis added]

Ah, virginity, because anybody who's really a grown up has to have had intercourse?  In a way ... this sort of assumed put-down by Nichols doesn't seem entirely off platform compared to a riff by someone who used to preach in these parts.
Jesus Has a Better Kingdom
Pastor Mark Driscoll
Esther 1:10–22
September 21, 2012
about 8:39 into the sermon.

Number two, men are castrated. Men are castrated. I’ll read it for you. “He commanded—” and these guys got names. “Mehuman—” That’s kind of a rapper name, I was thinking, like, ancient Persian hip-hop artist, Mehuman. That’s how it’s spelled. “Biztha.” Sounds like a sidekick. “Harbona, Bigtha.” That’s my personal favorite. If I had to pick a Persian name, Bigtha. Definitely not Littletha. I would totally go with Bigtha. “Abagtha, Zethar and Carkas.” Okay, a couple things here. The Bible talks about real people, real circumstances, real history. That’s why they’re facts. It’s not just philosophy. Number two, if you ever have an opportunity to teach the Bible and you hit some of the parts with the old, crazy names, read fast and confident. No one knows how to pronounce them, and they’ll just assume you do.

Here are these guys. So, you’ve got seven guys, “the seven eunuchs.” What’s a eunuch? A guy who used to have a good life, and joy, and hope. That’s the technical definition of a eunuch. A eunuch is a man who is castrated. [emphasis added] Proceeding with the story before I have to fire myself.

The implication and insinuation here would seem to be that the guy who had hope, a good life, and a future was the guy who could have a sex life.  Precisely why arrival at functionally adult social life is linked to sexuality sounds like a project Alastair Roberts is already tackling, but I haven't gotten around to reading or watching any of that stuff in a while.  The pertinent question for this post is how a writer for an apparently secular/left publication and a generally right-identified celebrity Christian can manage to trade in the same assumption, that guys who have not had sex yet, or can never have a sex life, have failed to adult, as one of the contemporary idioms has it.

It's not like sexual assault and sexual harassment charges and claims haven't been made against free-thinkers and secularists, but there won't be any change in the stereotyped argument that religious people have hang-ups about sex.  Maybe a lot of them do, but ever since Richard Dawkins actually claimed he got groped by a schoolmaster and turned out alright ...

the idea that teaching children theological convictions is worse for them than mild pedophilia or sexual assault might be much of why we've heard so much less about and from Richard Dawkins in the last few years than a decade ago. 

But to go by a riff written against the so-called idiot heirs of new atheism, even the attempts to put those people down couldn't be made without being charged with sexual insults in a way that ends up vaguely resembling some jokes told by a megachurch pastor about eunuchs. 

riffs on Greta Gerwig's film Lady Bird as an enchanting replication of enchanted disenchantment

I love Lady Bird with all of my twisted little ex-Catholic-schoolgirl heart. Greta Gerwig’s coming-of-age movie is a love letter to Sacramento, and to the kinds of bitter fights only mothers and daughters can have, but most of it revolves around a Catholic-school education. Lady Bird is a senior at Immaculate Heart — lovingly called Immaculate Fart — where she and her friends gossip about masturbating while chomping on unconsecrated Communion wafers, stare into space during the priest’s homily, and decorate a nun’s car with streamers and paint, declaring her “Just Married to Jesus.”
The stereotypes about Catholic school — strict nuns, pleated skirts — are usually true, but Lady Bird treats them lovingly. I spent nine years in Catholic school (and then two years at Episcopal school, which was basically the same, but no nuns), and recognized the tenderness Gerwig shows Lady Bird’s all-girls school. As Gerwig put it in an interview with the Jesuit magazine America, “There’s plenty of stuff to make a joke out of [in Catholic schools], but what if you didn’t? What if you took it seriously and showed all the things that were beautiful about it?” In honor of Immaculate Fart, here are Lady Bird’s most Catholic-school moments ...
Journalists who are no longer Catholic writing about how crazy it was being Catholic was a cliché before cliché superhero stories were officially invented.  Of course not everyone was, or would be, won over by the film, which I haven't seen and likely won't get around to seeing, even if I thought Gerwig was perfectly cast in Whit Stillman's Damsels in Distress.
To demonstrate that yet another cliché in journalism is the notion of high school as the lake of fire ...

Lately I’ve been feeling like I’m living in hell, by which I mean high school. My peers are reading Teen Vogue. I’ve been encouraged not to spend time alone with the opposite sex or say things that might offend particular people, especially women, who could be described, depending on how you view quality/quantity of social media followers, as popular. Gossip is being celebrated as a radical tool for fighting oppression; disagreements are public, often initiated by cryptic denouncement followed by frantic asking around to figure out who did what to whom. Trending topics prompt acquaintances to write lengthy Facebook posts that read like ninth-grade English essays. People having parties to which I was not invited document them, live, on Instagram, and it feels bad. The president just called Kim Jong Un “short and fat.” I recently joined a club
It’s an ideal moment for a movie like Greta Gerwig’s directorial debut, Lady Bird, which follows the charmingly bratty Christine “Lady Bird” McPherson (Saoirse Ronan) through her senior year at an all-girls Catholic school in Sacramento in 2002 and 2003. Not everyone, I’m told, hated high school quite so much as I did, but as with life under capitalism, it’s a comprehensively painful experience that works for a few at the expense of many. It always sucks. What Lady Bird presupposes is: What if it didn’t?
Much of the anecdotal praise I’ve seen for Lady Bird has been rooted in identification with the protagonist, a rave ideally suited to social media for the way it allows people to easily associate themselves with a clearly remarkable character (“I was just like this unrealistically with-it teen in high school!”) while rewriting their adolescences from adulthood. Why Gerwig and her audience would want to mother themselves through the horrors of high school, the necessary milestones that range from un-special to uncomfortable to traumatizing, is not hard to figure out; what’s surprising, or at least dispiriting, is the willingness with which adults sacrifice their hard-won autonomy for an existence so remedial.
Hard-won autonomy?  What kind of idiot sincerely and seriously writes this stuff?  No one is really autonomous and becoming an adult is partly arrived at by recognizing that the world doesn't revolve around you and you need other people to survive.  Adults who live long enough to get some kind of so-called autonomy get it because their parents kept them alive long enough for the offspring to arrive at something that can be labeled as autonomy. 

So maybe I get cranky reading this sort of tripe on principle but what's so pedestrian and foolish about high school in America as hell is the cloying self-pity of it.  A friend of mine from Kenya told me decades ago that you were considered a grown-up once you hit thirteen.  There simply was no teenage phase as Americans know it. The adolescent angst about identity is not necessarily the global or historic human experience of adolescence.  The notion that high school is hell might only have teeth because the real horror of it is not what you go through but that by high school you have been assigned to the caste to which you are probably stuck for the rest of your life and you've likely developed enough of your higher brain functions to realize this about yourself and the people you know.
Per a contributor to The Baffler, the scapegoat is capitalism, not the reality that anywhere you go brutal caste delineation has always and will always exist for as long as there's anything we humans call history.  Pre-histories, by definition, can never count and they all might as well be variants of a Genesis 1-3 fall narrative.
I have had just enough friends who came of age in European areas that were once under Soviet control that the idea that the hell of first and second world high schools is somehow caused by capitalism is a joke.  The joke about first world problems can be summed up in the sort of self-pitying bromide of American high school as hell.  Sometimes somebody like Joss Whedon can get lightning in a bottle and render that teen adenoidal apocalyptic terror at the caste systems of American schools into something that's funny for a few seasons before the show gets taken too seriously ... but the joke is that the apocalyptic dread that imagines that high school is basically a place full of preternatural evil and inscrutable caste systems has simply been literalized in a show like Buffy the Vampire Slayer.
But in a way what these sorts of bromides and counter-bromides suggest to me this weekend is that in Western film there is a kind of rite-of-passage film in which the enchantment is a process of disenchantment for an adolescent protagonist: in watching the character learn that the world is disenchanted, the world is paradoxically enchanted for the movie-goer, who sees the movie character discover the world is not a cake walk.  Of course nobody exactly makes movies about truly ordinary and unmemorable people, and among artists the propensity to write what you know and create some variant of yourself is all but inescapable; and if you do manage to escape that these days you're probably guilty of cultural appropriation.
But the first article evoked thoughts for me about disenchantment as a new mode of enchantment, because the special knowledge and the "boon" discovered by the hero in Campbell's monomyth just takes a new form, where the revelation about the real nature of the world is the disenchantment itself.  You discover the world is not exactly a beautiful place, and therein lies the hero's newly found power.  Watching a film like Gerwig's could be a way to vicariously re-enact your own disenchantment with the world as a kind of paradoxical enchantment.  Regardless of what Gerwig may or may not have aimed to convey in her film, as we've been exploring a bit at the blog lately, the meta-art religion of film criticism can very simply and directly dispense with any pretexts and pretenses and transform a film into an occasion to go off on this riff or that.  I'll be the first to admit that that can be fun ... but I still prefer to try to anchor whatever observations I make about a film in the nature of the work itself.  I don't always succeed, but as I slog through Adorno he keeps coming back to the idea that you have to recognize the ways in which art objects convey their own content rather than thinking that everything in criticism hinges on what and how you project yourself onto the experience of an art object.  There's plenty about Adorno I disagree with, but there's that saying that even a broken clock is right twice a day. 
Per my earlier trolling rumination, if everyone in the entire arts and education scenes questioned the ecological viability of the entire Western art complex, we might find that the world would benefit more from wiping out the film industry than from making films about how Western society as it is is going to be the ruin of the world.  But in this there may be a potential clue to the extensive nature of a bourgeois art religion that has simply added Marxist criticisms of capitalism as a necessary component of late-stage capitalism's perpetuation of an art and entertainment industry that fails to imagine not so much life beyond capitalism as life beyond the existence of the United States.  It's possible for ruling castes to embrace ideologies that exempt them from having to recognize that they are part of a ruling caste.  Or, the way Ellul put it, it's possible for ruling castes to embrace ideologies that signal a death wish on the part of the social class or power-broker class that embraces the ideology. 

If that could be the case, then entertainers concerned about climate change making movies about the threat of climate change could, if the ecological catastrophe considered so near turns out to be as bad as some fear, be seen in a century as the paradoxical perversity of Hollywood insisting on making a movie about the end of the ecosphere as we know it in a way that catalyzed that very end, perhaps constituting a remarkably self-fulfilling prophecy.  Or maybe we can all switch to digital everything at the risk of creating a cultural milieu that could crash, because archive-worthy materials can have a potentially higher carbon footprint. 

In a way, teens in the United States are right to feel it's an apocalyptic time that sets the course for how their entire lives will play out.  The idea that grown-ups somehow "move on" and stop behaving the ways they did in high school ... you know ... if that were really true, would we have a year in which people talk about President Trump, or about what our relationship should be to art made by people who turn out to be like Harvey Weinstein or Woody Allen?  Would we be in a moment where the defenses of Bill Clinton made twenty years ago begin to seem even more morally dubious about the relationship between power and influence, or where someone might be a bad person but the interests of realpolitik ensure that he should be allowed to retain power and influence lest it fall into the hands of the other team, a team that over time turns out to be pretty much just as bad, making the whole thing seem like a contest between perpetrators and enablers?