Posted: February 7, 2014
People have been pointing out that the modern world increasingly favors the smart since 1958, when Michael Young's The Rise of the Meritocracy appeared. Robert Reich described the increasing rewards going to "symbol workers" in The Work of Nations (1991). Richard Herrnstein and I discussed the implications for social and cultural stratification in The Bell Curve (1994). Robert Frank and Philip Cook described the dynamics of technologically advanced economies, which accelerate income inequality, in The Winner-Take-All Society (1995). David Brooks skewered the new cognitive elite in Bobos in Paradise (2000), and Richard Florida celebrated it in The Rise of the Creative Class (2002). Countless other books and articles have talked about the same topics, while treating education as the explanatory variable and never, ever mentioning cognitive ability.
Lately, we have two new contributions to this literature: economist Tyler Cowen's Average Is Over: Powering America Beyond the Age of the Great Stagnation and Cato Institute Vice President of Research Brink Lindsey's Human Capitalism: How Economic Growth Has Made Us Smarter—and More Unequal. What do we learn from Cowen and Lindsey that we didn't know already?
* * *
In Average is Over, all sorts of things. Tyler Cowen has a rambunctious intellect that makes him curious about topics far afield from economics—the popular blog he runs with Alex Tabarrok, Marginal Revolution, will give you a quick idea—and he has drawn on his many interests to tell the story of Average Is Over.
The basics are familiar. The labor market is increasingly polarized between those who are adept at dealing with the intellectual demands of our high-tech world and those who are not, and it's going to become more so. We're not just looking at meritocracy, says Cowen, but "hyper-meritocracy." He puts his own twist on it by pointing out that the job market won't get progressively worse as one moves down the ladder from the highest-status jobs to the lowest. It's the jobs in the middle that are the most likely to vanish. Why? The answer goes to his main theme. In thinking about how you will fare in the years to come, Cowen writes, the key questions will be:
Are you good at working with intelligent machines or not? Are your skills a complement to the skills of the computer, or is the computer doing better without you? Worst of all, are you competing against the computer? Are computers helping people in China and India compete against you?
Many white-collar jobs in the middle—jobs held by people with college degrees and modestly above-average cognitive ability—will soon be done better and cheaper by computers. He asks us to think about Siri on the iPhone, that female-voiced robot of whom you can ask questions in ordinary English and usually get a useful answer. She's far from perfect, but she is radically better than the voice-recognition software of just a few years ago, and we can be certain that she will be radically better in the iterations to come. Now think about how many millions of jobs that future versions of Siri will destroy. For example, I still often call an airline's reservations line for human help with complicated options that the airline's website is unable to handle. It won't be many years before all those helpful reservation agents are replaced by an advanced form of Siri that I can access directly from the website. That's what Cowen means about the dire prospects facing people with skills that compete against the computer.
New technology will replace people who work with their hands as well—he gives numbers about the robots that are replacing assembly-line workers—but we're a long way from being able to do without, say, human electricians or human surgeons. But that's where the first of his key questions comes in: are you good at working with intelligent machines? The best electricians and surgeons a decade or two from now may not be those who have the highest technical skills themselves, but those who have "enough" of those skills and are best at tapping into the ways that computers can augment them.
* * *
Cowen makes this point with extended passages, running to dozens of pages scattered throughout the book, that draw upon chess. He knows what he's talking about—he used to play at the master level—and chess is an appropriate test-bed for thinking about the future. It wasn't until 1981 that a computer chess program beat a chess master with tournament time controls. Fifteen years later, IBM's Deep Blue beat Garry Kasparov, probably the greatest chess player in history. Now, a chess program that can beat the reigning human world champion can be downloaded to your laptop computer for about $40.
One spinoff of this history has been the development of "freestyle chess" in which humans can make use of computers as they play. Who are the champions at freestyle chess? Not the strongest chess players, but those who are best at augmenting the power of the computer—sometimes by knowing when to substitute their intuitive strategic understanding of the game to redirect the computer's computation; sometimes by using multiple computers to cross-check results.
* * *
Cowen, who seems immune to group-think, comes up with novel ways to pursue the implications. For example, he spares us earnest calls for more young people to go to college, instead observing that contemporary society is already exposing almost everyone to the skills that are needed to be good at freestyle jobs. Some extremely large proportion of the people who have never been in a college classroom have smart phones and play video games. Those who have learned how to customize their iPhone or are good at playing strategic video games are likely to be valuable employees in tomorrow's freestyle jobs. The person who is good at such jobs doesn't have to be expert in the task at hand; he just has to be expert at collaborating with the smart machine. Conversely, English majors who are not good at working with intelligent machines will be hardly better off in the job market than the high school graduate who is not good at working with intelligent machines.
Another implication of the changing job market that Cowen discerns is the increasing importance of an employee's conscientiousness. One reason is the prevalence of integrated teams of workers in today's workplace. In an age when a diner had three fry cooks, each of them working on individual orders, a slow or lazy fry cook slowed up his orders, but no one else's. Now, look at what's going on in a McDonalds, with its computerized ordering system, the headsets on the staff, and the choreography that goes into filling your order. And that's a simple example. Teams working jointly on vastly more complex tasks are common throughout the economy. "[T]hat means," Cowen points out, "the screw-ups of a single person can damage a very large and very valuable production chain. To hire a risky and iffy worker, without a competent overseer, simply isn't worth it, no matter how low the wage."
Another characteristic of the new economy that puts a premium on conscientiousness is the expanding role of jobs involving health care and personal service. In traditional construction and manufacturing jobs, foremen directly watched the work and inspectors could check it. But nobody can afford the time to monitor whether a nurse's assistant washes her hands at the appropriate times, or whether the hotel maid has replaced the used water glass with a clean one or just rinsed out the used one. We must count on such workers to do these things even when no one is watching. For nannies and low-skill caregivers for the elderly, conscientiousness—"whether the worker can follow some straightforward requests with extreme reliability and basic competence," as Cowen puts it—is absolutely critical. And guess what: that also helps explain why women are doing so much better than men in holding low-skill jobs in the new economy. Women tend to be more conscientious than men.
* * *
These are just tidbits from a sprawling, breezily written, sometimes rambling tour of the horizon of the coming economy, job market, and society. There are chapters on human intuition (not all it's cracked up to be), on the significance of computers becoming human-like and the likelihood of convergence of man and computer (both possibilities are overblown), the interaction of geography and global economic trends, the future of science and economics, and, my favorite, the future of education. Many people, including me, have prophesied that economics and the IT revolution will force transformations of education from kindergarten through grad school, but Cowen's speculations about how those changes might play out are uniquely imaginative and mostly plausible.
He closes the book with a chapter on what all this means for politics. Here is how he opens it:
We will move from a society based on the pretense that everyone is given an okay standard of living to one in which people are expected to fend for themselves much more than they do now. I imagine a world where, say, 10 to 15 percent of the citizenry is extremely wealthy and has fantastically comfortable and stimulating lives.... Much of the rest of the country will have stagnant or maybe even falling wages in dollar terms, but a lot more opportunities for cheap fun and also cheap education. Many of these people will live quite well, and those will be the people who have the discipline to benefit from all the free or near-free services modern technology has made available. Others will fall by the wayside.
Cowen reminds me of the Duke of Wellington at the battle of Waterloo, when an aide riding beside him had his leg blown off and cried, "By God, sir, I've lost my leg!" Wellington replied, "By God, sir, so you have." And so with Professor Cowen: "Others will fall by the wayside." Is that the best he can say on behalf of a group that could easily constitute a quarter or more of the population? Well, yes, pretty much. As usual, his subsequent elaboration left me both arguing with him and rethinking my own previous opinions.
* * *
Brink Lindsey is not so cavalier. Like Cowen, he combines libertarian preferences for the way the world should work with shadings both practical and ideological, depending on the topic at hand. In Human Capitalism, he focuses on the increasing complexity of modern life and the rising cognitive demands that coping with that complexity imposes, and he observes that this has created a problem:
[T]oday the primary determinant of socioeconomic status is the ability to handle the mental demands of a complex social environment. If you can do that, you'll likely have ample opportunities to find and pursue a career with interesting, challenging, and rewarding work. But if you can't, you'll probably be relegated to a marginal role in the great social enterprise—where, among other downsides, you'll face a dramatically higher risk of falling into dysfunctional and self-destructive patterns of behavior.
Then Lindsey sets out to describe how this situation came about and to prescribe measures that will allow more people to flourish in the face of complexity.
I have a dog in this fight. That passage from Lindsey's introduction is not a bad way of describing major themes from The Bell Curve, which I coauthored with the late Richard J. Herrnstein 20 years ago, themes that I revisited in Coming Apart (2012). What comes next in Human Capitalism should probably have led me to recuse myself from reviewing it. I will not mince words. In my view, Lindsey uses The Bell Curve as a straw man by grotesquely mischaracterizing it, even as much of his description of the problem evokes the actual presentation in our book. It's happened before, but I haven't gotten used to it, and reading those passages did not warm me toward Human Capitalism. Then, just as I was calming down, I came to the passage in which Lindsey scorned me for offering "little more than plaintive moralizing" in Coming Apart.... Well. How can I tell you anything about Human Capitalism that you can be sure isn't an attempt to get even?
First, I can say that Lindsey's recommendations for policy reforms are excellent. He wants to reform K-12 education by unleashing competition, compensate for disadvantaged environments through early childhood education, decrease social exclusion of low-skill adults by building on the successes of the Earned Income Tax Credit, reform the anti-work effects of the disability program, reduce the prison population through reform of the nation's drug laws, improve higher education by limiting tuition subsidies, and remove regulatory barriers to entrepreneurship and upward mobility. All of these are worth considering; in some cases, they are measures that I enthusiastically support and think could make a difference if they could be passed (which they can't).
Second, I can say that Lindsey's accounts of the rise of complexity, the importance of cognitive ability in dealing with complexity, and the cultural polarization that afflicts the United States, are all useful and generally accurate statements of important issues. With the caveat that follows, the book can serve as a good introduction for people who haven't previously read into these topics. As someone who has, I didn't find much that was new, but I'm not the target audience for Human Capitalism.
* * *
What you should treat with the utmost skepticism in Human Capitalism is an argument threaded throughout the book: if only disadvantaged children got better educations, we could go a long way toward helping those who, Cowen casually tells us, are going to fall by the wayside. Lindsey's is a mindset that I call educational romanticism. Like James Heckman, Barack Obama, and most editorial directors of major newspapers, Lindsey insists that better education can significantly raise the cognitive functioning of those in the lower half of the distribution, and thereby mitigate the effects of the new complexity.
This is not the place to engage in an extended discussion of this complicated issue, but since I've told you to ignore Lindsey's optimism, I should at least list a few of the most fundamental reasons for caution.
Improvements in K-12 education don't have much effect on cognitive functioning, and can easily increase rather than decrease inequality in educational outcomes. The unvarying story of the relationship of education to cognitive ability since scholars began looking at the issue is this: Cognitive ability rises substantially when education replaces no education—when children who formerly went straight to work in the fields at age six instead complete elementary school. Cognitive ability rises little, and often not at all, when incremental improvements are made to an existing educational system. Better education can increase how much children learn, and that's good. But learning more doesn't give the child with an I.Q. of 85 at age 6 an I.Q. of 100 at age 18. Furthermore, improvements in education typically boost the learning of the clever more than they boost the learning of the dull, increasing inequalities in the knowledge and skills that children take into the labor market.
The evidence used to promote the long-term effects of preschool intervention is shaky and has been contradicted by recent and methodologically much more powerful evidence. I've written up the details elsewhere, but the short story is that we no longer have to agonize over the many problems associated with the Perry Preschool and Abecedarian Project evidence from half a century ago. We now have a large, rigorous replication of the Abecedarian approach and a large, rigorous evaluation of Head Start that we were assured would be definitive. The results of both are devastating to expectations that we could mount a large-scale preschool program that has any long-term effects on anything, cognitive or behavioral.*
The environment has an important effect on cognitive ability, but almost all of the long-term effect comes from the nonshared environment. This is the elephant in the corner that the educational romantics refuse to confront. All serious scholars of the topic accept that both the environment and genes have large effects on cognitive ability. Whether the split is 50-50 or 60-40 one way or the other isn't especially important. What is important, and has been consistently and repeatedly found in studies of identical twins, ordinary siblings, and children adopted at birth, is that the environment that siblings share—things such as parenting style, socioeconomic status of the neighborhood, schools, number of books in the home, how many millions of words the parents do or do not expose their children to in the first years of life—is not nearly as important as most people assume.** By adolescence, the shared environment has modest explanatory power. The nonshared environment—influences such as peer groups, accidents, psychological trauma, sibling interaction, differential parental treatment, birth order—soaks up most of the effects on cognitive ability that are not genetic in origin. This is the meta-explanation for the lack of effects we observe from intervention programs that "should" raise cognitive ability. Those intervention programs are trying to enrich elements of the shared environment that empirically turn out to be not that important. The comments I've made about cognitive ability apply to personality characteristics as well.
* * *
The takeaway from these observations is not that nothing can be done to improve the lives of children. In the case of preschool intervention, for example, a preschool that gives a child from a punishing home environment a few hours a day in a warm and nurturing environment is accomplishing a good thing, and I am not going to complain that my tax dollars are wasted just because I don't see long-term effects on the child's life trajectory (though I would want to verify that the preschool really does provide a warm and nurturing environment). Many inner-city schools are dreadful and should be improved, even though those improvements have trivial effects on cognitive functioning.
But we need to stop acting as if a Lake Wobegon solution is open to us. The problems that Cowen's new hyper-meritocracy and Lindsey's new complexity bring to people who have drawn the short end of the cognitive stick are real, and whether genes or the environment gave them that short end is immaterial—as of now, we don't know how to fix either the genes or the causally relevant environment.
Instead of focusing on raising cognitive ability, we need to focus on the ways in which the little platoons of America's civic culture historically created niches—valued places is the term Richard Herrnstein and I used in The Bell Curve—for people of all kinds, and why those niches are diminishing. Since no policy changes that might help are politically realistic, I'm also in favor of focusing on the only game in town for accomplishing change, the culture itself. Americans of all sorts, but especially the members of America's new upper class, have gone AWOL in sustaining that civic culture. Resuscitating it requires a change in the way that America's new upper class sees its obligations.
And how is that to be done? It has to begin by jaw-boning, changing the national conversation about our civic culture. That in turn brings us to the glaring defect in both Tyler Cowen's and Brink Lindsey's books: not enough plaintive moralizing.
*The replication of the Abecedarian approach was the Infant Health and Development Program. The 18-year follow-up was reported in "Early intervention in low birth weight premature infants: Results at 18 years of age for the Infant Health and Development Program," by M.C. McCormick, J. Brooks-Gunn, S.L. Buka, et al. in Pediatrics, volume 117 (2006), pages 771-780. The Head Start evaluation (2012) may be accessed at https://www.acf.hhs.gov/programs/opre/resource/third-grade-follow-up-to-the-head-start-impact-study-final-report. My reading of these results may be found at https://www.aei.org/article/education/the-shaky-science-behind-obamas-universal-pre-k/.
**A recent review article, "Why are children in the same family so different from one another?" by Robert Plomin and Denise Daniels, is available online at https://ije.oxfordjournals.org/content/40/3/563.full.