If you’re saying to yourself, “What the hell is going on?” This is your book.

-Colbert

PBS NewsHour interviews Pulitzer Prize-winning journalist Mark Mazzetti about the CIA’s secret army and its competition with the Pentagon.

Here are your 2013 Pulitzer Prize winners

Fiction: The Orphan Master’s Son by Adam Johnson

History: Embers of War: The Fall of an Empire and the Making of America’s Vietnam by Fredrik Logevall

Biography: The Black Count: Glory, Revolution, Betrayal, and the Real Count of Monte Cristo by Tom Reiss

Poetry: Stag’s Leap by Sharon Olds

General Nonfiction: Devil in the Grove: Thurgood Marshall, the Groveland Boys, and the Dawn of a New America by Gilbert King

The Way of the Knife author Mark Mazzetti talks about the secret history of the C.I.A.’s drone program.

(Source: cbsnews.com)

"He bestrode society like a colossus, commanding attention, everywhere his voice, his face, his name. It did not matter whether you despised or adored him; you looked. Covering Venezuela was like wandering through a vast, boisterous audience that simultaneously booed and cheered the titan who turned the presidential palace, Miraflores, into a stage."

— Rory Carroll on Hugo Chávez, in his book “Comandante.”

There Are More Pets Than Ever Before

"There were 77 million dogs in America in 2010, up from 53 million in 1996. ‘We’ve seen a linear explosion in pet populations in Western countries over the past 40 years,’ one researcher says. ‘People are living more isolated lives, are having fewer children, their marriages aren’t lasting. All these things sort of break down a social network and happen to exactly coincide with the growth in pet populations. What’s happening is simply that we’re allowing animals to fill the gap in our lives.’"

The New York Times Book Review podcast interviews John Homans, author of What’s a Dog For?


Michael Brick spent a year with Reagan High School in Texas, one of the under-reported success stories in education reform

New York Times: “When ‘Grading’ Is Degrading”

In his speech on the night of his re-election, President Obama promised to find common ground with opposition leaders in Congress. Yet when it comes to education reform, it’s the common ground between Democrats and Republicans that has been the problem.

For the past three decades, one administration after another has sought to fix America’s troubled schools by making them compete with one another. Mr. Obama has put up billions of dollars for his Race to the Top program, a federal sweepstakes where state educational systems are judged head-to-head largely on the basis of test scores. Even here in Texas, nobody’s model for educational excellence, the state has long used complex algorithms to assign grades of Exemplary, Recognized, Acceptable or Unacceptable to its schools.

So far, such competition has achieved little more than re-segregation, long charter school waiting lists and the same anemic international rankings in science, math and literacy we’ve had for years.

And yet now, policy makers in both parties propose ratcheting it up further — this time, by “grading” teachers as well.

It’s a mistake. In the year I spent reporting on John H. Reagan High School in Austin, I came to understand the dangers of judging teachers primarily on standardized test scores. Raw numbers don’t begin to capture what happens in the classroom. And when we reward and punish teachers based on such artificial measures, there is too often an unintended consequence for our kids.

I went to lunch recently with a fine history teacher, Derrick Davis, who is better known in my neighborhood as the basketball coach at Reagan High. He has a particularly wide vantage on the decline of Reagan High, which opened in the 1960s as the pride of the city, complete with consecutive state football championships, national academic recognition and a choir that toured Europe.

When he graduated in 1990, the yearbook still showed a significant number of white faces mixed in with larger black and smaller Hispanic populations. Parents could see from the annual state report that 82.4 percent of 11th graders passed all the standardized tests, just a tenth of a percentage point below the district average.

In 1994, the state education agency started applying its boilerplate labels, which became shorthand for real estate agents. Reagan High was rated “Academically Acceptable,” the second-lowest grade. Families of means departed for the exurbs, private schools and eventually charter schools.

Even so, returning as a teacher, Mr. Davis had high hopes for No Child Left Behind, the federal education reform legislation enacted in 2002 with bipartisan support led by President George W. Bush and Senator Edward M. Kennedy. The law turned a powerful spotlight on the second-class education being provided for poor kids in places like East Austin. Finally, the truth was out. In that sense, Mr. Davis believed at the time, “No Child Left Behind was the best thing that happened to us.”

But that was hardly the case: instead of rallying a new national commitment to provide quality public education for all children, the reform movement led to an increasingly punitive high-stakes competition for standardized test scores, school grades and labels. Within just a few years, Reagan High fell to “Academically Unacceptable.”

Keep reading

Illustration by Bee Things

Adam Alter (author of Drunk Tank Pink) on the psychology behind why we ignore the threat of global warming:

Suppose you’re a malevolent engineer trying to design a grave threat to Earth. Your aim is to create a force that does plenty of damage by stealth, somehow evading the attention of the governments who might otherwise frustrate your plans. Well, it turns out that the threat you’re looking for already exists, and its name is global warming. Ninety-eight percent of experts agree that the globe is warming, that humans are contributing to the effect, and that our failure to act now will contribute to death, disease, injury, heat waves, fires, storms, and floods. Despite these dire forecasts, Barack Obama and Mitt Romney — both of whom believe in human-driven climate change — conspicuously omitted global warming from this year’s menu of election issues.

What is it about human psychology that makes meteor strikes and volcanoes so compelling, while global warming languishes as a political afterthought? The answer has many strands, but I’ll focus on three, beginning with The Hollywood Test. According to The Hollywood Test, the content of our culture’s films reflects our most vivid fears. Over the past several decades, Hollywood producers have funded dozens of big-budget disaster films. In descending order of frequency, those films depicted alien invasions (approximately 100), epidemic and pandemic outbreaks (37), tsunamis and destructive waves (20), earthquakes (16), volcanoes (14), and meteor, asteroid and comet strikes (14). Absent from the list is a scintillating portrayal of global warming, though two films, The Day After Tomorrow and Lost City Raiders, described global warming as the catalyst for floods, tornadoes, hurricanes, and a protracted Ice Age. Al Gore’s important documentary film, An Inconvenient Truth, is perhaps the only film that focuses squarely on global warming, and even then it’s long on information and short on Hollywood stars and scenes of graphic devastation. And that sums up the first major problem with global warming: its precise consequences aren’t vivid enough. Humans are better at focusing on the moderate, specific, localized devastation of a major earthquake than on the great but murky devastation that global warming will bring in the middle part of the 21st century.

One of the best illustrations of this difficulty comes from research in a different domain: on our willingness to contribute to charitable causes.

Read the rest

(Image of Hurricane Sandy via)

Gary Marcus, author of Guitar Zero, writes about IBM’s latest project, in the New Yorker:

Half a trillion neurons, a hundred trillion synapses. I.B.M. has just announced the world’s grandest simulation of a brain, all running on a collection of ninety-six of the world’s fastest computers. The project is code-named Compass, and its initial goal is to simulate the brain of the macaque monkey (commonly used in laboratory studies of neuroscience). In sheer scale, it’s far more ambitious than anything previously attempted, and it actually has almost ten times as many neurons as a human brain. Science News Daily called it a “cognitive milestone,” and Popular Science said that I.B.M.’s “cognitive computing program… just hit a major high.” Are full-scale simulations of human brains imminent, as some media accounts seem to suggest?

Compass is part of a long-standing effort known as neuromorphic engineering, an approach to building computers championed in the nineteen-eighties by the Caltech engineer Carver Mead. The premise behind Mead’s approach is that brains and computers are fundamentally different, and that the best way to build smart machines is to build computers that work more like brains. Of course, brains aren’t better than machines at every type of thinking (no rational person would build a calculator by emulating the brain, for instance, when ordinary silicon is far more accurate), but we are still better than machines at many important tasks, including common sense, understanding natural language, and interpreting complex images. Whereas traditional computers largely work in serial (one step after another), neuromorphic systems work in parallel, and draw their inspiration as much as possible from the human brain. Where typical computers are described in terms of elements borrowed from classical logic (like “AND” gates and “OR” gates), neuromorphic devices are described in terms of neurons, dendrites, and axons.

In some ways, neuromorphic engineering, especially its application to neuroscience, harkens back to an older idea, introduced by the French mathematician and astronomer Pierre-Simon Laplace (1749-1825), who helped set the stage for the theory of scientific determinism. Laplace famously conjectured:

"An intellect which at a certain moment [could] know all forces that set nature in motion, and all positions of all items of which nature is composed, [could] embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes."

Much as Laplace imagined that we could, given sufficient data and calculation, predict (or emulate) the world, a growing crew of neuroscientists and engineers imagine that the key to artificial intelligence is building machines that emulate human brains, neuron by neuron.
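For a concrete sense of what “described in terms of neurons, dendrites, and axons” means, here is a minimal Python sketch of a single leaky integrate-and-fire neuron, the kind of unit such simulations are built from. It is a toy illustration under invented assumptions (the function name, weights, and threshold are made up for the example), not code from IBM’s Compass project or from Marcus’s article.

```python
# Toy leaky integrate-and-fire neuron: the membrane potential leaks toward rest,
# accumulates weighted input arriving on its "dendrites," and sends a spike down
# its "axon" when it crosses a threshold. Illustrative only; not IBM's Compass.

def simulate_neuron(inputs, weights, threshold=1.0, leak=0.9):
    """Return a list of 0/1 spikes, one entry per time step."""
    potential = 0.0
    spikes = []
    for step_inputs in inputs:                 # one tuple of input values per time step
        potential *= leak                      # passive decay toward resting potential
        potential += sum(w * x for w, x in zip(weights, step_inputs))
        if potential >= threshold:             # threshold crossed: fire, then reset
            spikes.append(1)
            potential = 0.0
        else:
            spikes.append(0)
    return spikes


if __name__ == "__main__":
    # Two input lines; the second synapse is twice as strong as the first.
    inputs = [(1, 0), (1, 0), (0, 1), (1, 1), (0, 0), (1, 1)]
    print(simulate_neuron(inputs, weights=(0.3, 0.6)))  # e.g. [0, 0, 1, 0, 0, 1]
```

Scale that single loop up to half a trillion such units updating in parallel and you get a rough sense of the bookkeeping problem Compass is built to handle.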


Read the rest

Illustration by John Ritter