Tuesday, January 20, 2015

(Kareem Remembers MLK with Perfect Vision)  NeoCons Endanger Planet  (Hebbadabbado Update)  Ivy League’s Meritocracy Lie:  How Harvard and Yale Cook the Books for the 1 Percent



I'm thinking that Kareem would be a powerful political candidate and inestimable force for good government.

Others agree.

Why I Have Mixed Feelings About MLK Day


By Kareem Abdul-Jabbar, TIME

19 January 15

I have mixed emotions about Martin Luther King Jr. Day. For me, it’s a time of hopeful celebration — but also of cautionary vigilance. I celebrate an extraordinary man of courage and conviction and his remarkable achievements and hope that I can behave in a manner that honors his sacrifices. And while Dr. King still has his delusional detractors, who have a dream of dismissing his impact on history, it’s not them I worry about.

His legacy may be in more danger from those who admire him.

Why? Because it’s tempting to use this day as a cultural canonization of the man through well-meaning speeches rather than as a call to practice his teachings through direct action.

For some, the fact that we have Martin Luther King Jr. Day is a confirmation that the war has been won, that racism has been eliminated. That we have overcome. But we have to look at the civil rights movement like antibiotics:  Just because some of the symptoms of racism are clearing up, you don’t stop taking the medicine or the malady returns even stronger than before. Recent events make clear that the disease of racism is still infecting our culture and that Martin Luther King Jr. Day needs to be a rallying cry to continue fighting the disease rather than just a pat on the back for what’s been accomplished.

History has a tendency to commemorate the very thing it wishes to obfuscate. When you convince people that they’ve won, they lose some of their fire over injustice, their passion to challenge the status quo. In Alan Bennett’s brilliant play, "The History Boys," one of the teachers explains to his students why a World War I monument to the dead soldiers isn’t really honoring them, but rather keeping people from demanding answers as to how Britain unnecessarily contributed to the cause of the war and is therefore responsible for their deaths.

By appealing to our emotional sense of loss, the government’s monument distracts the people from holding the hidden villains responsible. The teacher says, “And all the mourning has veiled the truth. It’s not lest we forget, but lest we remember. That’s what this [war memorial] is about … Because there’s no better way of forgetting something than by commemorating it.”

One of the major debates this year has been whether or not racism exists anymore in America. Not surprisingly, polls indicate that most African Americans say yes it does exist while most white Americans say that it doesn’t. Blacks point to disproportionate prosecution and persecution of blacks by authorities, and whites point to President Obama and dozens of laws protecting and promoting minorities.

They are both right. There are plenty of laws and government agencies dedicated to eradicating racism. America has made it a priority. Affirmative-action programs have created more opportunities for minorities, sometimes at the expense of whites seeking those same opportunities. That should be acknowledged and appreciated.

But suppressing racism is like pressing on a balloon:  you flatten one end and it bulges somewhere else. Racism has gone covert. For example, the Republican effort to pass laws demanding IDs to combat voter fraud is itself fraudulent and racist. It is a form of poll tax, which was outlawed by the 24th Amendment to the U.S. Constitution. The poll tax was designed to keep blacks from voting, as is the voter ID requirement. It costs money and time away from work, which is too great a burden for the poor, many of whom are minorities. The justification given is to stop voter fraud. However, a recent study concluded that out of 1 billion votes cast, there have been only 31 incidents of voter fraud.

The reason whites don’t agree that racism is rampant is because most of them aren’t personally racist, and they resent the blanket accusation. In fact, they see themselves as victims of reverse racism. They, too, are right. Dr. King would have acknowledged their pain and fought to alleviate it by reminding us not to confuse institutional racism with the good hearts of our neighbors. The civil rights movement would not have achieved as much as it has without the support and sacrifice of white America.

Dr. King would have been proud to see so many people across America — white and black — joining together to demand accountability in the deaths of Michael Brown and Eric Garner. He would have praised the millions who marched in France in support of freedom of speech. As he once said, “Injustice anywhere is a threat to justice everywhere.”

He would have also been disturbed by the violence and rioting that has occurred during these protests. We must remember that Dr. King’s cause was not just equality for all people, but achieving that equality through nonviolence. The ends do not justify the means; the means and the ends are the same. Violence insults his legacy. To him, anything won through force is not won at all — it is loss. He wanted equality achieved through love because he wanted to win over his enemies, not defeat them. As he said: “Love is the only force capable of transforming an enemy into a friend.” His goal was to cleanse the community, not to cleave it.

Martin Luther King Jr. was only 39 years old at the time of his assassination nearly 47 years ago. When he died, those whom he had inspired were there to pick up the banner of the cause and continue marching. “I’ve looked over, and I’ve seen the promised land!” he told us. “I may not get there with you, but I want you to know tonight that we as a people will get to the promised land.”

Forty-seven years later, we must continue stepping lively, not in his name but for his cause.

As if we weren't already aware.


Princeton Study:  U.S. No Longer An Actual Democracy
America's neocons, who wield great power inside the U.S. government and media, endanger the planet by concocting strategies inside their heads that ignore real-world consequences. Thus, their "regime changes" have unleashed ancient hatreds and spread chaos across the globe.

PCR fills us in on the realities and inadequacies of the snooze media regarding what he calls the false-flag Paris shootings behind "Je Suis Charlie!"

It's better than a Le Carré novel.


Charlie Hebdo Update

January 16, 2015
Paul Craig Roberts
Readers, with the exception of neoconservative William Kristol, appreciated the questions I raised about the Charlie Hebdo affair:
http://www.paulcraigroberts.org/2015/01/13/charlie-hebdo-paul-craig-roberts/

Europeans sent me videos and news reports from Europe.

One video compares the car in which the killers escaped with the car in which the ID of one of the accused brothers was allegedly found and makes the point that the two cars differ. The car in which the ID was found apparently is not the escape car.

Another video, which seems to be part of a news report, shows a large force of police waiting as the metal screen over the deli storefront rises. This is the deli in which Amedy Coulibaly is reported to be holding hostages.

As the metal screen rises, police fire into the deli. There seems to be no return fire, and it is unclear who the police are shooting at. Perhaps it was the heavy firing by the police that killed the hostages.

Police enter and turn to the right. Then Coulibaly appears from the same direction as the police entered. He is in a running stumble, as if he has been pushed into the line of fire. There is no weapon in his hands, which appear to be tied together. He falls or is shot down at the door in front of the police, who then fire more bullets into the downed man.

It looks like an execution. It most certainly is not a gun fight. Coulibaly was down and could easily have been captured and questioned.

Instead, we have reports of pre-recorded confessions to take the place of capture and questioning.

The connection between Muslim murderous ire against French cartoonists and Coulibaly’s alleged attack on a Kosher deli is asserted but not explained. If Coulibaly was incensed over cartoons drawn by French persons, why wasn’t he with the killers in the cartoonists’ office? Why pick on random patrons of a deli unrelated to the reason for the attack?

Once you look at this independently of the official news presentation, there are problems everywhere.

The terrorist attacks, if that is what they are, are extremely convenient for Washington and Israel. France had just voted with Palestine against the US/Israeli position. French President Hollande had just stated that the sanctions against Russia must end.

Among Europeans, sympathy was rising for the Palestinians, and support for Washington’s and Israel’s Middle East wars was declining. Now France is back under Washington’s foreign policy umbrella, and European sympathy has shifted from the Palestinians to Israel.

In my day, in order to qualify for the Ph.D. degree, the candidate had to demonstrate French and German reading skill. Time and scant use have taken their toll on my skill, so I cannot state with any assurance the conclusions of these reports. I can only report what the videos show visually.

Also, the videos have come to me in forms that I do not know how to post. Those of you adept at Internet search possibly can find them.

I would welcome a report from Europe for this website. I believe that a report from a knowledgeable and aware European about the official news reporting and public reaction to it, and the extent to which inconsistencies and loose ends of the official story are recognized and challenged, would be welcomed by readers of this site.

I have been left with the impression that informed Europeans are uncomfortable about diverging publicly from accepted opinion. As long as you let me know who you are, you can remain anonymous. We in America would just like to know if your reporting is different from ours and if the European public, except for the anti-immigrationists, buys the official story.

The wars into which Europeans have been thrust by ambitious and deceptive leaders have been more devastating for European populations than the wars thrust upon gullible Americans. Perhaps Europeans are less willing to believe government assertions.

If not, more war is a certainty.

Poor news coverage heightens paranoia?

Why wouldn't it as we march toward the third (or fourth) Iraq War?

Charlie Hebdo:  Report from Europe


Paul Craig Roberts

(Note: The first video has the most visual information.)

Here is a video of the execution of Amedy Coulibaly. It is a German website with the actual live French video of the police assault on the deli. There are three videos. The first one repeatedly shows Coulibaly, with tied hands containing no weapons, shot down and killed when he could easily have been captured.

It is as if the order was to make sure that there is no live suspect whose story might have to be explained away. The first video also repeatedly shows the execution in slow motion. Commentary in French accompanies the video. http://alles-schallundrauch.blogspot.co.at/2015/01/amedy-coulibaly-wurde-hingerichtet.html

In response to my Charlie Hebdo update, http://www.paulcraigroberts.org/2015/01/16/charlie-hebdo-update-paul-craig-roberts/, European readers report that the situation in Europe is much the same as in the US and UK. The “mainstream” print and TV media parrot the official line and raise no unsettling questions. The independent Internet media are where real information is reported.

The German print and TV media have suffered dramatic declines in readers and viewers. This decline accelerated when Udo Ulfkotte’s book about CIA penetration of European media was published by Kopp Verlag and became a best seller. Thinking people no longer trust the German media, which has lost the intelligent part of the population and retains only the somnolent sheep.

There are efforts to infiltrate the Internet media. Well-funded sites, such as "Salon," appear. These sites attempt to discredit all who raise honest and obvious questions. Readers report that the "Huffington Post" has lost credibility through its move into “respectability.” "Salon," apparently, has no more credibility than "Fox News" or "The Weekly Standard."
The view I get from Europe supports my own. The left wing, or what little remains of a left, supports the official stories of terrorist attacks. For the American left, the stories confirm the left wing’s emotional need to believe that peoples oppressed by Western colonialism/imperialism are capable and determined and strike back at their oppressors.

The left wing’s sense of justice demands that oppressed and abused peoples don’t just sit there and take it. The French left sees the Charlie Hebdo attack as an effort by obscurantist religion to attack freedom of expression, which brings to mind the French left’s anti-Catholic crusade.

The right wing accepts the official stories for two different reasons. The anti-immigrationists among them use the terror attacks as evidence against immigration. The patriotic right can go along with this, but also responds to writers such as myself, who defend the Constitution against the government, with the argument that it is the government’s job to interpret the Constitution and that I should not use the Constitution to criticize our government.

Much of the American right believes that liberals use the Constitution to defend criminals and terrorists, who simply should not be tolerated. In other words, the Constitution is seen not as our defender but as a defender of those the right wing regards as undesirables, such as criminals, terrorists, abortionists, and homosexuals.

The rest of the population has simply succumbed to many years of demonization of Muslims. Indeed, Israel has been demonizing Muslims for 60 years and has created the image of Muslims as terrorists wearing suicide bombs. If a person has been prepared to regard Muslims as terrorists, the official stories simply fit that already prepared compartment in the brain.

Additionally, although false flag attacks are commonplace and have been used throughout history to advance undeclared agendas, the public has been brainwashed to regard them as “conspiracy theories.” Thus, anyone who raises questions is dismissed as a “conspiracy theorist.”

Many Americans do not even understand that the official story of 9/11, for example, is a conspiracy theory. So is the official explanation of the Boston Marathon bombing and the Charlie Hebdo attack. What it boils down to is that official conspiracy theories are accepted as true, while everyone who questions them is a “conspiracy theorist.”
Readers point out that as stupid as governments are, populations are even more stupid, and that governments succeed in brainwashing populations. Many conclude that in the absence of an adversarial media, democracy is a sham, as the people have no inclination or means of confronting the government.

The hope appears to be that the mainstream media will continue to diminish and will be replaced by the independent Internet media, releasing populations from their brainwashed state. Others think that this hope will come to naught, as governments will assert control over the Internet and make dissent the equivalent of terrorism.

Those who try to suppress dissent might simply be defending a personal bias, or they might be agents of a cover-up. Regardless, it comes to the same thing in the end. People who raise dissenting points and honest questions are ridiculed or demonized in efforts to silence or marginalize them. Whether or not truth can actually prevail, it usually does not prevail in time.

For example, “Saddam Hussein’s weapons of mass destruction” prevailed over truth. Only after Iraq was destroyed did we learn that the basis for the US invasion rested firmly on an orchestrated lie.

The culpability of the Western media in lies, death, and destruction is extreme. Consider the Malaysian airliner that went down in Ukraine. The US, UK, EU, and the puppet government in Kiev blamed Russia and the forces of the breakaway eastern provinces for shooting down the civilian airliner. An investigation was convened. Six months have passed since then, and the results have not been released.

Clearly, if the investigation supported the Western propaganda, the results would have been released. We can safely conclude that the investigation does not support the West’s propaganda. Not one word from the Western media has demanded the results. The world has forgotten the investigation, but it remembers the loudly shouted propaganda, and the conclusion, unsupported by any evidence, is that Russia is guilty.

The Western media works the same way when it reports Charlie Hebdo.

Oh well.

On to the next?


You didn't go to Harvard or Yale either?

Relax.

Neither did lots of those holding the diplomas.

Or at least they didn't study much.



Ivy League’s Meritocracy Lie: How Harvard and Yale Cook the Books for the 1 Percent

"We are credentializing a new elite by legitimizing people with an inflated sense of their own merit"



Excerpted from "The Tyranny of the Meritocracy: Democratizing Higher Education in America"

A special lottery is to be held to select the student who will live in the only deluxe room in a dormitory. There are 100 seniors, 150 juniors, and 200 sophomores who applied. Each senior’s name is placed in the lottery 3 times; each junior’s name, 2 times; and each sophomore’s name, 1 time. What is the probability that a senior’s name will be chosen?

Does this kind of question look familiar? For most of you, it probably does: it represents just one of the nearly two hundred questions that presently make up the SAT. (The answer, by the way, is 3/8, or 37.5 percent, for those among us who prefer percentages to fractions.) For nearly a century, universities across the country have used SAT scores and other quantifiable metrics to make decisions about admitting one candidate versus another—decisions that can have far-reaching impact on both the admitted and declined candidates’ educational, social, professional, and financial futures.
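For readers who want to check the arithmetic: each student’s lottery entries are tickets in a weighted draw, so a senior’s chance of winning is simply the seniors’ share of all tickets. A minimal sketch in Python, using the counts from the problem (the variable names are mine):

```python
from fractions import Fraction

# Tickets per class: 100 seniors x 3 entries, 150 juniors x 2, 200 sophomores x 1
senior_tickets = 100 * 3      # 300
junior_tickets = 150 * 2      # 300
sophomore_tickets = 200 * 1   # 200
total_tickets = senior_tickets + junior_tickets + sophomore_tickets  # 800

# Probability that the winning ticket belongs to a senior
p_senior = Fraction(senior_tickets, total_tickets)
print(p_senior)         # 3/8
print(float(p_senior))  # 0.375
```

Three hundred of the eight hundred tickets belong to seniors, which reduces to the 3/8 (37.5 percent) given above.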

On the basis of what? we might ask. Originally the acronym SAT stood for Scholastic Aptitude Test, on the strength of the argument that a high schooler’s success on the test correlated with his or her success in the increasingly rigorous environment of college. As evidence of this correlation dwindled, the name was changed first to the Scholastic Assessment Test (keeping the handy, well-known acronym) and later to the SAT Reasoning Test.

Call it what you will, the SAT still promises something it can’t deliver: a way to measure merit. Yet the increasing reliance on standardized test scores as a status placement in society has created something alien to the very values of our democratic society yet seemingly with a life of its own: a testocracy.

Allow me to be clear: I’m not talking about all tests. I’m a professor; I believe in methods of evaluation. But I know, too, that certain methods are fairer and more valuable than others. I believe in achievement tests: diagnostic tests that are used to give feedback, either to the teacher or to the student, about what individuals have actually mastered or what they’re learning. What I don’t believe in are aptitude tests, testing that — by whatever new clever code name it goes by — is used to predict future performance. Unfortunately, that is not how the SAT functions. Even the test makers do not claim it’s a measure of smartness; all they claim is that success on the test correlates with first-year college grades, or if it’s the LSAT (Law School Admission Test), that it correlates with first-year law school grades.

As I’ll explain later, such a correlation is slight at best. In any case, it’s certainly not a barometer of merit. Merit is much too big a concept to simply refer to how you’re going to do in your first year of college or law school. Because if all we cared about were how well you do in your first year of college, we would have college programs last only one year, right? Why would you have to be there and pay tuition for three more years? We do and we must care about more than freshman-year grades — we care about whether students learn something in college, whether they grow into themselves on the way to becoming better citizens and making their distinctive contributions to society. What we really care about are all the things that the testocracy can’t measure.

How then did we get to a place where American higher education appears more concerned with applicants’ test scores and alumni financial contributions than with the education of current students and the contributions of alumni to our society as a whole? A review of America’s curious history of—and relationship with—an obsessive culture of testing may help answer these questions.
* * *
“Manly, Christian character.” That was the ideal that Endicott Peabody, a member of the New England Brahmin class, hoped to cultivate in the boys who attended his private boarding school, Groton. Peabody founded Groton in 1884 with the purpose of building character and embedding the value of “noblesse oblige” into the social fabric of late-nineteenth-century America. Groton students, like young men from seven other boarding schools in the northeastern United States, were to embody character, manliness, and athleticism.

The “Big Three” colleges — Harvard, Yale, and Princeton — validated these ideals by admitting nearly all boarding-school applicants and conferring honorary degrees upon Peabody.

Admission into the “Big Three” was fairly easy if the applicant possessed a “manly, Christian character.” He had to pass subject-based entrance exams devised by the colleges, but the tests weren’t particularly hard, and he could take them over and over again to pass. Even if a student didn’t pass the required exams, he could be admitted with “conditions.”

Once enrolled at Harvard, Yale, or Princeton, he would focus primarily on his social life, clubs, sports, social organizations, and campus activities, while often ignoring his academic work.

Admissions began to change, however, when Charles William Eliot became president of Harvard in 1869. Annoyed with “the stupid sons of the rich,” Eliot sought to draw into the university’s fold capable students from all segments of society. To ensure that smart students could attend Harvard regardless of their means, Eliot, in 1898, abolished the archaic Greek admission exams that were popular up until that time. He also replaced Harvard’s admissions exams with exams created by the College Entrance Examination Board because it tripled the number of locations where applicants could be tested. The result of Eliot’s changes was the admission of more public school students, including Catholics and Jews.

A. Lawrence Lowell, Eliot’s successor, attempted to reverse the trend of admitting those without WASP status and values. The “Jewish problem” in particular alarmed Lowell. The number of Jews at Harvard had increased steadily, from 10 percent in 1909, to 15 percent in 1915, to 21.5 percent in 1922. In addition to their growth in numbers, Jews generally outperformed non-Jewish students academically. Lowell worried that Harvard might suffer the same fate as Columbia, which experienced “WASP flight” as more Jewish students started to enroll.

In response, Lowell limited freshman enrollment to one thousand and altered the admissions criteria to emphasize “character,” legacy, and athleticism rather than academic achievement alone. Additionally, the application process now required interviews and photos, as well as letters of recommendation. The notion of a “well-rounded” applicant, initially a method to limit Jewish enrollment, was born in the first half of the 1920s.

But altering admissions criteria to benefit socially desirable students was not enough for Harvard. With an increasingly complex university admissions process, a new and uniform system was needed to separate the wheat from the chaff. The SAT became the solution the ruling elite had been desperately seeking all this time to perpetuate itself: a testocracy, disguised as a meritocracy.
* * *
The origins of the SAT can be traced to the turn of the twentieth century, when the College Board, the nonprofit organization that owns the rights to the modern-day SAT, administered the nation’s first college entrance examinations, in 1901. Unlike today’s SAT, these exams were composed entirely of essays that required students to engage with subjects as far-ranging as Latin, world history, and physics.

The birth of these exams came at about the same time as another social scientific phenomenon: intelligence testing.

In 1905, French psychologist Alfred Binet developed the world’s first IQ test, which aimed to produce a set of predictable results from which one could “derive a rating of . . . ‘mental age’ ” and “identify slow learners [who] could be given special help in school.”

Binet’s theories were eventually adapted by the United States military during World War I, when Harvard professor and IQ-test advocate Robert Yerkes convinced Army brass to allow him to evaluate nearly two million soldiers to identify top talent who could be promoted to the rank of officer.

The results were striking: according to Yerkes, “The native-born scored higher than the foreign-born, less recent immigrants scored higher than more recent immigrants, and whites scored higher than Negroes.”  In 1923, Carl C. Brigham, a Princeton psychology professor and leading figure in the growing anti-immigration movement of the time, authored a treatise titled A Study of American Intelligence, in which he relied heavily upon Yerkes’s findings to conclude that “American intelligence is declining, and will proceed with an accelerating rate as the racial admixture becomes more and more extensive.”

This belief, which Brigham helped to perpetuate, was lampooned by F. Scott Fitzgerald, a Princeton graduate, in his novel The Great Gatsby, published two years later, in 1925.

“Civilization’s going to pieces,” broke out Tom violently. “I’ve gotten to be a terrible pessimist about things. Have you read ‘The Rise of the Colored Empires’ by this man Goddard?”

“Why, no,” I answered, rather surprised by his tone.

“Well, it’s a fine book, and everybody ought to read it. The idea is if we don’t look out the white race will be — will be utterly submerged. It’s all scientific stuff; it’s been proved. . . . This fellow has worked out the whole thing. It’s up to us, who are the dominant race, to watch out or these other races will have control of things. . . .”

“This idea is that we’re Nordics. I am, and you are, and you are, and—” After an infinitesimal hesitation he included Daisy with a slight nod, and she winked at me again. “—And we’ve produced all the things that go to make civilization—oh, science and art, and all that. Do you see?”

There was something pathetic in his concentration, as if his complacency, more acute than of old, was not enough to him any more.

The College Board selected Professor Brigham to spearhead the design of a new, nationwide college entrance exam, and on June 23, 1926, Brigham oversaw the very first administration of what was then called the Scholastic Aptitude Test.

News of the SAT’s success eventually made its way up to Cambridge, Massachusetts, where James Bryant Conant presided as president of Harvard University (from 1933 to 1953).

Unlike many of his peers at the time, Conant openly embraced the Jeffersonian ideal of a “natural aristocracy of talents and virtue” —a forerunner of the twentieth-century idea of the meritocracy. In 1934, Conant assigned two of his assistant freshman deans, Henry Chauncey and Wilbur J. Bender, the task of identifying high-performing middle-class and ethnic-immigrant students for the possible receipt of need-blind scholarships to the university.

The two men offered up Brigham’s SAT as the optimal screen through which eligible candidates could be filtered. Conant accepted their recommendation, mandating that applicants take the test in order to be considered for scholarships.

A battle that had begun with idealistic rhetoric succumbed to a Trojan horse:  the SAT and a budding testocracy confirmed the existing order as inevitable, because the tests demonstrated that the elite possessed unassailable merit. Harvard’s adoption of the SAT subsequently set a new gold standard in the world of education.

Chauncey went on to found the Educational Testing Service, in 1947, which has inherited the College Board’s role as administrator of the SAT (and has developed a host of popular graduate-level entrance exams in its own right). By the 1950s, the College Board had grown to around three hundred members, and more than half a million students sat for the exam every year during that period.

Test-preparation companies, such as Kaplan and the Princeton Review, thrived as a result of the SAT’s rise, and “much of the curriculum in American elementary and secondary education [was] reverse-engineered to raise SAT scores” to ensure admission to top universities.

This leaves us in a particular quandary today, best described by Lucy Calkins, founding director of the Reading and Writing Project at Columbia University’s Teachers College. Referring to the most recently appointed president of the College Board, she asks, “The issue is: Are we in a place to let Dave Coleman control the entire K–12 curriculum?”
* * *
This is not to say that the testocracy has continued to gain ground unabated. Close to eight hundred colleges have decreased or eliminated reliance on high-stakes tests as the way to rank and sort students. In the current environment, however, moving away from merit by the numbers takes guts. The testing and ranking diehards, intent on maintaining their gate-keeping role, hold back and even penalize administrators who take such measures. The presidents of both Reed College and Sarah Lawrence College report experiencing forms of retribution for refusing to cooperate with the “ranking roulette.”

At the center of this conflict is the wildly popular annual college-rankings issue of "US News & World Report" — the bible of university prestige. In the book Crazy U, Andrew Ferguson describes meeting Bob Morse, the director of data research for "US News" and the lead figure behind the publication’s college rankings.

Morse, a small man who works in an unassuming office, is described by Ferguson as “the most powerful man in America.” And for good reason: students and parents often rely upon the rankings — reportedly produced only by Morse and a handful of other writers and editors — as a proxy for university quality.

These rankings rely heavily on SAT scores for their calculations. Without such data available from, for example, Sarah Lawrence, which stopped using SAT scores in its admissions process in 2005, Morse calculated Sarah Lawrence’s ranking by assuming an average SAT score roughly 200 points below the average score of its peer group.

How does "US News" justify simply making up a number? Michele Tolela Myers, the president of Sarah Lawrence at the time the school stopped using the SAT, reported that the reasoning behind the lowered ranking was explained to her this way: “[Director Morse] made it clear to me that he believes that schools that do not use SAT scores in their admission process are admitting less capable students and therefore should lose points on their selectivity index.”

This is the testocracy in action, an aristocracy determined by testing that wants to maintain its position even if it has to resort to fabrication. What is it they are so desperate to protect? The answer initially seems to be that the SAT can predict how well students will do in college and thus how well-prepared they are to enter a particular school.

There is a relationship between a student’s SAT score and his first-year college grades. The problem is it’s a very modest relationship. It is a positive relationship, meaning it is more than zero. But it is not what most people would assume when they hear the term correlation.

In 2004, economist Jesse Rothstein published an independent study finding that a meager 2.7 percent of the variance in first-year college grades can be predicted by the SAT. The LSAT has a similarly weak correlation to actual achievement in law school.

Jane Balin, Michelle Fine, and I did a study at the University of Pennsylvania Law School, where we looked at the first-year law school grades of 981 students over several years and then looked at their LSAT scores. It turned out that there was a modest relationship between their test scores and their grades.

The LSAT predicted 14 percent of the variance in first-year grades. And it did a little better the second year: 15 percent. Which means that 85 percent of the variation in grades was left unexplained. I remember being at a meeting with a person who at the time worked for the Law School Admission Council, which constructs the LSAT. When I brought these numbers up to her she actually seemed surprised they were that high. “Well,” she said, “nationwide the test is nine percent better than random.” Nine percent better than random. That’s what we’re talking about.
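The arithmetic behind “variance explained” is worth seeing: it is the square of the correlation coefficient, so even a respectable-sounding correlation explains far less of the outcome than intuition suggests. A minimal sketch in Python, using the percentages quoted above (the derived correlation coefficients are my own back-of-the-envelope conversions, not figures reported by the studies):

```python
import math

# "Variance explained" is the square of the correlation coefficient r,
# so converting the quoted percentages back into r shows just how
# modest these relationships really are.
for label, variance_explained in [
    ("SAT vs. first-year college grades (Rothstein, 2004)", 0.027),
    ("LSAT vs. first-year law school grades (Penn study)", 0.14),
]:
    r = math.sqrt(variance_explained)
    print(f"{label}: r = {r:.2f} "
          f"({variance_explained:.1%} of variance explained)")
```

Run it and the SAT’s 2.7 percent of variance corresponds to a correlation of roughly 0.16, and the LSAT’s 14 percent to roughly 0.37 — both a long way from the strong association most people picture when they hear the word “correlation.”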

So, if the SAT barely correlates with the grades a student will get in college, how can a student’s performance in college be predicted? William C. Hiss and Valerie W. Franks, both formerly of the Bates College admissions department, released a report in 2014 that studied thirty-three colleges and universities that required neither the SAT nor its very popular competitor, the ACT, for admission.

Now, which students did or did not choose to submit their standardized-test scores is in itself interesting: overwhelmingly, those who did not submit a score were women, minority students, or students who would be the first in their family to go to college — which should tell us a lot about the SAT right there.

In reviewing the performance of more than eighty-eight thousand students, Hiss and Franks found that students who perform well in college were the ones who had gotten strong grades in high school, even if they had weak SAT scores. They also found that students with weaker high school grades did less well in college — even if they had stronger SAT scores.

Summing up their findings they wrote, “Many of us who have spent our careers as secondary and university faculty and administrators find compelling the argument that ‘what students do over four years in high school is more important than what they do on a Saturday morning.’”

So, if the SAT does not measure aptitude — and if it doesn’t even pretend to measure achievement — then what does it measure?

I have argued for years that the SAT is actually more reliable as a “wealth test” than as a test of potential, and the most recent results bear this out. Below are figures released in 2013 by the College Board that correlate SAT scores with the family income of the test taker.

AVERAGE SAT SCORE (OUT OF 2400) FOR 2013 COLLEGE-BOUND SENIORS, BY FAMILY INCOME

$0 – $20,000: 1326
$20,000 – $40,000: 1402
$40,000 – $60,000: 1461
$60,000 – $80,000: 1497
$80,000 – $100,000: 1535
$100,000 – $120,000: 1569
$120,000 – $140,000: 1581
$140,000 – $160,000: 1604
$160,000 – $200,000: 1625
More than $200,000: 1714

Now that is a correlation! This is what I refer to as the “Volvo effect.” In "Crazy U," Ferguson talks about how the parents of his son’s friends and classmates were spending $30,000 to $35,000 to prepare their children for college. That isn’t the amount they paid for a premier boarding school, mind you — that was the amount they paid to hire someone to tutor their child for the SAT and to help them write the “statement of interest” essays on their college applications.

When these students get into a particular college, we say that this process reflects the fairness of the meritocracy, but really it only reflects the fact that the elite dominate entry to higher education. These students aren’t smarter than the other students. Or to put it another way: they may be smart, but they are not necessarily those most likely to contribute to our society; they simply come from families that have more money to pay people to prepare them for the SAT, to tutor them toward high grades in high school, and to pay for viola lessons so they can stand out in the admissions process.

The SAT’s most reliable function is as a proxy for wealth. It is normed to white, upper-middle-class performance, as numerous studies viewing the test through the lenses of race and class have shown. The figures below, from 2013, show this in stark relief.

AVERAGE SAT SCORE (OUT OF 2400) FOR 2013 COLLEGE-BOUND SENIORS, BY TEST-TAKER ETHNICITY

Black or African American: 1278
Mexican or Mexican American: 1354
Puerto Rican: 1354
Other Hispanic, Latino, or Latin American: 1355
American Indian, Alaska Native: 1427
Other: 1501
White: 1576
Asian, Asian American, Pacific Islander: 1645

Is this a case of merit belonging to one race and not to another? Or is it the case that if you have grown up in a particular environment, such as one where your parents lack the funds to prepare you for these standardized tests or lack an advanced level of education themselves, you will not do as well on the SAT?

There are other reasons why students of various ethnicities may underperform on the SAT. One of these is a phenomenon called “stereotype threat,” a term coined by Claude Steele of Stanford University (now provost of the University of California at Berkeley) to describe the anxiety a person may experience when he or she has the potential to confirm a negative stereotype about his or her social group.

Many first- and second-generation immigrants of color test well, for example, because they retain a national identity free of America’s racial caste system and enjoy material and cultural advantages, including professional or well-educated parents. They do not internalize the stigma of race and are thus less affected by the anxiety of confirming assumptions of intellectual inferiority that depresses test scores of highly motivated students who are African American, Mexican American, or of Puerto Rican heritage.

I know this threat is real. One summer not too long ago, I was engaged in a long-term writing project and recruited an absolutely brilliant young man who is Latino. Enrique (not his real name) has a photographic memory. I mean, he blew my mind. I have never seen anybody who could tell you, “Oh, well that’s on page 384. It’s in the middle of the page. I think it’s the first paragraph, not the second one.”

But Enrique could not do well on the LSAT, though he practiced taking it close to thirty times. Enrique grew up in a low-income community, so arguably that had something to do with the verbal references that he might have missed. But a lot more of it had to do with stereotype threat: he was too tense. Postscript to this story: Enrique was subsequently selected to be a Rhodes scholar. So what, really, are we talking about here?

If we can agree that the SAT, LSAT, and other standardized tests most reliably measure a student’s household income, ethnicity, and level of parental education, then we can see that reliance on such test scores narrows the student body to those who come from particular households. Then we must decide how to ensure that we open the admissions doors to a greater diversity of students — not just the ones from privileged backgrounds.

I want to make it clear that I am not talking about affirmative action here. The loud debate over affirmative action is a distraction that obscures the real problem, because right now affirmative action simply mirrors the values of the current view of meritocracy. Students at elite colleges, for example, who are the beneficiaries of affirmative action tend to be either the children of immigrants or the children of upper-middle-class parents of color who have been sent to fine prep schools just like the upper-middle-class white students.

The result? Our nation’s colleges, universities, and graduate schools use affirmative-action-based practices to admit students who test well, and then they pride themselves on their cosmetic diversity. Thus, affirmative action has evolved in many (but not all) colleges to merely mimic elite-sponsored admissions practices that transform wealth into merit, encourage over-reliance on pseudoscientific measures of excellence, and convert admission into an entitlement without social obligation.

No, the question, as I said in the previous chapter, is this:  How do we move from admission to mission? Further:  How do we move past that moment of admission, which may only confirm one’s present status, to granting an opportunity for a diverse and worthy group of individuals to learn how to work together collectively and/or creatively to help solve the deep challenges confronting our communities, our economy, and our educational experiences in a democratic society?

Of course, some of this has to do with how we define success. A study of Harvard alumni over three decades, which culminated in the 1990s, defined “success” by income, community involvement, and professional satisfaction. Researchers found a high correlation between those criteria and two criteria that might not ordinarily be associated with Harvard freshmen: low SAT scores and a blue-collar background.

This is echoed by college admissions officers at elite universities today, who report — when asked what predicts life success — that, above a minimum level of competence, “initiative” or “hunger” are the best predictors. Marlyn McGrath Lewis, director of admissions for Harvard, says, “We have particular interest in students from a modest background. Coupled with high achievement and a high ambition level and energy, a background that’s modest can really be a help. We know that’s the best investment we can make: a kid who’s hungry.”

That’s certainly the message of Derek Bok and William Bowen’s The Shape of the River, that those who are motivated to take advantage of an opportunity, when given the opportunity, can and often do succeed, often in ways that are different than their more privileged peers. The African American students in the Bok-Bowen study, for example, became leaders within their communities at much higher rates than their more affluent and better-scoring white counterparts.

When I speak here of diversity, I’m not talking strictly along color or gender lines either. When the GI Bill was first proposed, toward the end of World War II, some university officials did their best to get it defeated. They were appalled by the prospect of what they saw as a mob of unprepared, unsuitable men trying to be their students. To their surprise, the veterans — many of them poor, most the first in their families to attend college — proved to be among the best students of their generation.

By broadening access to college for those who had served their country, the GI Bill helped fuel the post–World War II economic boom while leveling the playing field for many Americans. The bill epitomized our country’s dual commitments: to open opportunity across the economic spectrum and to invest in people who will give back to society.

We see the problem of restricted access today in the new elite class, which passes on its privileges in the same way that the old elite from twentieth-century America passed on its privileges. But there is an even more worrisome aspect of the new elite. The old elite felt that it had inherited its privileges; in order to defend the social oligarchy over which it reigned, the old elite felt the need to give back through public service or a financial commitment to the greater good.

The old elite recognized that it had been privileged by the accident of birth, so the message to those who were out of luck was that you were unfortunate but it was through no defect of your own.

The new elite, on the other hand, feels that it has earned its privileges based on intrinsic, individual merit. The message, therefore, to those who are not part of this elite is “You are stupid. You simply don’t matter. I deserve all the advantages I’m granted.”

This attitude manifests in the jobs that college grads now take. For example, the student-run "Harvard Crimson" ran an article in 2007 smirking that “only” 43 percent of that year’s female graduates entered finance and consulting, compared to 58 percent of male graduates. The article, titled “’07 Men Make More,” explained — with apparent disdain — that women choose jobs in lower-paying fields such as education and public service.

Despite the economic downturn of recent years, the striking number of Harvard graduates entering finance and consulting has persisted. The class of 2013 senior survey showed that more than 30 percent of the 2013 class had jobs in those fields. After consulting and finance, the technology/engineering industry captured 13 percent of Harvard graduates that year. The "Crimson" again emphasized — with what seems to me to be the appearance of similar disdain — the preference of women to pursue less-lucrative work in education, media, and health care rather than in finance, consulting, and technology.

The top career choices of many male Harvard students — whether it is 2007 or 2013 — are severely lacking in any element of service.

This is the damage that we are doing through our testocracy. We are credentialing a new elite by legitimizing people with an inflated sense of their own merit and little willingness to open up to new ways of problem solving. They exude an arrogance that says there’s only one way to answer a question — because the SAT only gives credit for the one right answer.

The world, by contrast, provides us with more than one correct answer to most questions. In the face of mounting criticism, the College Board has recently proposed changes to the SAT, including reducing the use of obscure vocabulary words, narrowing the areas from which the math questions will be drawn, and making the essay section optional. But individuals such as Bard College president Leon Botstein find these proposed changes are too little, too late because they don’t address the test’s real problem. In an eloquent rebuttal, Botstein writes:

The essential mechanism of the SAT, the multiple choice question, is a bizarre relic of long outdated twentieth century social scientific assumptions and strategies. As every adult recognizes, knowing something or how to do something in real life is never defined by being able to choose a “right” answer from a set of possible answers (some of them intentionally misleading). . . . No scientist, engineer, writer, psychologist, artist, or physician — and certainly no scholar, and therefore no serious university faculty member — pursues his or her vocation by getting right answers from a set of prescribed alternatives that trivialize complexity and ambiguity.
Meaningful participation in a democratic society depends upon citizens who are willing to develop and utilize these three skills: collaborative problem solving, independent thinking, and creative leadership. But these skills bear no relationship to success in the testocracy. Aptitude tests do not predict leadership, emotional intelligence, or the capacity to work with others to contribute to society. All that a test like the SAT promises is a (very, very slight) correlation with first-year college grades.

But once you’re past the first year or two of higher education, success isn’t about being the best test taker in the room any longer. It’s about being able to work with other people who have different strengths than you and who are also prepared to back you up when you make a mistake or when you feel vulnerable.

Our colleges and universities have to take pride not in compiling an individualistic group of very-high-scoring students but in nurturing a diverse group of thinkers and facilitating how they solve complex problems creatively — because complex problems seem to be all the world has in store for us these days.

(Excerpted from “The Tyranny of the Meritocracy: Democratizing Higher Education in America” by Lani Guinier (Beacon Press, 2015). Reprinted with permission from Beacon Press. All rights reserved.)

In 1998, Lani Guinier became the first woman of color appointed to a tenured professorship at Harvard Law School. Before her Harvard appointment, she was a tenured professor at the University of Pennsylvania Law School.
