Last week the College Board, which administers standardized tests, presented a new version of the SAT to correct the failures of the last version, released in 2005. Sadly, as Leon Botstein, the president of Bard College, put it, this entire production is “part hoax and part fraud.” Both sets of fixes were maneuvers driven more by survival interests than by a desire to improve learning or educational opportunity in America. The College Board instituted the 2005 revisions in response to threats by the University of California system, the SAT’s biggest user, to stop taking the test into account. The latest redesign is intended to protect the SAT from two new dangers: the growing number of students taking its primary competitor, the ACT, and the proliferation of test-optional institutions, such as Wake Forest University and Bard College.
The latest SAT redesign — expected to take effect in 2016 — makes the writing test, one of the key additions in 2005, optional. Both the writing and reading sections will ask test takers to analyze and respond to a passage of writing. In addition, students will no longer be penalized for incorrect answers on the multiple-choice section of the test. The vocabulary and mathematics sections will also see overhauls, and for the first time, the College Board is making the test available online.
Whatever the merits of these changes, the crucial question is whether we still need standardized testing to predict college success. One-third of four-year colleges and universities in the United States do not require the SAT or ACT for admission. Many of these colleges know what the College Board hates to admit: High school grades, not standardized tests, are the best predictor of college grades. The SAT and ACT add little to a college’s ability to select undergraduates. A new national study by William Hiss of Bates College, published last month, found that students admitted to public universities and private colleges without submitting test scores did just as well as those selected on the basis of test scores. And students with strong high school grades and low test scores do better in college than those with high scores and low high school grades. The study concluded that test scores are unnecessary for colleges to select undergraduates.
The importance of high school grades in predicting success in college is underappreciated. “Irrespective of the quality or type of school attended, cumulative grade point average in academic subjects in high school has proved to be the best overall predictor of student performance in college,” the University of California’s president emeritus Richard Atkinson and his statistician, Saul Geiser, wrote in their contribution to my 2012 book, “SAT Wars.” “This finding has been confirmed in the great majority of ‘predictive-validity’ studies conducted over the years, including studies conducted by the testing agencies themselves.”
The technical literature released by the College Board has always admitted the superiority of high school grades over test scores. But for public consumption, it puts the following spin on that fact: High school grades and test scores in combination best predict students’ ability — and that is not a lie. The SAT does increase a college’s statistical ability to predict grades by 1 or 2 percentage points over what high school grades alone can do. But that marginal increase in predictive power comes at an unworthy cost.
Last year Nathan Kuncel, a psychology professor at the University of Minnesota who works closely with the College Board, admitted during a National Public Radio discussion about the SAT that tests aren’t the best predictor of college performance. But he argued that we need to use all available tools, not just high school grades, since people are complex creatures.
Kuncel’s argument is disingenuous. Nothing misrepresents the complexity of a youth’s life more than an SAT or ACT score. The tests reduce a student’s entire high school experience to the trial of one Saturday-morning standardized test that doesn’t capture creativity, problem solving, leadership, public service or work ethic. They assign test takers a number that correlates more significantly with their parents’ bank accounts than with their brains. Test scores are a one-dimensional distortion that, for a tiny increase in statistical power, magnifies social disparities.
Test scores unnecessarily add social discrimination to a college’s admissions machinery. The most reliable academic metric available, high school grades, does not correlate with family income or parents’ education, but test scores do. There is a strong linear relationship between family income and test scores: The higher the family income, the higher the youth’s test score. If a college wants an applicant pool and an incoming freshman class to come overwhelmingly from families with high incomes, all it has to do is publicize a high test score requirement for its students.
Students from low-income families, without the privilege of extensive test prep, simply do not apply with their low test scores. Test-score-selective colleges pick their students from a socially exclusive applicant pool of those with high test scores. It is not an accident that nearly 79 percent of the students at the most selective colleges and universities come from the top economic quartile of America’s families. These institutions claim to select for brains, not bank accounts, but the best brains can be identified by high school records, not by test scores. Test scores disguise social privilege by passing it off as academic advantage; high school grades do not.
Since Wake Forest University, where I am a professor, went test-optional in 2009, our freshman classes have come in with higher high school grades and received higher first-year grades from their professors than previous classes did. Our students are academically stronger for being selected by their high school transcripts and not their test scores. They come from more racially and economically diverse backgrounds than before.
With the latest attempt to rebrand its nearly century-old product, the College Board is acting like a corporate entity driven by market-share calculations. The SAT was rolled out in 1926 by Princeton and Yale universities as an IQ test, falsely believed to demonstrate the superiority of Nordic genetic stock, and it was used to discriminate against Jews.
Harvard University’s search for national scholars, which began in 1933, boosted the SAT’s credibility. But the test was not embraced outside the small circle of private New England colleges until the late 1960s, when the University of California adopted it in order to compete with the Ivy League, disregarding its own research that found the test useless. While the SAT has gone through several iterations since, its fundamental character hasn’t changed since 1926. It was a false measure of scholastic aptitude then, and it is a discriminatory and unnecessary measure of ability now. The redesigned SAT will continue to have all the fundamental problems of the old one, and it does not answer the question of why we need such a test at all.