Friday, July 4, 2014

Silly Wabbit, Tricks Is for Kids! (Would You Believe (-)10.5% Yearly Growth? Well, How About . . . (-)9.8%?) Goldman Sachs and Bear Stearns Wheeler Dealers Coin Gold from Housing Implosion (How To Catch A Chess Cheater:  Ken Regan Finds Moves Out Of Mind)



As long as our national (convenient) memory holes are well functioning and, once again, an acknowledged expert forgets the fact that Afghanistan's government of 2001 offered to turn over bin Laden and any other 9/11 collaborators to the USA when offered evidence of their involvement. . . .

The Al Qaeda camps, leadership, and infrastructure within Afghanistan had to go. No sitting president could have avoided a military commitment in the wake of 9/11.

One of my favorite blogs, WhoWhatWhy, for some reason has decided to publish an Independence Day-themed essay by a somewhat newly-minted, well-intentioned but naive militarist lecturer (a West Point-er!) attempting to peddle one more truly unbelievable reason why the "war on terror" went so terribly wrong. Right. Unless it didn't, of course. But we won't bother with those inconvenient facts on this firecracky day, and we'll carry on as if everything we in this country attempt is pure-hearted and without charlatanry.

However, at the same time, we can access the latest video of what the U.S. National Institute of Standards and Technology (NIST) was really trying to obscure (still - a decade after its first massive water muddying) when questioned on its facts about the World Trade Center 9/11 free-fall collapse by well-meaning and not newly-minted physics professionals.

Happy 4th of July!

Go War Mongers!




Now . . . back to the regular business of this well-intentioned (but probably naively run) blog, which is pursuing the real story (or not) on what's happening in the continuing wake of these events to the economics/financial world of the USA USA USA!

The following is, of course, a fantasyland scenario found mainly on crazy right-wing (or nihilist) alarm blogs.

It's too bad that the figures cited at the beginning of it seem to make a lot of sense.

And they are long-term scary for those at the bottom of the heap.

The national yearly figures citing the USA's lack of growth (and who stole the real growth) are now available for this year.

Everywhere.

And they're all different.

Don't worry about those immigration numbers being factored in.

We know they cause long-term growth.

Don't we?

I always like to check with Shadow Stats at times like these (which is all the time, actually, now).

No need to rush for the exits, we're told.

They're just numbers.

Relax.



Would You Believe The US Economy Is Contracting at 0.1%, 2.9%, 9.8%, 10.5%, 11%?


June 26, 2014

by horse237

The government originally said that the US economy as measured by the GDP shrank at 0.1% in the first quarter of 2014. And that was due to abnormally cold weather probably because Global Warming had been hiding at the bottom of the oceans. Now we are told the US economy actually contracted at 2.9%. This is starting to sound serious. Maybe we can coax that Global Warming to rise up out of the oceans. But Dr John Williams at Shadow Stats keeps much more reliable statistics on the cost of living and inflation. He says that prices are going up at 8.9% which makes sense because food prices alone are soaring upwards at 22%. If we used his figure of an 8.9% increase in the cost of living, then the real GDP after being deflated for inflation is contracting at 9.8%.
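The arithmetic behind the jump from 2.9% to 9.8% can be made explicit: keep the reported figure, back out the official deflator, and apply the alternative one. A minimal sketch (the roughly 2% official deflator used here is an assumption for illustration, not a figure given in the article):

```python
# Re-deflate a reported real growth rate using a different inflation figure.
# Linear approximation: real ≈ reported_real + official_deflator - alt_deflator
# (exact compounding would use ratios of (1 + rate) instead).

def redeflate(reported_real, official_deflator, alt_deflator):
    return reported_real + official_deflator - alt_deflator

# Reported Q1 2014 contraction of 2.9%, an assumed ~2.0% official deflator,
# and Shadow Stats' 8.9% cost-of-living increase:
print(round(redeflate(-2.9, 2.0, 8.9), 1))  # -> -9.8
```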

Those are Depression level numbers.

The first quarter of 2014 was the fifth quarter in a row of a contracting US economy. Even the Greek economy only contracted 6.2%. The Greeks have 55% youth unemployment and riots every week. That 9.8% annual rate of decline is worse than the pace of the 1929-1933 collapse, when GDP fell 30% over four years (roughly 8.5% a year). We are talking about a rate of decline greater than that of the harshest period of life in 20th century America. Millions of Americans starved to death in the Great Depression. And yet we are declining faster and harder now than then.

We could argue that the US economy is declining at an even faster rate than 9.8%. How so? The other factor is GDP per person.

More important to you than how much money is flowing through the economy is just how much is in your hands. The US population has been increasing at a little over 0.72% per year. That means GDP per capita declined at 10.5%, not 9.8%. But that was before President Obama organized this Amnesty rush of illegal aliens. The contraction in GDP per capita for the second quarter of 2014 ought to be above 11% if you factor in both rising prices and the sudden influx of new arrivals at the border and the airports.
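The per-capita step is just a further subtraction of population growth from real growth. A quick sketch using the article's own figures:

```python
# GDP per capita growth ≈ real GDP growth - population growth
# (linear approximation, adequate for rates this small).

def per_capita_growth(real_growth, population_growth):
    return real_growth - population_growth

# A 9.8% real contraction combined with ~0.72% annual population growth:
print(round(per_capita_growth(-9.8, 0.72), 1))  # -> -10.5
```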

The government has also been lying to us about unemployment numbers to conceal from us just how badly they have mismanaged the economy.

The number of Americans on disability has doubled to just over 11 million. I have met people who are recent additions to the Social Security Disability program. They seem to be far healthier than those I met 20 years ago. The Social Security Disability Fund is running low. Next year the Congress and the President will have to choose between cutting benefits 20% and raising taxes to cover the shortfall. Most likely the politicians will pass a temporary measure to get them past the 2016 elections and ask Janet Yellen to print up a few tens of billions more dollars at the Federal Reserve.

The President has people working for him who are smarter than he is. They probably are not working for his foreign policy team, but some Bankers are smart. And they do run his administration. Obamacare has employer mandates on companies who have full-time workers. Locally a restaurant manager said she had read the company’s new national policy: to comply with Obamacare regulations, she was to cut everyone’s hours from 40 down to about 30. She had just cut their wages almost 25%. She said nobody would ever again make enough to support themselves working there. Several of the women cried. The others had Food Stamps and child support to make up the loss.

It has occurred to me that Obamacare is a mandate for job sharing. If this were the most transparent administration in history, Obama would have said, “We are allowing millions of people to enter the country legally and illegally but we are not creating more jobs. So we have to share the few jobs we do have. What I am proposing today is that we cut the hours you work while raising taxes on your dwindling income to pay your fair share of the cost of adding all those immigrants to Food Stamps, welfare, healthcare and the schools. This will not apply to the 30.7 million of you who work for the government at either the federal, state or local levels.”

He did not say that. But that is what he did do.

David Stockman pointed out the other day that the total number of hours worked in 1997-1998 is the same as today. Job sharing by cutting us back from full-time to part-time is a necessity if our politicians and the News Media are to maintain the illusion of a recovering economy.

This takes us logically to the next question. What if we deflated the present economy for inflation since 2000 or 1990 to see what our real growth has been all along? Is it true, as David Stockman says, that our economy is nowhere near what Washington says?

We are told that the US GDP was $2 trillion in 1974 but it is $17 trillion today. Yet every measure of income says wages are down in real terms since the 1970s. So how do we make sense of these numbers? Obviously prices are a lot higher than 40 years ago. And we have a lot more people. America had 203,211,926 people in 1970 before our wages began to decline. In 1990 we had 248,709,873 people and in 2000 we had 281,421,906.

Today we have 318 million and maybe quite a few more tomorrow thanks to Obama. We have to discount our 1970 GDP for the real inflation rate and the increase in population. If we focus on our more recent history, we can see that our total number of hours worked was the same in 1998 as it is today, but we have added 30 million people, so our GDP per capita should be down 10%.

GDP also measures things we do not consume, like tanks, aircraft carriers, depleted uranium bombs and bullets and the like. The wars in Iraq and Afghanistan, Libya, Somalia, Yemen and Syria have added little positive good for the average American consumer. But US spending on the wars increased from less than $300 billion in 2000 to $700 billion last year, a rise of more than $400 billion. DHS did not exist before the Department of Defense allowed hijacked planes to fly over 8 US air bases on 9-11-2001.

Strictly speaking, our GDP would decline if the DOD and the DHS were ever audited and taxpayers saved a few trillion dollars over the years from fraud. An economist would have to revise his forecast for the economy drastically downwards if there were a sudden outpouring of honest accounting at the Pentagon.

Another area of rampant fraud in America is medical care. If we subtracted out the more than $5 billion a week in fraud from US government medical programs like Medicare and Medicaid as cited by The Economist, our GDP would be $272 billion less. And we could save another trillion from medical care expenses if we spent the same amount of money per person as Germany or Switzerland. But economists would warn you that doing things the German way would cut our GDP 6%. That would be over $900 billion. Clearly, America’s economy has several deep layers of fraud.

The only other numbers that make the economy look good come from our manipulated stock markets. We learned last week that worldwide Central Banks like the Federal Reserve have $29.1 trillion invested in the stock markets. That certainly pumps up stock prices, but that is not the only way governments pump up the markets.

We have had negative interest rates for six years. Low interest rates depress bond yields, making stocks look better. I should point out that rates are negative if you subtract the inflation rate from the interest paid. That means our savings have been used to subsidize Bank profits and Bankers’ bonuses.

Bank loans also pump up and distort the economy. We have a fractional reserve banking system. If Mrs Jones deposits $10,000 in her account, she will receive $100 in interest payments at the end of the year. Inflation at 8.9% will cut her purchasing power by $890, which is $790 more than her interest payment. But look at the Banker. He loans out her $10,000 plus $90,000 he created when he loaned the money out at 29.9% to credit card customers. He could make additional money charging late fees and the like.
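The depositor-versus-bank arithmetic in this example can be checked directly (a sketch of the article's own numbers; the 10:1 expansion is the article's loose description of fractional reserve lending, not a precise model of reserve requirements):

```python
deposit = 10_000
deposit_rate = 0.01    # 1% paid to Mrs Jones
inflation = 0.089      # Shadow Stats cost-of-living figure
card_rate = 0.299      # rate the bank charges credit card customers

interest_earned = deposit * deposit_rate             # $100
purchasing_power_lost = round(deposit * inflation)   # $890
net_loss = purchasing_power_lost - interest_earned   # $790 more than her interest

loanable = deposit * 10    # the article's 10:1 expansion: her $10,000 plus $90,000 created
bank_gross = round(loanable * card_rate)             # $29,900 in interest income

print(net_loss, bank_gross)  # -> 790.0 29900
```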

The worst thing the government allows Bankers to do is to loan us money at interest that they created out of nothing. The Bankers create a checking account deposit in the Treasury’s account and exchange it for bonds. The taxpayers now owe the Banks for money the government should have created. President Lincoln created a non-interest bearing currency called the Greenback. If we had Greenbacks, we would not have a national debt of nearly $18 trillion.

We are entering a period in our economic history that is far worse than 1929-1933. Professor Steve Keen said we are headed to the worst Financial Collapse in 500 years because we have more debts to cancel than at any time in five centuries. I define a Depression as a period in time when Unpayable Debts are canceled en masse. We have to cancel tens of trillions in Unpayable Debts.

There are three ways to cancel debts. One is to hyperinflate the currency as in Germany in 1923. Another is default on debts through foreclosures. Our money system does not allow us to have money unless we go into debt. Conversely, when we pay our debts or they are cancelled in bankruptcy court, our money supply contracts. In 1933 Americans starved to death because there was not enough money to do business after all those homes and farms had been foreclosed.

As Steve Keen said, we are entering the worst Financial Crisis in 500 years but Debt Cancellation as originated by the Babylonians thousands of years ago would cancel this Depression rather quickly and relatively painlessly. The Bible writers seem to have copied this idea but did not practice it. It was the reason why Jeremiah said the tribe of Judah fell captive to the Babylonians. Bond slaves are disarmed and cannot fight invading armies.

So there is an alternative to mass starvation, Nationwide Food Riots and a Civil War.

We could arrest the Bankers. Seize the $40 trillion they stole from us. Issue non-interest bearing currencies like Lincoln’s Greenbacks. Ban fractional reserve banking. Seize control of the Federal Reserve. And call an international conference to cancel Unpayable Debts using those seized assets as a bargaining tool.

Related Articles:

Jim Willie: BRICS 80 Preparing To Take Down The Dollar
http://vidrebel.wordpress.com/2014/05/08/jim-willie-brics-80-preparing-to-take-down-the-dollar/
Video: GMO Ticking Time Bomb, The Bankers Want You Sterilized And Then Dead
http://vidrebel.wordpress.com/2012/10/07/video-gmo-ticking-time-bomb-the-bankers-want-you-sterilized-and-then-dead/
25 Reasons To Absolutely Despise Bankers And Their Minions
http://vidrebel.wordpress.com/2011/07/24/25-reasons-to-absolutrly-despise-bankers-and-their-minions/
Catherine Austin Fitts: The Black Budget And The Leveraged Buyout Of The World Using Stolen Money
http://vidrebel.wordpress.com/2011/11/10/catherine-austin-fitts-the-black-budget-and-the-leveraged-buyout-of-the-world-using-stolen-money/


If you've been wondering about how the golden boys of the first financial implosion were able to define and profit so immensely from the housing one that followed, it's no secret.

Any longer.

Global private equity firms have not been, historically, in the business of dealing with pool fences and the other hassles of maintaining single-family houses. But following the housing market collapse, the idea of buying a ton of these foreclosed properties suddenly made sense, at least to investors.

Such private-equity purchases were to make money in three ways: buying cheap and waiting for the houses to gain value as the market bounced back; renting them out and collecting monthly rental payments; and promoting a financial product known as “rental-backed securities,” similar to the infamous mortgage-backed securities that triggered the housing meltdown of 2007-2008.

Even though buying by the private equity firms has finally slowed, economists (including those at the Federal Reserve) have expressed concern that someday those rental-backed securities could even destabilize - translation: crash - the broader market.


Since Wall Street was overwhelmingly responsible for the original collapse of the housing market, many have characterized these new purchases as a land grab.

In many ways, Progress CEO Donald Mullen is the poster child for this argument. An investment banker who enjoyed a brief flurry of fame after losing a bidding war to Alec Baldwin at an art auction, he was the leader of a team at Goldman Sachs that orchestrated an infamous bet against the housing market.

Known as “the big short,” it allowed that company to make “some serious money” when the economy melted down, according to Mullen’s own emails. (They were released by the Senate Permanent Subcommittee on Investigations in 2010.)

As Kevin Roose of New York magazine has written, “A guy whose most famous trade was a successful bet on the full-scale implosion of the housing market is now swooping in to pick up the pieces on the other end.”
. . . The only worry during the party was the pool, carefully monitored by the adults. Being unfenced, it had been a source of stress since the family moved in. Repeated requests to the management company overseeing the property that a fence be installed had resulted in nothing.

The Cedillos had no idea that the house’s real owner was a private equity firm called Progress Residential LP.  It had been founded in 2012 by Donald Mullen, a former Goldman Sachs partner, and Curt Schade, a former managing director at Bear Stearns, an investment bank that collapsed in 2008. Progress was financed by a $400 million credit line from Deutsche Bank.

The same month that the family rented the house at 1471 West Camino Court, Progress Residential purchased more homes in Maricopa County than any other institutional buyer. Nationally, Blackstone, a private equity giant, has been the leading purchaser of single-family homes, spending upwards of $8 billion between 2012 and 2014 to purchase 43,000 homes in about a dozen cities.

However, in May 2013, according to Michael Orr, director of the Center for Real Estate Theory and Practice at the W. P. Carey School of Business at Arizona State University, Progress Residential bought nearly 200 houses, surpassing Blackstone's buying rate that month in the Phoenix area.


The condition and code compliance of these houses varies and is rarely known at the time of the purchase. . . . 

Happy 4th of July!

For some odd reason, it seems worthwhile to run this freebie, as no one in charge of any of the above gambits could possibly know anything about chess, and they could use some logic study.

Just remember, everything is revealed eventually.

If you're lucky you're still alive when it finally is.

Or not.

For the following tale, chess players are.

How To Catch A Chess Cheater: Ken Regan Finds Moves Out Of Mind

By Howard Goldowsky

June 1, 2014


Cover Photography by Luke Copping

The following is our June 2014 Chess Life cover story. Normally this would be behind our pay wall, but we feel this article about combating cheating in chess carries international importance.

This subject has profound implications for the tournament scene so we are making it available to all who are interested in fighting the good fight. 


~Daniel Lucas, Chess Life editor


“Religion is responsibility or it is nothing.”
—Jacques Derrida


in•voke v.
1. To call on (a higher power) for assistance, support, or inspiration.
2. Computer Science - To activate or start (a program, for example).
— TheFreeDictionary.com


“What’s God’s rating?” asks Ken Regan, as he leads me down the stairs to the finished basement of his house in Buffalo, New York. Outside, the cold intrudes on an overcast morning in late May 2013; but in here sunlight pierces through two windows near the ceiling, as if this point on earth enjoys a direct link to heaven. On a nearby shelf, old board game boxes of Monopoly, Parcheesi, and Life pile up, with other nostalgia from the childhoods of Regan’s two teenage children. Next to the shelf sits a table that supports a lone laptop logged into the Department of Computer Science and Engineering’s Unix system at the University at Buffalo, where Regan works as a tenured associate professor.

The laptop controls four invocations of his anti-chess-cheating software, which at this moment monitor games from the World Rapid Championships, using an open-source chess engine called Stockfish, one of the strongest chess-playing entities on the planet. Around the clock, in real-time, this laptop helps compile essential reference data for Regan’s algorithms. Regan and I are on our way to his office, where he plans to explain the details of his work. But the laptop has been acting up. First he must check its progress, and Regan taps a few keys. What he’s staring at on the screen reminds him to rephrase his question, but this time he doesn’t wait for my answer. “What’s the rating of perfect play?” he asks. “My model says it’s 3600.  These engines at 3200, 3300, they’re knocking at that door.” In Regan’s code, the chess engine needs to play the role of an omniscient artificial intelligence that objectively evaluates and ranks, better than any human, every legal move in a given chess position. In other words, the engine needs to play chess just about as well as God.  

A ubiquitous Internet combined with button-sized wireless communications devices and chess programs that can easily wipe out the world champion make the temptation today to use hi-tech assistance in rated chess greater than ever (see sidebar). According to Regan, since 2006 there has been a dramatic increase in the number of worldwide cheating cases. Today the incident rate approaches roughly one case per month, of which about half involve teenagers. The current anti-cheating regulations of the world chess federation (FIDE) are too outdated to include guidance about disciplining illegal computer assistance, so Regan himself monitors most major events in real-time, including open events, and when a tournament director becomes suspicious for one reason or another and wants to take action, Regan is the first man to get a call.

Regan is a devoted Christian. His faith has inspired in him a moral and social responsibility to fight cheating in the chess world, a responsibility that has become his calling. As an international master and self-described 2600-level computer science professor with a background in complexity theory — he holds two degrees in mathematics, a bachelor’s from Princeton and a doctorate from Oxford — he also happens to be one of only a few people in the world with an ability to commit to such a calling. “Ken Regan is one of two or three people in the world who have the quantitative background, chess expertise, and computer skills necessary to develop anti-cheating algorithms likely to work,” says Mark Glickman, a statistics professor at Boston University and chairman of the USCF ratings committee. Every time Regan starts an instance of his anti-cheating code he does not merely run a piece of software — he invokes it. The dual meaning of “invoke” conveys Regan’s inspired relationship to the anti-cheating work that he does.

His work began on September 29, 2006, during the Topalov-Kramnik World Championship match. Vladimir Kramnik had just forfeited game five in protest to the Topalov team’s accusation that Kramnik was consulting a chess engine during trips to his private bathroom. This was the reunification match to unite the then-separate world champions, a situation created when Garry Kasparov and Nigel Short broke from FIDE in 1993. Topalov qualified for the 2006 match because he held the FIDE title. Kramnik qualified because he had defeated Garry Kasparov in 2000 to claim a spot through historical lineage. Due to the schism, chess had suffered 13 years of heavy declines in sponsorship, stability, and respect. Kramnik’s forfeiture of game five not only threatened the reunification but also the future of the sport.

Kramnik agreed to play game six, which ended in a draw. After game six, on October 4, Topalov’s team published a controversial press release trying to prove their previous allegations. Topalov’s manager, Silvio Danailov, wrote in the release, “… we would like to present to your attention coincidence statistics of the moves of GM Kramnik with recommendations of chess program Fritz 9.” The release went on to report at what frequency Kramnik’s moves for games one, two, three, four, and six matched the “first line” (Danailov’s words) of Fritz’s output.

An online battle commenced between pundits who took Danailov’s “proof” seriously versus others, like Regan, who insisted that valid statistical methods to detect computer assistance did not yet exist. For the first time, a cheating scandal was playing a role in top-level chess. There remained all kinds of uncertainties: how much time Fritz used to process each move, how many forced moves were played, whether the engine was in single-line or multi-line mode (in multi-line mode machines play slower but stronger, because they enable extra heuristics and do less pruning of unpromising moves), and what constituted a typical matching percentage for super-grandmaster play. All these questions prohibited scientific reproduction of Danailov’s accusation. In just a few weeks, the greatest existential threat to chess had gone from a combination of bad politics and a lack of financial support to something potentially more sinister: scientific ignorance. In Regan’s mind, this threat seemed too imminent to ignore. “I care about chess,” he says. “I felt called to do the work at a time when it really did seem like the chess world was going to break apart.”

When Regan satisfies himself with the laptop’s data collection, he walks me out of his basement to the end of his driveway, where he points to a neighbor’s house down the block. Regan’s neighbor’s brother happens to be a college friend with whom Regan toured England before studying at Oxford, and with whom Regan spent a lot of time while on sabbatical at the University of Montreal. (The friend is a professor at McGill.) Regan loves to call attention to the connections and coincidences that surround his life, and as much as his faith drives a moral influence in his anti-cheating work and his interests in chess and mathematics drive a technical influence, his fascination with coincidence drives its own quirky influence. “Social networking theory is interesting,” he says. “Cheating is about how often coincidence arises in the chess world.”

In Regan’s Honda Accord, we talk about how his chess work has spawned non-chess-related ideas, from how to use computers to grade massive open online courses, to how to think about the future economy. Tyler Cowen, Regan’s childhood friend and an economics professor at George Mason University, is the author of Average is Over, which came out in 2013, and Cowen fills a chapter with predictions extrapolated from Regan’s research. Cowen reports how freestyle (human-computer) chess teams play stronger than computers do on their own and argues that the future economy will consist of high-performing human-computer teams in all aspects of society. Regan takes pride in playing a prominent part in his friend’s book.

Randomness affects all aspects of Regan’s life. His wallet oozes scraps of paper that contain names, numbers, and reminders. He doesn’t own a smartphone. When we enter his office, unopened boxes crowd the floor, and strewn across every shelf and workspace lie papers, stacks of books, piles of notebooks, an ancient monitor, a ’90s-era radio, and milk crates full of ephemera. A few months earlier, Regan moved to a new building constructed by the university and he claims he hasn’t had time to unpack. A clean spot the size of two cafeteria trays makes room for a monitor and keyboard. On another small clearing, conspicuously placed across from us, sits the only item in the room besides the computer equipment to have received Regan’s apparent care: a framed portrait of his wife.

A tab on Regan’s browser is open to a fantasy baseball site. He loves baseball, and he was watching the 2006 baseball playoffs and logged into PlayChess.com, an online chess server, when he first heard about the Kramnik forfeit.

Regan feels a responsibility to do for professional chess what steroid testing has done for professional baseball. The Mitchell Report was commissioned in 2006 to investigate performance enhancing drug (PED) abuse in the major leagues, around the time Regan began his anti-cheating work. While baseball enters its post-PED era, FIDE has yet to put a single regulation on performance enhancing devices — the chess world’s PED — into place. It wasn’t until mid-2013 that the Association of Chess Professionals (ACP) and FIDE organized a joint anti-cheating committee, of which Regan is a prominent member. In mid-2014, the committee plans to ratify a protocol about how to evaluate evidence and execute punishment.

Regan clicks a few times on his mouse and then turns his monitor so I can view his test results from the German Bundesliga. His face turns to disgust. “Again, there’s no physical evidence, no behavioral evidence,” he says. “I’m just seeing the numbers. I’ll tell you, people are doing it.” Regan is 53. His hair has turned white. What remains of it billows up in wild tufts that make him look the professor. When Regan acts surprised his thick, jet-black eyebrows rise like little boomerangs that return a hint of his youth. His enthusiasm for work never wanes; his voice merely shifts modes of erudition that make him sound the professor.

To catch an alleged cheater, Regan takes a set of chess positions played by a single player — ideally 200 or more but his analysis can work with as few as 20 — and treats each position like a question on a multiple-choice exam. The score on this exam translates to an Elo rating, a score Regan calls an Intrinsic Performance Rating (IPR). There are, however, three main differences between a standard multiple-choice exam and Regan’s anti-cheating exam. First, on a standard exam each question has a fixed number of answers, usually four or five choices; on Regan’s exam, the number of answers for each position equals the number of legal moves. Second, on a standard exam, one answer per question receives full credit, while the other answers receive zero credit; on Regan’s exam, every legal move is given partial credit in proportion to how good it is relative to the engine’s top choice. (Partial credit falls off as a complicated nonlinear relationship based on the engine’s evaluations. Credit also abides by the constraint that all moves taken together for a position must sum to full credit.)

[Figure 1]

The third difference is the scoring method. (See Figure 1) A standard multiple-choice exam is scored by dividing the number of correct answers by the total number of questions. This gives a percentage, which translates to an arbitrary grade like A, B-, C+, etc. What matters is not just the percentage but how one interprets the percentage. If a test is especially difficult and most students do poorly on it, then an 85 percent might translate to an ‘A’ rather than the more typical ‘B’. This is called grading on a curve.

[Figure 2]

Figure 2 shows the conceptual relationship between a player’s chosen moves for a set of positions and how an engine might distribute partial credit. Each point represents a move. Good moves fall into the top left corner of the plot, while poor moves fall into the bottom right. Since average players and grandmasters both make relatively poor moves compared to an engine, all human players’ plots take on the same general L-shape. This method of converting engine evaluations into objective partial credit is the original aspect of Regan’s work. He calls it “Converting Utilities into Probabilities.” (Regan uses the technical term “probability” instead of “partial credit,” because after the partial credits conform to the constraint that they must sum to full credit, they mathematically behave like probabilities.) “I made it up,” he says. “I’ve been astounded, actually, that there doesn’t seem to be precedent in the literature for it. I was dead sure people were doing this problem.”

(Regan’s literature search nourished his penchant for coincidence as well. As a serious Christian he sometimes gets asked if he believes in the theory of evolution, which he does. But, he says, “Intelligent Design papers featured large in my initial literature search. There’s no direct connection to my work, but some of the mathematical ingredients are the same.” Intelligent Design’s leading complexity theorist is William Dembski, and Regan noted that his wife’s old roommate’s husband is Robert Sloan, chair of the computer science department at the University of Illinois, Chicago, where Dembski earned his Ph.D.)

In Regan’s algorithms it is the relative differences in move quality that matter, not the absolute differences. So if, for example, three top candidate moves are judged by the engine to be only slightly apart, then these top three moves will each earn approximately 30 percent credit (the remaining 10 percent left for the remaining candidate moves). This emphasis on relative differences rather than absolute value explains why cheaters who use moves that are not always the engine’s first choice will still get caught. It also explains why partial credit cannot be inflated simply by playing against weak opponents.
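The partial-credit idea can be sketched in a few lines. The function below is an illustrative stand-in, not Regan's actual model: it gives each legal move a weight that decays exponentially with its evaluation drop from the engine's top choice, then normalizes the weights so they sum to full credit. That is enough to reproduce the behavior described above, with three near-equal top moves splitting roughly 30 percent each:

```python
import math

def partial_credit(evals_cp, scale=100.0):
    """Convert engine evaluations (centipawns, from the mover's point of view)
    into partial-credit 'probabilities' that sum to 1.

    Illustrative stand-in: credit decays exponentially with the drop from the
    engine's top choice; Regan's actual nonlinear model is more elaborate.
    """
    best = max(evals_cp)
    weights = [math.exp(-(best - e) / scale) for e in evals_cp]
    total = sum(weights)
    return [w / total for w in weights]

# Three candidate moves judged only slightly apart, plus two clearly worse ones:
credit = partial_credit([20, 18, 15, -80, -200])
print([round(c, 2) for c in credit])  # -> [0.29, 0.29, 0.28, 0.11, 0.03]
```

Because only the relative drops matter, shifting every evaluation by a constant (say, against a weaker opponent who stands worse in every line) leaves the credit distribution unchanged.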

Fig3Regan.jpg

After a player’s partial credit is plotted for a set of positions, Regan graphically scores his exam by drawing a curve averaged through the data (See Figure 3). (In statistical jargon, this process is called a “least squares best fit.” The score on a standard multiple-choice exam can be thought of as a “best fit” too, but in this case its best fit is calculated between the points zero and one on a number line rather than between multiple points on a two-dimensional plot. See Figure 1 again.)

The best fit produces a curve (shown as ‘y’ in Figure 3) and two values, ‘s’ and ‘c,’ which characterize the bend in the curve. Regan calls ‘s’ the sensitivity. It shifts the curve left and right and correlates to a player’s ability to sense small differences in move quality. Regan calls ‘c’ the consistency, and it thins or thickens the tail of the curve. A larger ‘c’ represents a player’s avoidance of gross blunders (“gross” being somewhat relative to the interpretation of the engine). Regan has found that different values of ‘s’ and ‘c’ translate into well-defined categories that align with Elo ratings, similar to the way that a 95 percent and an 85 percent on an exam typically translate to an A and a B, respectively. Back in the 1970s, when Arpad Elo designed the USCF and FIDE rating systems, he arbitrarily picked 2000 to mean expert, 2200 to mean master, etc. This arbitrary assignment means chess ratings are based on a curve, and specific values of ‘s’ and ‘c’ can be mapped directly to a specific Elo. The mapped rating is the Intrinsic Performance Rating.
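The fitting step can be sketched as follows. The curve family p = exp(-(delta/s)^c) is only a guess at the L-shape described above, not Regan’s published model, and the grid search stands in for a proper least-squares optimizer:

```python
import math

def fit_sc(deltas, credits, s_grid, c_grid):
    """Least-squares fit of a two-parameter curve p = exp(-(delta/s)**c)
    to observed (move-quality-loss, partial-credit) points.
    's' shifts the bend of the curve; 'c' shapes its tail."""
    best = (None, None, float('inf'))
    for s in s_grid:
        for c in c_grid:
            err = sum((math.exp(-(d / s) ** c) - p) ** 2
                      for d, p in zip(deltas, credits))
            if err < best[2]:
                best = (s, c, err)
    return best

# Noise-free synthetic data generated with s=0.5, c=1.5,
# to check that the fit recovers those values:
true_s, true_c = 0.5, 1.5
deltas = [0.05 * i for i in range(1, 20)]
credits = [math.exp(-(d / true_s) ** true_c) for d in deltas]
s, c, err = fit_sc(deltas, credits,
                   [0.1 * i for i in range(1, 11)],
                   [0.25 * i for i in range(1, 13)])
print(s, c)
```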

Fig4Regan.jpg
It’s more reliable to call someone, say, a B-player in chess than it is to call someone a B-student in school. A student can study for an individual test, but chess strength tends to change slowly. If Regan knows a player’s Elo before subjecting the player’s moves to an anti-cheating exam, he can compare how well each move’s partial credit matches the typical partial credit earned by a player with that Elo. Regan represents this difference as a z-score, which is simply the number of standard deviations a player’s test performance lies above or below that player’s typical Elo performance. The greater the z-score, the more likely a person has cheated. (See Figure 4)

The IPR and z-score are two separate results that emerge from the same test, but the z-score is much more reliable. If Regan were to compute an IPR from only a few moves, it would be like marking an exam with very few questions, which would translate to an unreliable letter grade. “The IPR does not have forensic standing,” says Regan. “But the cheating test [z-score] is based on settings that come from training on 8,500 moves of world championship games.” These moves act like the questions the College Board uses to normalize its scoring on standardized tests. For example, if the College Board wanted to catch a cheater on the SAT, it could easily do so by analyzing a small sample of suspicious answers to questions it knows to be difficult. Cheaters would perform uncharacteristically well on these questions. The same red flags go up when a cheating chess player consistently receives more partial credit on each move than his Elo predicts he deserves.
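In outline, the z-score compares a player’s aggregate partial credit against what their Elo predicts. The per-move mean and standard deviation below are entirely hypothetical; in Regan’s system they come from the trained model:

```python
import math

def z_score(observed_credit, expected_credit, stdev_per_move, n_moves):
    """How many standard deviations a player's total partial credit
    sits above the level predicted by their Elo. Assumes roughly
    independent moves, so the total stdev grows as sqrt(n_moves)."""
    stdev_total = stdev_per_move * math.sqrt(n_moves)
    return (observed_credit - expected_credit) / stdev_total

# Hypothetical: a player expected to average 0.55 credit/move over
# 200 moves who instead scores like an engine:
z = z_score(observed_credit=130.0, expected_credit=110.0,
            stdev_per_move=0.35, n_moves=200)
print(round(z, 2))
```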

Because the proper construction of statistical evidence against alleged cheaters requires such technical expertise, Regan believes that it’s necessary to establish a centralized authority responsible for the administration of anti-cheating protocol. Eventually he would like to oversee the conversion of his 35,000 lines of C++ code into a Windows-driven program or portable app. “I see other people using my methods but not necessarily using my program,” he says. Regan also believes that a centralized authority can best fix public confusion about what constitutes scientific versus unscientific procedure. It’s too easy for people with a poor methodology to spread rumors online.

The most notorious public cheating case to date has been that of the then-26-year-old Bulgarian Borislav Ivanov. He was first accused of using computer assistance in December 2012, at the Zadar Open in Croatia, where, barely a 2200-player, he scored six out of nine in the Open section, including wins over four grandmasters. Allegedly he had cheated in at least three open tournaments before that, too. Finally, Ivanov was disqualified from both the Blagoevgrad Open in October 2013 and the Navalmoral de la Mata Open in December 2013, after both times refusing inspection of his shoes, where he had allegedly hidden a wireless communications device.

The Ivanov case was widely publicized in the Bulgarian media and at the news site ChessBase.com, which prompted amateur bloggers and YouTube aficionados to post their own move-“matching” analyses, but none of it was rigorous enough, or carried high enough statistical confidence, to persuade the Bulgarian Chess Federation to take action. Regan’s analysis, however, found that Ivanov’s moves earned a z-score of 5.09, which puts the odds of his producing these moves independently at less than one chance in five million. Regan’s statistical evidence, along with Ivanov’s refusal to submit to a search, resulted in the Bulgarian Chess Federation suspending Ivanov for four months.
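The quoted odds can be checked against the standard normal tail probability using nothing but the standard library’s complementary error function:

```python
import math

def upper_tail(z):
    """P(Z > z) for a standard normal variable, via erfc."""
    return 0.5 * math.erfc(z / math.sqrt(2))

p = upper_tail(5.09)
print(p)            # on the order of 1.8e-07
print(round(1 / p)) # i.e. worse than one chance in five million
```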

Statistical evidence is immune to concealment. No matter how clever a cheater is in communicating with collaborators, no matter how small the wireless communications device, the actual moves produced by a cheater cannot be hidden.

Nevertheless, non-cheating outliers happen from time to time, the inevitable false positives. In any large open tournament with at least a thousand non-cheating players, the chances are very high that at least one of those honest players will earn a z-score of 3.0 or more, an ostensibly suspicious value. Tamal Biswas, one of Regan’s two graduate students and a class-A player, has used a database of previously played games to run simulations of large open tournaments and verify these numbers.
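That claim is easy to sanity-check analytically, assuming each honest player’s z-score behaves like an independent standard normal draw:

```python
import math

def p_any_false_positive(n_players, z_threshold):
    """Chance that at least one of n honest (independent) players
    posts a z-score above the threshold purely by luck."""
    p_single = 0.5 * math.erfc(z_threshold / math.sqrt(2))
    return 1.0 - (1.0 - p_single) ** n_players

prob = p_any_false_positive(1000, 3.0)
print(round(prob, 2))
```

With P(Z > 3) at roughly 0.00135 per player, a thousand honest players produce at least one "suspicious" score about three times out of four.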

By the summer of 2014, the ACP-FIDE anti-cheating committee hopes to work out the logistical details about what amounts and combinations of statistical, physical, and behavioral evidence should be considered conclusive if an alleged cheater is not caught red-handed. Regan proposes that a single z-score above 5.0 (the threshold for scientific discovery) or multiple instances of slightly lower z-scores should be enough statistical evidence on their own. But in other cases, one would need supporting behavioral or physical evidence, such as suspicious behavior in the restroom or tournament hall.

Regan grew up a chess prodigy during the 1960s and early 1970s, a few miles outside New York City, in Paramus, New Jersey. This area swarmed with chess opportunities and, in 1973, at the age of 13, Regan earned the master title, the youngest American at the time to do so since Bobby Fischer. A photo of Regan from that time shows a boyish round face and the thick, black eyebrows he maintains today.

But before Regan finished high school, mathematics proved too alluring, and he decided he didn’t want to make chess a career. His final two competitive triumphs came in 1976, when he was the only non-Soviet to win a gold medal at the now-defunct Student Chess Olympics, and in 1977, when he co-won the U.S. Junior Championship. After graduating from Princeton and Oxford, and then serving a post-doc at Cornell, Regan was hired by the University at Buffalo in 1989, where he has worked ever since. From the 1990s until 2006 Regan didn’t think much about chess. His kids were young, and he was busy immersing himself in the study of P versus NP, the holy grail of computer science problems. He now “leads three lives,” as he likes to say: his main research and teaching duties, his anti-cheating work, and as co-author (with Richard J. Lipton) of the blog “Gödel’s Lost Letter and P=NP.” In December of 2013, Springer published a book Regan co-wrote with Lipton about the blog, titled People, Problems, and Proofs.

The blog publishes not only technical amusements but occasional fodder about coincidences. “[MIT Professor] Scott Aaronson bet $100,000 that scalable quantum computing can be done,” says Regan. “The media picked up on this. The impetus for this bet was my post entitled ‘Perpetual Motion of the 21st Century.’ But my post was edited by Lipton. Lipton, Lance Fortnow, and I co-wrote some papers in the early 1990s, and Fortnow co-writes his own blog with Bill Gasarch; and Bill Gasarch is a friend of mine and one of my confidants because he is also Christian.” At times Regan goes on like this, and it can be argued that his advanced research requires less energy to follow than his personal connections.

P versus NP stands for Polynomial time versus Nondeterministic Polynomial time. A P-type problem can be solved in a number of computations that grows manageably (polynomially) with the size of the problem, like the solution to tic-tac-toe or 8x8 checkers. The computations for an NP-type problem appear to scale up extremely quickly, however, too quickly for any known algorithm to find a solution efficiently. (An example would be the Travelling Salesman problem, where the goal is to find the shortest tour between a large number of cities.) Regan’s research includes ways to reduce the number of computations in an NP-type problem, so it behaves more like a P-type.
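To make the scaling concrete, here is the textbook brute-force approach to the Travelling Salesman problem. It examines every one of the (n-1)! possible routes, which is why each added city multiplies the work and the method becomes hopeless beyond a couple dozen cities:

```python
from itertools import permutations

def shortest_tour(dist):
    """Brute-force Travelling Salesman: try every ordering of cities
    1..n-1, starting and ending at city 0. O((n-1)!) routes."""
    n = len(dist)
    best_len, best_route = float('inf'), None
    for perm in permutations(range(1, n)):
        route = (0,) + perm + (0,)
        length = sum(dist[route[i]][route[i + 1]] for i in range(n))
        if length < best_len:
            best_len, best_route = length, route
    return best_len, best_route

# Four cities with symmetric pairwise distances:
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
length, route = shortest_tour(dist)
print(length, route)
```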

If Regan manages to prove the theoretical equivalence of P- and NP-type problems — the meaning of “P=NP” in his blog title but an unlikely event, not because of a lack of technical proficiency on Regan’s part but because of the general consensus in the field that the relationship is false — then the result would change the world: cryptographic techniques would become obsolete, perfect language translation and facial recognition algorithms would become possible, and there would be a tremendous leap in artificial intelligence. For good reason, such a discovery would earn him the $1 million Millennium Prize.

The solution to chess is not defined as an NP-type problem (although some variants played on boards larger than 8x8 are), but it shares two characteristics: 1) it is practically impossible to prove a solution — for example, to prove a win or draw for White from the initial position; and 2) we can quickly verify a solution — whether or not a particular chess position is a checkmate. The main difference between chess and NP-types is that the solution to chess is theoretically possible, whereas solutions to NP-type problems currently are not. In one way, however, chess can be marked more difficult than NP-types, because with NP-types one can theoretically verify a solution at the start if there is one. To find the solution to chess, one can only compute deeper and deeper.

Claude Shannon, the father of information theory, in his famous paper “Programming a Computer for Playing Chess,” estimated the number of possible unique chess positions to be roughly 10^43. “It’s impossible to unpack the complete game tree,” says Regan. “It’s so large that if those bits were placed in an efficient memory device the size of a room, that room would collapse into a black hole.”  Regan classifies chess as a Deep problem, “One where I can describe the complete set of rules in a small amount of information, but where unpacking the information will take a long time.”

Chess engines continue to improve at about 20 Elo points per year. If Regan’s estimate of perfect play at 3600 Elo is true, then they will arrive there within a few decades. Regan believes they already play perfectly on occasion, if given enough time to “think.” A chess computer with a good enough algorithm and fast enough processor does not need to store 10^43 positions to play with the same skill as a computer that does. To understand how such an astonishing feat is possible, consider how it’s possible for a human to play perfect tic-tac-toe without having to store the complete solution to tic-tac-toe. There are 255,168 possible different games of tic-tac-toe, but a little smarts reduces this number to 230 strategically important positions.
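The tic-tac-toe figure is small enough to verify directly, by enumerating every legal game and stopping as soon as one side wins or the board fills:

```python
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def won(board, player):
    return any(all(board[i] == player for i in line) for line in WINS)

def count_games(board, player):
    """Count every distinct complete game from this position,
    where a game ends at a win or a full board."""
    total = 0
    for i in range(9):
        if board[i] == '.':
            board[i] = player
            if won(board, player) or '.' not in board:
                total += 1
            else:
                total += count_games(board, 'O' if player == 'X' else 'X')
            board[i] = '.'
    return total

games = count_games(list('.' * 9), 'X')
print(games)  # 255168
```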

In September 2013, Harvard University hosted the one-day New England Symposium for Statistics in Sports, and Regan decided to attend on a stopover while on his way home from another conference. “I’m not going for the talks so much as to hobnob and buttonhole people,” he wrote to me a few weeks before the event. We met at the bar of the Grafton Street Pub, a crowded restaurant near Harvard Square, for the symposium’s social hour. The din of Saturday night beset normal conversation, and I found Regan leaning into the voice of Eric Van, a 50-something statistician who consulted for the Boston Red Sox between 2005 and 2009. Van was explaining to Regan how the Sox needed to shuffle their lineup to win the World Series, a task Van helped the team achieve in 2007 and one they would accomplish again a month after this get-together. Regan had come to the conference to form connections, and Van’s attachment to baseball made this one particularly sweet.

But the whole bar scene injects a bit of anxiety into Regan’s body language. He blinks hard at times and chews his gum vigorously. (Later Regan would tell me, “Chess got me comfortable in an adult world. I was able to step right off the boat my first year as a graduate student at Oxford and feel confident.” It was during this time at Oxford when Regan also met his wife.) Regan offered to buy us drinks, in a tone of voice that implied this wasn’t a question he often asked but which he felt was the obligatory thing to do. Nobody accepted. So he bought a pizza to share, and we moved to a quieter spot. 

Van battles attention deficit disorder and narcolepsy, and now spends his time as a private scholar who researches these ailments and works on a theory of consciousness. The conversation turned to the intersection of cognitive science and chess, which naturally led to a discussion about the hypothetical Chinese Room Thought Experiment first proposed by philosopher John Searle.

In this experiment, an English-speaking person sits in a locked room. After a question written in Chinese is slipped under the door, the person follows rules on a flowchart that describes how to write an answer in Chinese. The person then slips the answer back under the door. It would appear to people outside the room that there is an intelligent, native Chinese speaker inside, similar to the way a chess engine appears to conceal a mini super-grandmaster. Searle argued that when the English-speaking person (or a computer) follows a set of instructions to translate a language, no matter how well, they do not understand the language in the same way a native human speaker does. The same skepticism can be applied to whether or not computers understand chess.

Chess has often been described as a form of language, and when I propose to Regan that today’s chess engines approach perfect play by following a set of rules embedded in source code, similar to the way the translator inside the Chinese Room follows a flowchart, he carefully considers his response. The implication that the best chess-playing entities on the planet follow rules revisits the ongoing debate in the chess community about whether or not human chess players also use rules to evaluate positions. Ironically, chess computers are commonly believed to play in a style that ignores rules.

Regan speaks English, Spanish, German, Italian, and French, and he approaches the debate by distinguishing rules written in human language from those written in computer code. “When we get to the tunable parameters in the program,” says Regan, “all of the magic constants that define the value of the queen, the value of a rook, the value of a knight, the value of certain positional play, the values of squares, of attacks, these parameters are tuned by performance, linear regression. Programmers don’t necessarily have a theory about what values or rules for those parameters work well. They have a general idea, but the final values are determined by [the engine] playing lots of very fast games against itself and seeing which values perform best.” When I insist that the ones and zeroes of an engine’s compiled code remain static, similar to rules written in a book, he leans back and restates his point. “Yes, that’s true. But computers use regression.”

What Regan means by regression is this: While some ones and zeroes remain static in the engine’s initial program, other ones and zeroes essential for the engine’s evaluation function — those essential for the way it “thinks” — rapidly update in short-term random access memory (RAM). This process mimics training and enhances a computer’s ability to do more than just calculate. Regression creates real-time feedback that allows engines to “think” about each position unburdened by context, similar to the way a human weighs imbalances. But computers calculate much faster. Deep Blue, the first computer to defeat a world champion in a standard time control match, succeeded despite its relatively poor evaluation function and made up for this deficiency via fast calculation. Today’s top engines would destroy Deep Blue, because they evaluate better — because, ironically, they “think” more like a human.

“[Alan] Turing wanted to model human cognition with a computer, but I’m going in the opposite direction,” Regan says. “I want to use the computer to inform us about the human mind.” Regan’s data has reproduced a result in psychology first discovered by Nobel Prize-winning economist Daniel Kahneman and his colleague Amos Tversky, which states that human perception of value is relative. “You’ll drive across town to save $4 on a $20 purchase, but you wouldn’t do it for a $2,000 purchase,” says Regan. His data shows that players make 60 percent to 90 percent more errors when half a pawn ahead or behind than when the game is even. Regan claims that this is an actual cognitive effect, not a result of high-risk/high-reward play, because it is observed both in players who have the advantage and in those who have the disadvantage.

Chess has been called the drosophila (a small fruit fly used extensively in genetic research because of its large chromosomes, numerous varieties, and rapid rate of reproduction) of artificial intelligence. It is a popular resource for research in cognitive science and psychology, because the Elo rating system provides an objective measure of human skill. Regan’s work follows this scientific tradition. He has processed over 200,000 reference games played by players ranging in Elo from 1600 to 2800, using Rybka 3 at depth 13 in single-line mode. Single-line mode is a bit less accurate than multi-line mode, but it runs roughly 20 times faster. These reference games provide a rich set of data with which to create all sorts of chess-based applications.

In 2012, FIDE sold the marketing and licensing rights of professional chess to AGON, a company run by Andrew Paulson. According to the New York Times, “[Paulson] wants to turn chess into the next mass-market spectator sport.” Paulson plans to supplement Internet coverage of major competitions with something he calls ChessCasting, a broadcast of not only moves, commentary, video, and live engine evaluations, but also biometrics such as a player’s pulse, eye movements, blood pressure, and sweat output. Regan’s work adds many non-invasive statistics to this list. “The greatest immediate impact on the professional chess world that I think I’m going to have, besides my anti-cheating work, is that I’m going to come up with a statistic called ‘Challenge Created,’ which is going to be an objective way to single out the players who create difficult problems for their opponents.” The greatest over-the-board practical problems are not always caused by the objectively best moves, and Regan’s metric can quantify this distinction.

Fig5Regan.jpg

Other statistics that emerge from Regan’s IPR calculation include ways to visualize the degradation of move quality during time pressure (in Figure 5, notice how error increases as the move number approaches 40, the standard time control) and a way to normalize the different chess rating systems of the world. Amateur players constantly wonder how, say, their Chess.com rating compares to their national federation’s rating. IPRs provide a way to standardize this comparison. In some ways, IPRs are even more accurate than traditional ratings, because they’re calculated on a per-move basis rather than on a per-game basis. One bad tournament could sink a traditional rating, but if this bad tournament was the effect of only, say, three isolated bad moves, then such bad luck would not detrimentally affect an IPR. Regan does admit, however, that engines bias their evaluations ever so slightly against human-like moves, and this effect “nudges IPRs slightly out of tune.” The exact reason for this tiny bias is unclear, and it obsesses Regan during his free time.

For the improving player, IPRs can be used as training metrics for different phases of the game. Say a person wants to obtain an objective measure of how well they play middle games out of the Ruy Lopez versus how well they play middle games out of the Scandinavian. All they would need to do is isolate the particular moves and positions of interest, send them through Regan’s IPR-generator, and they have a performance metric. This method has been used by Regan to rate historical players.

For years, statistician Jeff Sonas has been rating historical players, but Regan’s IPR is more objective. Sonas uses historical game results, which provide information about relative performance only within eras, since only players alive during the same period can play each other. But because Regan’s method compares moves to a common standard (the engine), rather than the results of games, he can objectively relate player abilities across eras. What he found was that rating inflation does not exist.

Fig6Regan.jpg

Between 1976 and 2009, there has been no significant change in IPR for players at any FIDE rating. Figure 6 shows, for example, how the IPR for players rated between 2585 and 2615 has remained relatively constant over time. Today’s thousands of grandmasters and dozens of players rated over 2700 indicate a legitimate proliferation of skill. Thus one may conclude that Hikaru Nakamura’s peak FIDE rating of 2789 beats Bobby Fischer’s peak of 2785 for best American chess player of all time, and Magnus Carlsen’s peak rating of 2881 places him as the best human chess player of all time. (See Figure 6)

Why do we fail to understand those who cheat? In the journal The New Atlantis, Jeremy Ruzansky writes, “Performance-enhancing drugs are a type of cheating that does not merely alter wins and losses or individual records, but transforms the very character of the athlete. … If our entire goal were to break pitching records in baseball, we could build pitching machines to pitch perfect games. It is worth asking why we would never do this, why we would never substitute our sportsmen with machines, even though machines could easily achieve superior performance.”

Ruzansky’s answer is that we value statistics only as the result of superior human performance. Countless athletes and chess players, including Bobby Fischer, have compared sports to life. “Chess is life,” the former American world champion said. Sports provide society with a metaphor for the competition inherent in life, and this metaphor works only when a living person competes — or, in chess, when a living mind contemplates the complexities of the moves.

Yet cheaters look upon their act as its own kind of sport. In the Journal of Personality and Social Psychology, researchers found that cheaters enjoy the high of getting away with their wrongdoing, even if they know others are aware of it. Borislav Ivanov, for example, continued to cheat after he was caught but before he was suspended. Behavior like Ivanov’s poses a great threat to tournament chess, because it doesn’t take much risk to reap the reward. Faced with a complex calculation, a player could sneak a smartphone into the bathroom for one move and cheat on only a single critical position. Former World Champion Viswanathan Anand said that one bit per game, one yes-no answer about whether a sacrifice is sound, could be worth 150 rating points.

“I think this is a reliable estimate,” says Regan. “An isolated move is almost uncatchable using my regular methods.”
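Anand’s figure can be put into perspective with the standard Elo expected-score formula: a 150-point edge turns an even match into roughly a 70-30 proposition.

```python
def expected_score(rating_diff):
    """Standard Elo expected score for the higher-rated side,
    given the rating gap in points."""
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

print(round(expected_score(0), 2))    # evenly matched: 0.5
print(round(expected_score(150), 2))  # a 150-point edge: about 0.7
```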

But selective-move cheaters would be doing it on critical moves, and Regan has untested tricks for these cases. “If you’re given even just a few moves, where each time there are, say, four equal choices, then the probabilities of matching these moves become statistically significant. Another way is for an arbiter to give me a game and tell me how many suspect moves, and then I’ll try to tell him which moves, like a police lineup. We have to know which moves to look at, however, and, importantly — this is the vital part — there has to be a criterion for identifying these moves independent of the fact they match.”
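The arithmetic behind that trick: if each flagged position offers, say, four roughly equal candidate moves, the chance of matching the engine on all of them by luck shrinks geometrically, so even a handful of matches becomes statistically significant:

```python
def p_all_match(n_moves, n_choices=4):
    """Chance of matching the engine on every one of n flagged moves
    by luck, if each position has n_choices roughly equal candidates."""
    return (1.0 / n_choices) ** n_moves

for n in (3, 5, 10):
    print(n, p_all_match(n))
```

Three matches is already a 1-in-64 event; ten matches is below one in a million, which is why the flagged positions must be chosen by a criterion independent of the match itself.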

None of these selective-move techniques has yet been discussed with the ACP-FIDE anti-cheating committee, but Regan is confident they will work. Still, he keeps his optimism restrained. He doesn’t look forward to the leapfrogging effect bound to happen between cheaters and the people who catch them, a phenomenon that has invariably plagued other sports. Other challenges remain, too. A new “depth” parameter to model the number of plies a player evaluates is being researched to join ‘s’ and ‘c’; the standard engine is being converted from Rybka 3 to Houdini; and the ever-present but minimal anti-human bias in engine scores must be cancelled.

In 2012, Regan lost an exhibition match to a Lego-built robot running the Houdini engine, equipped with an arm that moved the pieces on a real board and a camera that could interpret the position. The experience made an awesome impression on him. “Is technology going to be so ubiquitous that we’ll not be able to police it anymore?” he asks while he, his wife, and I eat dinner at a local Thai restaurant. Regan slumps over his food, looking depressed about the need to even ask the question. “Houdini won using only six seconds per move,” he says. The exhibition reminds Regan that his calling has carved valuable time from his research and family. “He’s obsessed,” says his wife, who sits across the table. Then she adds, “But you’ve got to be obsessed to be good.” Regan ignores the flattery, his attention held by an emerging thought. Finally he springs forward in his chair, smiling. “By the way,” he says. “This project was run by a person whose mother and my mother share a best friend back in New Jersey.”

See Dr. Regan’s website www.cse.buffalo.edu/~regan/chess/ for more of his work.



2 comments:

rjs said...

the inflation adjustment is certainly the fly in the ointment of the GDP figures, though i'm not sure Williams has it right either...

i've taken to reporting what's there, figure anyone with any intelligence reading closely will form their own questions...

for instance, last week i wrote: "real personal outlays for durable goods rose at an annual 1.2% rate in the quarter, rather than the 1.4% rate previously reported, even though the current dollar spending for durable goods fell by over 1.0%, as the price deflator for durable goods was negative 2.5%...deflation adjusted outlays for motor vehicles accounted for more than half of the increase in consumer spending for durables, while real outlays for durable household equipment and furniture shrunk at a 1.6% annual rate..."

i just put it out there; it's up to you as to whether you want to believe that prices for cars, TVs, furniture and appliances fell at a 2.5% rate in the first quarter, or not...

Cirze said...

But who's closer?

Love you!

And thanks for all the updates.