
Posts Tagged ‘correlation’

A good piece on Seeking Alpha about market correlations argues that these days individual fundamentals don’t seem to matter as much as general market trends:

We used to be a market influenced by lemmings, but now they run the market. Remember two weeks ago? Every asset up… get out of the way. Risk is ON! Five weeks ago? You own stuff? Loser! Risk is OFF! 12 weeks ago? Everything must go up! Last week? Sell sell sell… everything!

The measured correlation across markets is around 80%, which suggests that investing is now more a game of follow-the-leader than of picking individual opportunities.

Read on here.
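To make that 80% figure concrete, here’s a minimal sketch in Python of how one might measure average pairwise correlation of daily returns. The five assets, their volatilities, and the single shared “market” factor driving the simulated returns are my assumptions for illustration only; none of it comes from the article.

```python
# Minimal sketch: average pairwise correlation of simulated daily returns.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_assets = 250, 5                     # one hypothetical trading year, five assets

market = rng.normal(0, 0.010, n_days)         # common daily market move shared by all assets
noise = rng.normal(0, 0.005, (n_days, n_assets))
returns = market[:, None] + noise             # each asset = market factor + idiosyncratic noise

corr = np.corrcoef(returns, rowvar=False)     # 5 x 5 correlation matrix of asset returns
avg_pairwise = corr[~np.eye(n_assets, dtype=bool)].mean()
print(f"average pairwise correlation: {avg_pairwise:.2f}")   # about 0.8 with these volatilities
```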

And for anecdotal proof, here’s how the price of crude oil can be correlated with other commodities, like alcohol:

LONDON (Reuters) – Britain’s financial regulator has fined and banned a former broker for manipulating oil prices by buying more than 7 million barrels while on a drinking binge.

The Financial Services Authority (FSA) said it fined Steven Perkins, a former employee of PVM Oil Futures Ltd, 72,000 pounds ($108,000) and banned him from working in financial services for at least five years for carrying out trades without the authority of clients or his employer.

Perkins’ unauthorized trading pushed the price of Brent crude oil futures up to almost $73.50 a barrel — at that point the highest level prices had hit on the InterContinental Exchange in 2009.

The rest of the insanity here.

It’s an absurd game we play.




Chris Anderson wrote an interesting piece in Wired this year heralding the dawn of the Petabyte Age: an age in which databases have grown so large that anyone with access to them can run computer correlations on just about anything and draw conclusions without the traditional theoretical modelling of the scientific method.

From Anderson’s article:

At the petabyte scale, information is not a matter of simple three- and four-dimensional taxonomy and order but of dimensionally agnostic statistics. It calls for an entirely different approach, one that requires us to lose the tether of data as something that can be visualized in its totality.

This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.

The big target here isn’t advertising, though. It’s science. The scientific method is built around testable hypotheses. These models, for the most part, are systems visualized in the minds of scientists. The models are then tested, and experiments confirm or falsify theoretical models of how the world works. This is the way science has worked for hundreds of years.

Scientists are trained to recognize that correlation is not causation, that no conclusions should be drawn simply on the basis of correlation between X and Y (it could just be a coincidence). Instead, you must understand the underlying mechanisms that connect the two. Once you have a model, you can connect the data sets with confidence. Data without a model is just noise.

There is now a better way. Petabytes allow us to say: “Correlation is enough.” We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.
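Here’s a toy illustration of what “letting statistical algorithms find patterns” can look like when there are genuinely no patterns to find. It’s a minimal sketch using nothing but randomly generated series; the series counts and sizes are arbitrary assumptions, and no real data is involved.

```python
# Toy sketch of "correlation is enough": mine a pile of pure-noise series
# for the strongest pairwise correlation and report it as a "discovery".
import numpy as np

rng = np.random.default_rng(1)
n_series, n_obs = 200, 50                     # 200 unrelated series, 50 observations each
data = rng.normal(size=(n_series, n_obs))     # pure noise: no real relationship exists

corr = np.corrcoef(data)                      # every pairwise correlation (~20,000 pairs)
np.fill_diagonal(corr, 0.0)                   # ignore each series' correlation with itself
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"'discovered' pattern: series {i} vs series {j}, r = {corr[i, j]:.2f}")
# With that many pairs to search, the best |r| routinely lands well above 0.5,
# which would look significant if you pretended you had not gone looking for it.
```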

Although the potential applications and implications of the Petabyte Age Chris describes are staggering, it reminds me of a logical fallacy known as the Texas sharpshooter fallacy:

The Texas sharpshooter fallacy is a logical fallacy in which information that has no relationship is interpreted or manipulated until it appears to have meaning. The name comes from a story about a Texan who fires several shots at the side of a barn, then paints a target centered on the hits and claims to be a sharpshooter.

The fallacy does not apply if one had an ex ante, or prior, expectation of the particular relationship in question before examining the data. For example one might, previous to examining the information, have in mind a specific physical mechanism implying the particular relationship. One could then use the information to give support or cast doubt on the presence of that mechanism. Alternatively, if additional information can be generated using the same process as the original information, one can use the original information to construct a hypothesis, and then test the hypothesis on the new data. See hypothesis testing. What one cannot do is use the same information to construct and test the same hypothesis — to do so would be to commit the Texas sharpshooter fallacy.

The fallacy is related to the clustering illusion, which refers to the tendency in human cognition to interpret patterns in randomness where none actually exist.
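The remedy described above, constructing the hypothesis on one sample and then testing it on new data from the same process, is easy to see in the same toy setting. Again, this is a sketch over purely simulated noise, nothing real:

```python
# Toy sketch of the fix the quote describes: construct the hypothesis on one
# sample, then test it on fresh data generated by the same (pure-noise) process.
import numpy as np

rng = np.random.default_rng(1)
n_series, n_obs = 200, 50
old = rng.normal(size=(n_series, n_obs))      # data used to "discover" the pattern
new = rng.normal(size=(n_series, n_obs))      # fresh data from the same process

corr_old = np.corrcoef(old)
np.fill_diagonal(corr_old, 0.0)
i, j = np.unravel_index(np.abs(corr_old).argmax(), corr_old.shape)

r_old = corr_old[i, j]
r_new = np.corrcoef(new[i], new[j])[0, 1]     # re-test the same pair out of sample
print(f"in-sample r = {r_old:.2f}, out-of-sample r = {r_new:.2f}")
# The pattern that looked impressive in the mined sample collapses toward zero
# on data it was not fitted to, which is the ex ante test the quote calls for.
```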

The sharpshooter fallacy is what allows pseudo-science such as The Bible Code to persist, even when it appears to be backed by statistically significant data.

Now consider the Petabyte version of The Bible Code.

Chilling.
