Category Archives: Data Quality

“Big Data” to solve everything?

In a blog post (http://www.johndcook.com/blog/2010/12/15/big-data-is-not-enough/), John D. Cook quotes Bradley Efron from an article in Significance. It runs counter (or is at least thought-provoking) to the mainstream 'Big Data' mantra: given enough data, you can figure anything out. Here is the quote, with John D. Cook's emphasis added:

“In some ways I think that scientists have misled themselves into thinking that if you collect enormous amounts of data you are bound to get the right answer. You are not bound to get the right answer unless you are enormously smart. You can narrow down your questions; but enormous data sets often consist of enormous numbers of small sets of data, none of which by themselves are enough to solve the thing you are interested in, and they fit together in some complicated way.”

What struck a chord with me (a data guy) was the statement 'and they fit together in some complicated way'. Every time we examine a data set, there are all kinds of hidden nuances embedded in the content, or (more often) in the metadata. Things like the following (a small sketch turning these questions into checks appears after the list):

  • ‘Is this everything, or just a sample?’ – If it is a sample, how was it created? Is it a random sample or a time-series sample?
  • ‘Are any cases missing from this data set?’ – Oh, the website only logs successful transactions; if a transaction wasn’t successful, it was discarded.
  • ‘Are there any procedural biases?’ – When a customer didn’t give us their loyalty card, the clerks just swiped their own to give them the discount.
  • ‘Is there data that was withheld due to privacy issues?’ – Oh, that extract has the birthdays blanked out.
  • ‘How do you know that the data you received is what was sent to you?’ – We figured out the issue: when Jimmy saved the file, he opened it and browsed through the data before loading. It turns out his cat walked on the keyboard and changed some of the data.
  • ‘How do you know that you are interpreting the content properly?’ – Hmm, this column has a bunch of ‘M’s and ‘F’s. That must mean Male and Female. (Or have you just changed the gender of all the data because you mistakenly translated ‘M-Mother and F-Father’?)
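None of these questions is exotic, and most can be turned into quick, repeatable checks before any analysis begins. Here is a minimal profiling sketch in Python/pandas; the file name and every column name (timestamp, status, loyalty_card_id, birth_date, person_code) are hypothetical stand-ins for illustration:

    import pandas as pd

    # Hypothetical extract; the file and column names are made up.
    df = pd.read_csv("transactions.csv", parse_dates=["timestamp"])

    # 'Is this everything, or just a sample?' -- look for gaps in coverage.
    days_present = df["timestamp"].dt.normalize().nunique()
    days_expected = (df["timestamp"].max() - df["timestamp"].min()).days + 1
    print(f"days with data: {days_present} of {days_expected}")

    # 'Are any cases missing?' -- do failed transactions appear at all?
    print(df["status"].value_counts(dropna=False))

    # 'Procedural biases?' -- a few loyalty cards swiped far too often
    # suggests clerks are substituting their own cards.
    print(df["loyalty_card_id"].value_counts().head(5))

    # 'Privacy blanking?' -- what fraction of birth dates is missing?
    print(df["birth_date"].isna().mean())

    # 'Interpreting the content properly?' -- list every distinct code
    # instead of assuming 'M'/'F' means what you think it means.
    print(sorted(df["person_code"].dropna().unique()))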

All of this gets even more complicated once you start integrating data sets, and this is what Bradley Efron was getting at. Each of these nuances is exacerbated when you try to marry data sets from different places. How do you reconcile two different sets of product codes that carry their own procedural biases but essentially report on the same things?
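To make that reconciliation problem concrete, here is a hedged sketch (the codes, counts, and hand-maintained crosswalk below are all invented for illustration) of marrying two product-code schemes and surfacing the codes that do not line up:

    import pandas as pd

    # Two sources reporting on the same products under different code schemes.
    sales_a = pd.DataFrame({"code_a": ["X100", "X200", "X300"], "units": [10, 5, 7]})
    sales_b = pd.DataFrame({"code_b": ["P-1", "P-2", "P-9"], "units": [12, 4, 3]})

    # A hand-maintained crosswalk is usually where the 'complicated fit'
    # between data sets lives -- and where its gaps show up.
    crosswalk = pd.DataFrame({"code_a": ["X100", "X200"],
                              "code_b": ["P-1", "P-2"]})

    merged = (sales_a.merge(crosswalk, on="code_a", how="outer")
                     .merge(sales_b, on="code_b", how="outer",
                            suffixes=("_a", "_b")))

    # Codes with no counterpart silently distort any combined analysis,
    # so report them rather than letting a join drop them.
    unmatched = merged[merged["code_a"].isna() | merged["code_b"].isna()]
    print(unmatched)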

Full article here:

http://www-stat.stanford.edu/~ckirby/brad/other/2010Significance.pdf

Filed under Big Data, Data, Data Management, Data Quality

Data Quality – Garbage In, Garbage Out

From Kaiser Fung’s blog “Numbers Rule Your World”:

The following article discusses the behind-the-scenes process of preparing data for analysis. It points to the “garbage in, garbage out” problem. One should always be aware of the potential hazards.

“The murky world of student-loan statistics”, Felix Salmon (link)

At the end of his post, Felix found it remarkable that the government would not have better access to the data.

The Reuters blog post by Felix describes the typical problems with data and the challenges facing the analysts who consume it. The problem is difficult enough when ‘you own all the data’ (i.e., you can examine how the data is created, aggregated, and managed, because you are the source). However, most analysis needs more than one pocket of data and relies on external sources to supplement what you already have. The more removed an analyst is from the source, the less insight and understanding they have into its data quality.

One of the more disturbing aspects of Felix’s post is that, despite knowing there are significant errors in the previously published data, the NY Fed is only going to adjust its interpretation of current and future data. Thus the longitudinal view (the view across time) will have this strange (and likely soon forgotten) jump in the amount of student-loan debt. Good luck trying to do a longitudinal study with that data series.
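To see why that jump matters, here is a toy illustration (the numbers are invented, not the NY Fed's actual figures): an unflagged restatement shows up as an enormous quarter of 'growth' to any naive longitudinal calculation.

    import pandas as pd

    # Quarterly series with a restated basis starting at 2012Q1 (made-up numbers).
    debt = pd.Series([550, 560, 570, 580, 870, 880, 890],
                     index=pd.period_range("2011Q1", periods=7, freq="Q"))

    # The ~50% 'growth' at 2012Q1 is a methodology artifact, not a trend.
    print(debt.pct_change())

    # The honest fix is to carry the break as metadata and split the series,
    # not to pretend it is one continuous measurement.
    break_at = pd.Period("2012Q1", freq="Q")
    before, after = debt[debt.index < break_at], debt[debt.index >= break_at]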

A colleague responded to this by citing a CNN interview with a former GM executive discussing why GM declined. GM’s management culture (dominated by MBAs, numbers people) made decisions based on what the data told them. When the former executive brought in perspectives from past experience, gut feeling, and subjective judgement, he was advised that he came across as immature. My colleague commented:

“This provides a footnote to why over-reliance on data is dangerous in itself”

To rephrase/expand his point, I would say:

“Data-centric decision making is the most scientific basis for substantive decision making.  HOWEVER, if you don’t understand the underlying data and its inherent flaws (known and/or unknown), you are living in a dream world.”

I think this is what he meant by ‘over-reliance’: total trust in the data in front of you, to the exclusion of everything else.

In my view, you are almost always faced with these two conditions:

  1. Your data stinks, or at least has some rotten parts.
  2. You don’t have all the data that you really need/want.

Once you acknowledge those conditions, you can start examining the ‘gut feel’ and the ‘subjective judgment’ in view of the data gaps.

Filed under Data, Data Management, Data Quality