Data Provenance and Data Lineage: the View from the Podcasts

Posted on Thu 30 November 2017 in TDDA

In Episode 49 of the Not So Standard Deviations podcast, the final segment (starting at 59:32) discusses data lineage, after Roger Peng listened to the September 3rd (2017) episode of another podcast, Linear Digressions, which discussed that subject.

This is a topic very close to our hearts, and I thought it would be useful to summarize the discussions on both podcasts, as a precursor to writing up how we approach some of these issues at Stochastic Solutions—in our work, in our Miró software and through the TDDA approaches discussed on this blog.

It probably makes sense to begin by summarizing the Linear Digressions episode, in which Katie Malone explains the idea of data lineage (also known as data provenance): the tracking of changes to a dataset.

Any dataset starts from one or more "original sources"—the sensors that first measured the quantities recorded, or the system that generated the original data. In almost all cases, a series of transformations is then applied to the data before it is finally used in some given application. For example, in machine learning, Katie describes typical transformation stages as:

  1. Cleaning the data
  2. Making additions, subtractions and merges to the dataset
  3. Aggregating the data in some way
  4. Imputing missing values

She describes this as the process view of data lineage.
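
To make the process view a little more concrete, here is a minimal sketch of such a pipeline in pandas, using invented tables, column names and sentinel values (none of this comes from the podcast; it is just an illustration of the four stages):

    import numpy as np
    import pandas as pd

    # Invented raw data: -1.0 is used as a bad-value sentinel for amount
    raw = pd.DataFrame({
        'customer_id': [1, 2, 2, 3],
        'amount': [10.0, -1.0, 25.0, np.nan],
        'date': ['2017-01-03', '2017-01-04', '2017-01-04', '2017-01-05'],
    })
    customers = pd.DataFrame({
        'customer_id': [1, 2, 3],
        'region': ['UK', 'US', 'FR'],
    })

    # 1. Clean: replace sentinel values with nulls and fix the date type
    cleaned = raw.assign(amount=raw['amount'].replace(-1.0, np.nan),
                         date=pd.to_datetime(raw['date']))

    # 2. Additions/merges: join the customer table onto the transactions
    merged = cleaned.merge(customers, on='customer_id', how='left')

    # 3. Aggregate: total spend per region (all-null groups stay null)
    by_region = merged.groupby('region', as_index=False)['amount'].sum(min_count=1)

    # 4. Impute: fill the missing totals with zero
    final = by_region.fillna({'amount': 0.0})
    print(final)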

An alternative perspective focuses less on the processes than on the resulting sequence of datasets, viewed as snapshots. As her co-host, Ben Jaffe, points out, this is more akin to the way version control systems view files or collections of files, and diff tools (see below) effectively reconstruct the process view¹ from the data.

In terms of tooling, Katie reckons that the tools for tracking data lineage are relatively well developed (but specialist) in large scientific collaborations such as particle physics (she used to work on the LHC at CERN) and genomics, but her sense is that the tools are less well developed in many business contexts.

She then describes five reasons to care about data provenance:

  1. (To improve/track/ensure) data quality
  2. (To provide) an audit trail
  3. (To aid) replication (her example was being able to rebuild a predictive model that had been lost if you knew how it had been produced; this obviously requires not only the data, but also full details of the parameters, the training regime and any random number seeds used)
  4. (To support) attribution (e.g. providing a credit to the original data collector when publishing a paper)
  5. Informational (i.e. to keep track of, and aid navigation within, large collections of datasets).

I recommend listening to the episode.

In Not So Standard Deviations 49, Roger Peng introduces the idea of tracking and versioning data, as a result of listening to Katie and Ben discuss the issue on their podcast. Roger argues that while you can stick a dataset into Git or other version control software, doing so is not terribly helpful because, most of the time, the dataset acts essentially as a blob,² rather than as a structured entity that diff tools can help you to understand.

In this, he is exactly right. In case you're not familiar with version control and diff tools, let me illustrate the point. In a previous post on Rexpy, I added a link to a page about our Miró software between two edits. If I run the relevant git diff command, this is its output:

[Image: git diff of two versions of the markdown for a blog post]

As you can see, this shows pretty clearly what's changed. Using a visual diff tool, we get an even clearer picture of the changes (especially when the changes are more numerous and complex):

[Image: visual diff (opendiff) of two versions of the markdown for a blog post]

In contrast, if I do a diff on two Excel files stored in Git, I get the following:

    git diff b1b85ddc448a723845c36688480cfe5072f28c1a -- test-excel-sheet1.xlsx
    diff --git a/testdata/test-excel-sheet1.xlsx b/testdata/test-excel-sheet1.xlsx
    index 0bd63cb..91e5b0e 100644
    Binary files a/testdata/test-excel-sheet1.xlsx and b/testdata/test-excel-sheet1.xlsx differ

This is better than nothing, but gives no insight into what has changed. (In fact, it's worse than it looks, because even changes to the metadata inside an Excel file, such as the location of the selected cell, will cause the files to be shown as different. As a result, there are many hard-to-detect false positives when using diff commands with binary files.)
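
One partial workaround, if what you want is a content-level comparison of a spreadsheet rather than a binary one, is to load both versions into data frames and compare those. Here is a minimal sketch using pandas (the file paths are illustrative, and DataFrame.compare requires pandas 1.1 or later, with two frames of identical shape and labels):

    import pandas as pd

    old = pd.read_excel('old/test-excel-sheet1.xlsx')   # illustrative paths
    new = pd.read_excel('new/test-excel-sheet1.xlsx')

    if old.equals(new):
        print('No changes to the cell contents.')
    else:
        # Show only the cells that differ: "self" is the old version,
        # "other" is the new one
        print(old.compare(new))

Because only the cell values are compared, this sidesteps the metadata-only false positives described above, though it says nothing about formatting or formulae.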

Going back to the podcast, Hilary Parker then talked about the difficulty of applying the idea of data provenance to streaming data, but argued that there was a bit more hope with batch processes, since at least the datasets used there are "frozen".

Roger then argued that there are good custom tools used in particular places like CERN, but those are not tools he can just pick up and use. He then wondered aloud whether such tools can really exist, because they require too much understanding of the analytical goals. (I don't agree with this, as the next post will show.) He then rowed back slightly, saying that maybe it's too hard for general data, but more feasible in a narrower context such as "tidy data".

If you aren't familiar with the term "tidy data", it really just refers to data stored the way relational databases store data in tables: with columns corresponding to variables, rows corresponding to the items being measured (the observations), consistent types for all the values in a column, and a resulting regular, grid-like structure. (This contrasts with, say, JSON data, which is hierarchical; with many spreadsheets, in which different parts of the sheet are used for different things; and with data in which the observations are in columns and the variables are in rows.) So "tidy data" covers an extremely large subset of the data we use in structured data analysis, at least after initial regularization.
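
To make that concrete, here is a small invented example (the figures are purely illustrative) in which untidy data, with the observations in columns, is reshaped into tidy form using pandas:

    import pandas as pd

    # Untidy: each observation (a city) is a column; the variables are in rows
    untidy = pd.DataFrame({
        'measure': ['population', 'area_km2'],
        'London': [8800000, 1572],
        'Paris': [2200000, 105],
    })

    # Tidy: one row per city (observation), one column per variable
    tidy = untidy.set_index('measure').T.rename_axis('city').reset_index()
    tidy.columns.name = None       # drop the leftover 'measure' axis label
    print(tidy)
    #      city  population  area_km2
    # 0  London     8800000      1572
    # 1   Paris     2200000       105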

Hilary then mentioned an R package called testdat, which she had worked on at an "unconference". This aimed to check things like missing values and contiguity of dates in datasets. These ideas are very similar to those of constraint generation and verification, which we frequently discuss on this blog, as supported in the TDDA package. But Hilary (and others?) concluded that the package was not really useful, and that what was more important was tools to make writing tests for data easier. (I guess we disagree that general tools like this aren't useful, but very much support the latter point.)
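
For anyone who hasn't seen it, this is roughly what constraint discovery and verification look like with the pandas interface to the TDDA package; this is a sketch based on the current discover_df/verify_df API, and the file names are purely illustrative:

    import pandas as pd
    from tdda.constraints import discover_df, verify_df

    df = pd.read_csv('sales.csv')               # illustrative input data

    # Discover constraints (types, min/max, nullability, uniqueness, ...)
    # from the data and save them as a JSON (.tdda) file
    constraints = discover_df(df)
    with open('sales_constraints.tdda', 'w') as f:
        f.write(constraints.to_json())

    # Later, or on a new version of the data: verify against the saved constraints
    verification = verify_df(df, 'sales_constraints.tdda')
    print(verification)                         # summary of passes and failures

As it happens, the discovered constraints are saved as plain JSON, so they also diff perfectly well under version control.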

Roger then raised the idea of tracking changes to such tests as a kind of proxy for tracking changes to the data, though clearly this is a very partial solution.

Both podcasts are interesting and worth a listen, but the hosts of both seemed to feel that there is very little standardization in this area, and that the problem is genuinely hard.

We have a lot of processes, software and ideas addressing many aspects of these issues, and I'll discuss them and try to relate them to the various points raised here in a subsequent post, fairly soon, I hope.


  1. Strictly speaking, diff tools cannot know what the actual processes used to transform the data were, but construct a set of atomic changes (known as patches) that are capable of transforming the old dataset into the new one. 

  2. A blob, in version control systems like Git, is a Binary Large OBject. Git allows you to store blobs, and will track different versions of them, but all you can really see is whether two versions are the same, whereas for "normal" (text) files, visual diff tools normally allow you to see easily exactly what changes have been made, much like the track-changes feature in Word documents.