Data cleaning

Allegedly 90% of the work of statistics.

Related: outlier detection. Useful: text data wrangling.

Great Expectations verifies that your data conforms to the distributions and constraints you specify. (Keyword: pipeline testing.)
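The core idea is simple enough to sketch by hand: declare expectations about a dataset up front, then run them and collect failures. This is an illustrative, hand-rolled sketch of the concept, not the Great Expectations API (function names here are invented):

```python
# Minimal sketch of "pipeline testing": declarative checks over tabular data.
# Each check returns a result dict rather than raising, so a pipeline can
# collect all failures at once.

def expect_column_values_between(rows, column, low, high):
    """Expect every value of `column` to fall in [low, high]."""
    bad = [r for r in rows if not (low <= r[column] <= high)]
    return {"success": not bad, "unexpected_count": len(bad)}

def expect_column_not_null(rows, column):
    """Expect no value of `column` to be None."""
    bad = [r for r in rows if r[column] is None]
    return {"success": not bad, "unexpected_count": len(bad)}

data = [
    {"age": 34, "name": "a"},
    {"age": 151, "name": "b"},   # out of plausible range
    {"age": 28, "name": None},   # missing name
]

results = [
    expect_column_values_between(data, "age", 0, 120),
    expect_column_not_null(data, "name"),
]
print(results)
```

In the real library the same pattern appears as a suite of named expectations run against a validated batch of data; the payoff is that data-quality assumptions become executable tests rather than comments.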

  • Edwin de Jonge and Mark van der Loo, Data cleaning with R

  • Snorkel is a hybrid method that, AFAICT, iteratively refines weak labels:

    Today’s state-of-the-art machine learning models require massive labeled training sets — which usually do not exist for real-world applications. Instead, Snorkel is based around the new data programming paradigm, in which the developer focuses on writing a set of labeling functions, which are just scripts that programmatically label data. The resulting labels are noisy, but Snorkel automatically models this process—learning, essentially, which labeling functions are more accurate than others—and then uses this to train an end model (for example, a deep neural network in TensorFlow).

    Surprisingly, by modeling a noisy training set creation process in this way, we can take potentially low-quality labeling functions from the user, and use these to train high-quality end models. We see Snorkel as providing a general framework for many weak supervision techniques, and as defining a new programming model for weakly-supervised machine learning systems.
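The quoted data-programming idea can be sketched without the library: several noisy labeling functions vote on each example, abstaining when they have nothing to say, and the votes are combined into a single (still noisy) training label. This is a toy illustration with invented labeling functions; real Snorkel fits a generative label model that learns each function's accuracy, rather than the simple tie-aware majority vote used here:

```python
from collections import Counter

# Label conventions: labeling functions may abstain on an example.
ABSTAIN, HAM, SPAM = -1, 0, 1

# Illustrative labeling functions: cheap scripts encoding heuristics.
def lf_contains_free(text):
    return SPAM if "free" in text.lower() else ABSTAIN

def lf_contains_meeting(text):
    return HAM if "meeting" in text.lower() else ABSTAIN

def lf_many_exclamations(text):
    return SPAM if text.count("!") >= 2 else ABSTAIN

LFS = [lf_contains_free, lf_contains_meeting, lf_many_exclamations]

def combine(votes):
    """Majority vote over non-abstaining LFs; abstain on ties."""
    counts = Counter(v for v in votes if v != ABSTAIN)
    if not counts:
        return ABSTAIN
    ranked = counts.most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return ABSTAIN  # conflicting evidence, no confident label
    return ranked[0][0]

texts = [
    "FREE money!! click now!!",          # two LFs vote SPAM
    "Agenda for tomorrow's meeting",     # one LF votes HAM
    "free meeting invite",               # SPAM vs HAM: conflict
]

label_matrix = [[lf(t) for lf in LFS] for t in texts]
labels = [combine(row) for row in label_matrix]
print(labels)
```

The interesting step Snorkel adds on top of this sketch is learning, from the agreement/disagreement structure of the label matrix alone, which labeling functions are more accurate, and weighting their votes accordingly before training the end model.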

See also this recent Snorkel review.
