Data sets

Data sets are answers looking for questions.

See also musical corpora for some specialised music ones.

Generic tools for construction thereof

  • Snorkel is a hybrid method that, AFAICT, iteratively refines weak labels (see the sketch after this list):

    Today’s state-of-the-art machine learning models require massive labeled training sets – which usually do not exist for real-world applications. Instead, Snorkel is based around the new data programming paradigm, in which the developer focuses on writing a set of labeling functions, which are just scripts that programmatically label data. The resulting labels are noisy, but Snorkel automatically models this process—learning, essentially, which labeling functions are more accurate than others—and then uses this to train an end model (for example, a deep neural network in TensorFlow).

    Surprisingly, by modeling a noisy training set creation process in this way, we can take potentially low-quality labeling functions from the user, and use these to train high-quality end models. We see Snorkel as providing a general framework for many weak supervision techniques, and as defining a new programming model for weakly-supervised machine learning systems.

  • Engauge

    The Engauge Digitizer tool accepts image files (like PNG, JPEG and TIFF) containing graphs, and recovers the data points from those graphs. The resulting data points are usually used as input to other software applications. Conceptually, Engauge Digitizer is the opposite of a graphing tool that converts data points to graphs. [..] an image file is imported, digitized within Engauge, and exported as a table of numeric data to a text file.

    (They mean graph in the sense of plot, not in the sense of network.)
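
For a flavour of the data-programming workflow Snorkel describes, here is a minimal sketch, assuming Snorkel ≥ 0.9 and a toy pandas DataFrame with a text column; the column name, the labels and the spam-flavoured heuristics are all invented for illustration.

    import pandas as pd
    from snorkel.labeling import labeling_function, PandasLFApplier
    from snorkel.labeling.model import LabelModel

    ABSTAIN, HAM, SPAM = -1, 0, 1

    # Labeling functions: cheap, noisy heuristics that each vote on an example
    # (or abstain). They are allowed to conflict and to be wrong.
    @labeling_function()
    def lf_contains_link(x):
        return SPAM if "http" in x.text.lower() else ABSTAIN

    @labeling_function()
    def lf_short_message(x):
        return HAM if len(x.text.split()) < 5 else ABSTAIN

    df_train = pd.DataFrame({"text": [
        "check out http://example.com for cheap deals",
        "ok see you at lunch",
        "free money!!! click http://spam.example",
    ]})

    # Apply all labeling functions to get a noisy label matrix.
    applier = PandasLFApplier(lfs=[lf_contains_link, lf_short_message])
    L_train = applier.apply(df=df_train)

    # The label model learns how much to trust each labeling function and
    # emits probabilistic training labels for a downstream end model.
    label_model = LabelModel(cardinality=2, verbose=False)
    label_model.fit(L_train=L_train, n_epochs=500, seed=123)
    print(label_model.predict_proba(L=L_train))

The probabilistic labels are then used to train whatever end model you like, e.g. a neural network, as in the quoted description.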

Miscellaneous data sets

  • Google’s dataset search
  • academic torrents:

    Torrent technology allows a group of editors to “seed” their own peer-reviewed published articles with just a torrent client. Each editor can have part or all of the papers stored on their desktops and have a torrent tracker to coordinate the delivery of papers without a dedicated server.

    • One aim of this site is to create the infrastructure to allow open access journals to operate at low cost. By facilitating file transfers, the journal can focus on its core mission of providing world class research. After peer review the paper can be indexed on this site and disseminated throughout our system.

    • Large dataset delivery can be supported by researchers in the field that have the dataset on their machine. A popular large dataset doesn’t need to be housed centrally. Researchers can have part of the dataset they are working on and they can help host it together.

    • Libraries can host this data to host papers from their own campus without becoming the only source of the data. So even if a library’s system is broken other universities can participate in getting that data into the hands of researchers.

  • prodigy is an interactive dataset annotator for training classifiers

  • Rdatasets collates the most popular R datasets (see the sketch after this list for pulling one into Python)

  • Reproduce someone else’s results! Figshare hosts the supporting data for many amazing papers. E.g. here’s 1.4 GB of synapses firing.

  • Zenodo is similar: backed by CERN, running on their infrastructure, and hosting many published scientific data sets.
  • Machine learning cult phenomenon Kaggle now does collaborative data set cleaning and publishing: kaggle data sets, such as NOAA weather.
  • IEEE Dataport is free for IEEE members and happily hosts 2 TB datasets. It gives you a DOI and integrates with many IEEE publications, plus allows convenient access via Amazon’s cloud (AWS), which might be where your data is anyway. However, they charge USD 2000 for an open-access version, and otherwise only other IEEE Dataport users can get at your data. I know this is not an unusual way for access to journal articles to work, but for data sets it feels like a ham-fisted way of enforcing scarcity. Not to undercut my own professional society here, but if you can do without a DOI, I will happily upload your data to AWS for you for, say, USD 1500, which will pay for 2 very lucrative hours of my time.

  • Nuit Blanche’s listing of data sets is handy if you want some good inverse-problem signal processing challenges.

  • Social Media Research Toolkit:

    The Social Media Research Toolkit is a list of 50+ social media research tools curated by researchers at the Social Media Lab at Ted Rogers School of Management, Ryerson University.

    So not necessarily data, but the software to get it.

  • datamarket

    e.g. the Time Series Data Library by Rob Hyndman.

  • Datasets on reddit

  • real estate

  • SESHAT:

    The Seshat Global History Databank brings together the most current and comprehensive body of knowledge about human history in one place. Our unique Databank systematically collects what is currently known about the social and political organization of human societies and how civilizations have evolved over time.

  • Quandl has some databases.
  • CRSP has some too? Perhaps accessible to me via Wharton?
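
To give a flavour of how convenient the Rdatasets collection mentioned above is, here is a sketch that pulls one of its data sets into Python via statsmodels’ get_rdataset; the particular data set and package names are just examples.

    from statsmodels.datasets import get_rdataset

    # Fetch the classic AirPassengers series from R's built-in 'datasets'
    # package; the result bundles a pandas DataFrame plus the R help page.
    air = get_rdataset("AirPassengers", package="datasets")
    print(air.data.head())
    print(air.__doc__[:400])  # the accompanying documentation, as plain text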

Collected open data sets at cloud providers

Various providers host data sets conveniently close to their cloud platforms; AWS’s open data registry, for example, exposes many public S3 buckets.
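
A minimal sketch of reading one such open data set anonymously from AWS S3 with boto3; the bucket and prefix are assumptions based on the NOAA GHCN entry in the AWS open data registry, so substitute whichever bucket you are actually after.

    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    # Unsigned (anonymous) S3 client: public open-data buckets need no credentials.
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    # List a few objects under an assumed bucket/prefix from the open data registry.
    resp = s3.list_objects_v2(Bucket="noaa-ghcn-pds", Prefix="csv/by_year/", MaxKeys=10)
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])

    # Pull one object down locally, if you want it.
    # s3.download_file("noaa-ghcn-pds", "csv/by_year/2020.csv", "2020.csv")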

Social network-ey ones

I’m no longer in this area, so I won’t say much on this.

  • Social Science One is the scheme you join to get them to run your experiments on anonymised Facebook data for you. In practice it has not been working.
  • UCI datasets are diverse.

  • Leskovec lab

    467 million Twitter posts from 20 million users covering a 7 month period from June 1 2009 to December 31 2009. We estimate this is about 20-30% of all public tweets published on Twitter during the particular time frame.

    As per request from Twitter the data is no longer available.

    The Higgs dataset has been built after monitoring the spreading processes on Twitter before, during and after the announcement of the discovery of a new particle with the features of the elusive Higgs boson on 4th July 2012. The messages posted in Twitter about this discovery between 1st and 7th July 2012 are considered.

  • Patent citation networks (these are available and reasonably well annotated)

  • Wikipedia articles and their references (readily available)

    • also includes easily-parseable mathematical data and theorems
    • …and edit trails
    • …and category annotations
    • …and semantic metadata
    • probably more data than you can use
  • On that theme, Wikidata attempts to construct a semantic graph of entity relations between things mentioned in, basically, Wikipedia (see the query sketch after this list).

  • source code of large collaborative projects (Linux or BSD kernels, OpenOffice, Python, Perl, GCC, etc.)

    • can I parse such projects to see how interfaces form?
    • Are there odd stylised facts about contribution to these that I might be able to explain?
    • Or call-graphs?
  • Microsoft Academic Knowledge Graph

    We present the Microsoft Academic Knowledge Graph (MAKG), a large RDF data set with over eight billion triples with information about scientific publications and related entities, such as authors, institutions, journals, and fields of study. The data set is based on the Microsoft Academic Graph and licensed under the Open Data Attributions license. Furthermore, we provide entity embeddings for all 210M represented scientific papers.

  • Free-text stuff: Some blog data set?
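
Since Wikidata (mentioned above) exposes its entity graph through a public SPARQL endpoint, here is a minimal query sketch; the query itself, listing items that are instances of house cat, is the standard toy example rather than anything specific to this page.

    import requests

    ENDPOINT = "https://query.wikidata.org/sparql"

    # Items that are an instance of (P31) house cat (Q146), with English labels.
    QUERY = """
    SELECT ?item ?itemLabel WHERE {
      ?item wdt:P31 wd:Q146 .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
    }
    LIMIT 10
    """

    resp = requests.get(
        ENDPOINT,
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "datasets-notebook-example/0.1"},  # the endpoint wants a descriptive UA
    )
    resp.raise_for_status()
    for row in resp.json()["results"]["bindings"]:
        print(row["item"]["value"], row.get("itemLabel", {}).get("value", ""))

The same pattern scales to heavier queries, though the public endpoint enforces timeouts, so bulk extraction is better done against the RDF dumps.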

Point clouds/spatial data

Stashed at 3D data.