Databases


You have to store it somewhere

tl;dr

I do a lot of data processing, and not so much running of websites and such, so mine is not the typical target workload for a database. Here are the most convenient options for my needs, working at a particular, sub-Google scale, where my datasets are a few gigabytes but never a few terabytes (a minimal sketch of the file-based options follows this list).

  • Pre-sharded or non-concurrent-write datasets too big for RAM: some files formatted as hdf5. Annoying schema definition needed, but once you’ve done that it’s fast for numerical data.
  • Pre-sharded or non-concurrent-write datasets: sqlite. Has great tooling, a pity that it’s wasteful for numerical data.
  • Pre-sharded or non-concurrent-write datasets between python and R: feather.
  • Concurrent but not incessant writes (i.e. I just want to manage my processes): Maybe dogpile.cache or joblib?
  • Sometimes I just want to quickly browse a database to see what is in it. See DB UIs.
  • Concurrent frequent writes: redis.
  • I want to share my data: Data sharing
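
For concreteness, a minimal sketch of the file-based options above, assuming h5py, pandas and pyarrow are installed; the file, dataset, table and column names are invented for illustration.

```python
import sqlite3

import h5py
import numpy as np
import pandas as pd

x = np.random.randn(1_000, 64)            # some numerical data

# HDF5: fast for big numerical arrays; the "schema" is just group/dataset names
with h5py.File("sim_results.h5", "w") as f:
    f.create_dataset("runs/x", data=x, compression="gzip")

# feather: a quick way to pass a dataframe between python and R
df = pd.DataFrame(x[:, :3], columns=["a", "b", "c"])
df.to_feather("sim_results.feather")

# sqlite: wasteful for raw numbers, but the tooling is everywhere
with sqlite3.connect("sim_results.db") as con:
    df.to_sql("runs", con, if_exists="replace", index=False)
```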

Maybe one could get a bit of perspective on the tools here from write-ups such as Luc Perkins’s Recent database technology that should be on your radar (part 1).

OK, full notes now:

With a focus on slightly specialised data stores for use in my statistical jiggery-pokery. Which is to say: I care about analysing lots of data fast. This is probably inimical to running, e.g., your webapp from the same store, which has different requirements (massively concurrent writes, consistency guarantees, many small queries instead of a few large ones). Don’t ask me about that.

I prefer to avoid running a database server at all if I can, at least in the sense of a highly specialised multi-client server process; those are not optimised for a typical scientific workflow. First stop is in-process, non-concurrent-write data storage, e.g. HDF5 or sqlite.

However, if you want to mediate between lots of threads/processes/machines updating your data in parallel, a “real” database server can be justified.

OTOH if your data is big enough, perhaps you need a crazy giant distributed store of some kind? Requirements change vastly depending on your scale.

Files

Unless my data is enormous, or I need to write to it concurrently, this is what I want, because

  1. no special server process is required and
  2. migrating data is just copying a file

But how to encode the file? See data format.

Array stores that are not filesystem stores

TileDB.

Time series/Event crunching/Streaming

See databases for realtime use.

Document stores

Want to handle floppy, ill-defined documents with ill-specified, possibly changing metadata? Already resigned to querying and processing this stuff being depressingly slow and/or storage-greedy?

You’re looking for document stores!

If you are looking at document stores as your primary workhorse, as opposed to something you want to get data out of for other storage, then you have

  1. Not much data so performance is no problem, or
  2. a problem, or
  3. a big engineering team.

Let’s assume number 1, which is common.

Mongodb

Mongodb has a pleasant JS API but is not all that good at concurrent storage, so why are you bothering to do this in a document store? If your data is effectively single-writer you could just be doing this from the filesystem. Still, I can imagine scenarios where the dynamic indexing of post hoc metadata is nice, for example in the exploratory phase with a data subset.
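
A hedged sketch of that exploratory, post hoc metadata use case, assuming pymongo and a local mongod; the database, collection and field names are invented.

```python
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
runs = client["experiments"]["runs"]

# Documents carry whatever metadata this particular run happened to produce.
runs.insert_one({"run_id": 17, "seed": 42, "notes": "new prior", "loss": 0.31})

# Index a field you only decided you cared about after the fact.
runs.create_index([("loss", ASCENDING)])
for doc in runs.find({"loss": {"$lt": 0.5}}).sort("loss", ASCENDING):
    print(doc["run_id"], doc["loss"])
```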

Couchdb

Couchdb was the pin-up of the current crop of non-SQL databases, but seems to be unfashionable now.

kinto

kinto “is a lightweight JSON storage service with synchronisation and sharing abilities. It is meant to be easy to use and easy to self-host. Supports fine permissions, easy host-proof encryption, automatic versioning for device sync.”

So this is probably for the smartphone app version.

lmdb

lmdb looks interesting if you want a simple store that just guarantees you can write to it without corrupting data, and without requiring a custom server process. It is most efficient for small records (around 2 KB).
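
A minimal sketch, assuming the lmdb Python bindings; the path and keys are invented. Keys and values are bytes, and small records are where it shines.

```python
import lmdb

env = lmdb.open("cache.lmdb", map_size=2**30)    # reserve up to 1 GiB

with env.begin(write=True) as txn:               # writes are transactional
    txn.put(b"run:17", b'{"loss": 0.31}')

with env.begin() as txn:                         # readers don't block the writer
    print(txn.get(b"run:17"))
```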

Qminer

qminer:

UNSTRUCTURED DATA
QMiner provides support for unstructured data, such as text and social networks across the entire processing pipeline, from feature engineering and indexing to aggregation and machine learning.
SEARCH
QMiner provides out-of-the-box support for indexing, querying and aggregating structured, unstructured and geospatial data using a simple query language.
JAVASCRIPT API
QMiner applications are implemented in JavaScript, making it easy to get started. Using the Javascript API it is easy to compose complete data processing pipelines and integrate with other systems via RESTful web services.
C++ LIBRARY
QMiner is implemented in C++ and can be included as a library into custom C++ projects, thus providing them with stream processing and data analytics capabilities.

Relational databases

Long lists of numbers? Spreadsheet-like tables? Wish to do queries mostly of the sort supported by database engines, such as grouping, sorting and range queries? Sqlite if it fits in memory. (No need to click on that link though, sqlite is already embedded in your tool of choice.) 🏗 how to write safely to sqlite from multiple processes through write locks. Also: Mark Litwintschik’s Minimalist Guide to SQLite.

If not, or if you need to handle concurrent writing by multiple processes, MySQL or Postgres. Not because they are best for this job, but because they are common. Honestly, though, unless this is a live production service for many users, you should probably just be using one of the file-backed options above.
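
On the sqlite write-locks question flagged above, a minimal sketch of one approach, assuming only the standard-library sqlite3 module; the table is invented. WAL journaling plus a busy timeout lets several processes write to the same file without immediately failing on a held lock:

```python
import sqlite3

con = sqlite3.connect("results.db", timeout=30)   # wait up to 30 s for the write lock
con.execute("PRAGMA journal_mode=WAL")             # readers no longer block the writer
con.execute("CREATE TABLE IF NOT EXISTS runs (run_id INTEGER, loss REAL)")

with con:                                           # the block commits (or rolls back)
    con.execute("INSERT INTO runs VALUES (?, ?)", (17, 0.31))
con.close()
```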

Clickhouse, for example, is a columnar database that avoids some of the problems of row-oriented tabular databases. I guess you could try that? And Amazon Athena apparently turns arbitrary data sitting in S3 into SQL-queryable data. So the skills here are general.
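
If you do try Clickhouse, a hedged sketch using the clickhouse-driver package against a local server; the table and columns are invented.

```python
from datetime import datetime

from clickhouse_driver import Client

client = Client("localhost")
client.execute(
    "CREATE TABLE IF NOT EXISTS ticks "
    "(ts DateTime, sym String, px Float64) "
    "ENGINE = MergeTree() ORDER BY (sym, ts)"
)
client.execute(
    "INSERT INTO ticks (ts, sym, px) VALUES",
    [(datetime(2021, 1, 1), "ABC", 1.23)],
)
print(client.execute("SELECT sym, avg(px) FROM ticks GROUP BY sym"))
```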

Accessing RDBMSs from python

Maybe you can make numerical work easier using Blaze?

Blaze translates a subset of modified NumPy and Pandas-like syntax to databases and other computing systems. Blaze allows Python users a familiar interface to query data living in other data storage systems.

More generally, records, which wraps tablib and sqlalchemy, is good at this.
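
A quick sketch of records in action, assuming a sqlite file with an invented table and columns:

```python
import records

db = records.Database("sqlite:///results.db")
rows = db.query("SELECT run_id, loss FROM runs WHERE loss < :cap", cap=0.5)
print(rows.export("csv"))    # tablib export; "df" gives a pandas DataFrame
```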

Distributed stores

Ever since Google, every CS graduate has wanted to write one of these. There are dozens of options; you probably need none of them.

  • Hbase for Hadoop (the original hip open-source one, no longer hip)

  • Voldemort

  • Cassandra

  • Hypertable, a Baidu-sponsored open competitor to Google’s internal Bigtable

  • bedrockdb:

    […] is a networking and distributed transaction layer built atop SQLite, the fastest, most reliable, and most widely distributed database in the world.

    Bedrock is written for modern hardware with large SSD-backed RAID drives and generous RAM file caches, and thereby doesn’t mess with the zillion hacky tricks the other databases do to eke out high performance on largely obsolete hardware. This results in fewer esoteric knobs, and sane defaults that “just work”.

  • datalog seems to be a protocol/language (a Prolog subset) designed for largish stores, with implementations such as datomic getting good press for being scalable. Read this tutorial and explain it to me.

    datomic:

    Build flexible, distributed systems that can leverage the entire history of your critical data, not just the most current state. Build them on your existing infrastructure or jump straight to the cloud.

  • orbitdb: not necessarily giant (I mean, I don’t know how it scales), but convenient for offline/online syncing and definitely distributed; orbitdb uses ipfs for its backend.

    See parallel computing.

Caches

redis and memcached are the default generic choices here. Redis is newer and more flexible. memcached is sometimes faster? Dunno. Perhaps see Why Redis beats Memcached for caching.
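
A minimal caching sketch with the redis-py client against a local server; the key scheme and TTL are invented.

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379)

def cached(key, compute, ttl=3600):
    """Return a cached value, or compute it and store it with an expiry."""
    hit = r.get(key)
    if hit is not None:
        return json.loads(hit)
    value = compute()
    r.setex(key, ttl, json.dumps(value))
    return value

result = cached("posterior:run17", lambda: {"loss": 0.31})
```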

See python caches for the practicalities of doing this in one particular language.

Graph stores

Graph-tuple oriented processing.

graphengine:

GE is also a flexible computation engine powered by declarative message passing. GE is for you, if you are building a system that needs to perform fine-grained user-specified server-side computation.

From the perspective of graph computation, GE is not a graph system specifically optimized for a certain graph operation. Instead, with its built-in data and computation modeling capability, we can develop graph computation modules with ease. In other words, GE can easily morph into a system supporting a specific graph computation.

nebula

Nebula Graph is an open-source graph database capable of hosting super large scale graphs with dozens of billions of vertices (nodes) and trillions of edges, with milliseconds of latency.

Other

immudb

immudb is a lightweight, high-speed immutable database for systems and applications, written in Go. With immudb you can track changes in sensitive data in your transactional databases and then record those changes permanently in a tamperproof immudb database. This allows you to keep an indelible history of sensitive data, for example debit/credit card transactions.

Traditional transaction logs are hard to scale and are mutable. So there is no way to know for sure if your data has been compromised.

As such, immudb provides unparalleled insights retroactively of changes to your sensitive data, even if your perimeter has been compromised. immudb guarantees immutability by using a Merkle tree structure internally.

immudb gives you the same cryptographic verification of the integrity of data written with SHA-256 as a classic blockchain without the cost and complexity associated with blockchains today.