Databases (for scientists)

Stashing experimental data for later use

March 4, 2015 — May 6, 2023

computers are awful
concurrency hell
data sets
Figure 1: You have to store it somewhere

tl;dr I do a lot of data processing, and not so much running of websites and such. This is not the typical target workload for a database, at least not the kind usually imagined at enterprise database focus groups. So, here are some convenient databases for my needs: working at a particular, sub-Google scale, where my datasets are a few gigabytes but never a few terabytes, and capturing stuff like experiment data, data-processing pipelines and that kind of thing. Not covered: processing credit card transactions, running websites, etc.

Short list of things I have used for various purposes:

A point of contact here: Data Lakes, big stores of (usually?) somewhat tabular data which have not yet been normalized into data tables and which are implicitly large. I don’t know much about those, but see, e.g., Introducing Walden.

Maybe one could get a bit of perspective on the tools here from write-ups such as Luc Perkins’s recent database technology that should be on your radar.

1 Too-long bit begins here

With a focus on slightly specialised data stores for use in my statistical jiggery-pokery. Which is to say: I care about analysing lots of data fast. This is probably inimical to running, e.g., your webapp from the same store, which has different requirements (massively concurrent writes, consistency guarantees, many small queries instead of a few large ones). Don’t ask me about that.

I would prefer to avoid running a database server at all if I can, at least in the sense of a highly specialised multi-client server process; those are not optimised for a typical scientific workflow. First stop is in-process, non-concurrent-write data storage, e.g. HDF5 or SQLite.

However, if I want to mediate between lots of threads/processes/machines updating my data in parallel, a “real” database server might be justified.

OTOH if my data is big enough, perhaps I need a crazy giant distributed store of some kind? Requirements change vastly depending on scale.

2 Files

Unless my data is enormous, or I need to write to it concurrently, this is what I want, because

  1. no special server process is required and
  2. migrating data is just copying a file

But how to encode the file? See data formats.
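For example, a minimal sketch of the plain-file workflow using h5py (HDF5 being one of the formats mentioned above); the file and dataset names are invented:

    # Assumes `pip install h5py numpy`; names are illustrative.
    import h5py
    import numpy as np

    samples = np.random.default_rng(0).normal(size=(1000, 3))

    # Write: one ordinary file on disk, no server process involved.
    with h5py.File("experiment_run_01.h5", "w") as f:
        f.create_dataset("samples", data=samples, compression="gzip")
        f.attrs["description"] = "toy experiment output"

    # Read it back; migrating the data to another machine is just copying the file.
    with h5py.File("experiment_run_01.h5", "r") as f:
        loaded = f["samples"][:]
    print(loaded.shape)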

3 Document stores

Want to handle floppy ill-defined documents of ill-specified possibly changing metadata? Already resigned to the process of querying and processing this stuff being depressingly slow and/or storage-greedy? I must be looking for document stores!

If I am looking at document stores as my primary workhorse, as opposed to something I want to get data out of for other storage, then I have

  1. Not much data so performance is no problem, or
  2. a problem, or
  3. a big engineering team.

Let’s assume number 1, which is common for me.

3.1 Mongodb

Mongodb has a pleasant JS API but is notoriously not all that good at concurrent storage. If my data is effectively single-writer I could just be doing this from the filesystem. Still, I can imagine scenarios where the dynamic indexing of post hoc metadata is nice, for example in the exploratory phase with a data subset.
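A sketch of that exploratory use with pymongo, assuming a mongod running locally on the default port; the database, collection and field names are invented:

    # Assumes a mongod on localhost:27017 and `pip install pymongo`.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    runs = client["experiments"]["runs"]  # database and collection names are made up

    # Documents can carry whatever ad hoc metadata each run happened to produce.
    runs.insert_one({"run_id": 1, "optimiser": "adam", "lr": 1e-3, "notes": "baseline"})
    runs.insert_one({"run_id": 2, "optimiser": "sgd", "lr": 1e-2, "batch_size": 64})

    # Index a field post hoc, even though not every document has it.
    runs.create_index("optimiser")

    for doc in runs.find({"optimiser": "adam"}):
        print(doc["run_id"], doc.get("lr"))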

Since it moved to a more restrictive, non-open-source licence, it is worth knowing there is an open-source alternative, FerretDB 1.0 GA.

3.2 Couchdb

Couchdb was the pin-up child of the current crop of non-SQL databases, but seems to be unfashionable rn? There is a big ecosystem of different implementations of the core DB for different purposes, all of which promise to support replication into eventually-consistent clusters.

3.3 kinto

kinto “is a lightweight JSON storage service with synchronization and sharing abilities. It is meant to be easy to use and easy to self-host. Supports fine permissions, easy host-proof encryption, automatic versioning for device sync.”

I can imagine distributed data analysis applications.

4 LevelDB

5 RocksDB

The RocksDB library provides a persistent key value store. Keys and values are arbitrary byte arrays. The keys are ordered within the key value store according to a user-specified comparator function.

The library is maintained by the Facebook Database Engineering Team, and is based on LevelDB, by Sanjay Ghemawat and Jeff Dean at Google.
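A minimal sketch via the third-party python-rocksdb binding; the binding and its options API are an assumption here (I am going from its documentation), and the path and keys are invented:

    # Assumes `pip install python-rocksdb` (third-party binding; API taken from its docs).
    import rocksdb

    db = rocksdb.DB("scratch.rocksdb", rocksdb.Options(create_if_missing=True))

    # Keys and values are raw bytes; keys are kept in sorted order.
    db.put(b"run:0001", b'{"loss": 0.31}')
    db.put(b"run:0002", b'{"loss": 0.27}')

    # Prefix/range scan by seeking an iterator to the start of the range.
    it = db.iterkeys()
    it.seek(b"run:")
    for key in it:
        if not key.startswith(b"run:"):
            break
        print(key, db.get(key))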

5.1 lmdb

lmdb looks interesting.

LMDB is a tiny database with some excellent properties:

  • Ordered map interface (keys are always lexicographically sorted).
  • Reader/writer transactions: readers don’t block writers, writers don’t block readers. Each environment supports one concurrent write transaction.
  • Read transactions are extremely cheap.
  • Environments may be opened by multiple processes on the same host, making it ideal for working around Python’s GIL.
  • Multiple named databases may be created with transactions covering all named databases.
  • Memory mapped, allowing for zero copy lookup and iteration. This is optionally exposed to Python using the buffer() interface.
  • Maintenance requires no external process or background threads.
  • No application-level caching is required: LMDB fully exploits the operating system’s buffer cache.

Most efficient for small records (under about 2 KB).

Records are grouped into pages matching the operating system’s VM page size, which is usually 4096 bytes. Each page must contain at least 2 records, in addition to 8 bytes per record and a 16 byte header. Due to this the engine is most space-efficient when the combined size of any (8+key+value) combination does not exceed 2040 bytes.

When an attempt to store a record would exceed the maximum size, its value part is written separately to one or more dedicated pages. Since the trailer of the last page containing the record value cannot be shared with other records, it is more efficient when large values are an approximate multiple of 4096 bytes, minus 16 bytes for an initial header.
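A minimal sketch with the py-lmdb binding quoted above; the path and map size are invented:

    # Assumes `pip install lmdb`; path and map_size are illustrative.
    import lmdb

    env = lmdb.open("scratch.lmdb", map_size=2**30)  # reserve up to ~1 GiB of address space

    # One write transaction at a time; readers do not block it.
    with env.begin(write=True) as txn:
        txn.put(b"run:0001", b'{"loss": 0.31}')
        txn.put(b"run:0002", b'{"loss": 0.27}')

    # Cheap read transaction; keys come back in lexicographic order.
    with env.begin() as txn:
        for key, value in txn.cursor():
            print(key, value)

    env.close()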

5.2 Qminer


qminer

  • Unstructured data: QMiner provides support for unstructured data, such as text and social networks, across the entire processing pipeline, from feature engineering and indexing to aggregation and machine learning.
  • Search: QMiner provides out-of-the-box support for indexing, querying and aggregating structured, unstructured and geospatial data using a simple query language.
  • JavaScript API: QMiner applications are implemented in JavaScript, making it easy to get started. Using the JavaScript API it is easy to compose complete data processing pipelines and integrate with other systems via RESTful web services.
  • C++ library: QMiner is implemented in C++ and can be included as a library in custom C++ projects, thus providing them with stream processing and data analytics capabilities.

5.3 Berkeley DB

Berkeley DB is a venerable key-value store that is no longer fashionable. However, it is efficient for storing binary data and supports multi-process concurrency via lock files, all without a server process. As such it may be useful for low-fuss HPC data storage and processing. There are, e.g., python bindings.
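A sketch using the legacy dict-style API from the bsddb3 bindings (the newer package is called berkeleydb); the file name and keys are invented:

    # Assumes `pip install bsddb3` (or its successor, `berkeleydb`); names are illustrative.
    import bsddb3

    db = bsddb3.btopen("results.bdb", "c")  # "c": create if missing; B-tree keeps keys sorted

    db[b"run:0001"] = b'{"loss": 0.31}'
    db[b"run:0002"] = b'{"loss": 0.27}'

    for key in db.keys():
        print(key, db[key])

    db.close()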

6 Relational databases

Long lists of numbers? Spreadsheet-like tables? Wish to do queries mostly of the sort supported by database engines, such as grouping, sorting and range queries? First stop is SQLite, if it fits in memory, in the sense that the part I am mostly using mostly fits in memory. Note that if I have tabular data but do not particularly wish to perform diverse RDBMS-style queries, then I should just use HDF5 or some other simple disk data store.
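For instance, a minimal sqlite3 sketch of the kind of query I mean; the table and column names are invented:

    # Standard library only; table and column names are made up.
    import sqlite3

    con = sqlite3.connect("trials.sqlite")
    con.execute("CREATE TABLE IF NOT EXISTS trials (method TEXT, seed INTEGER, loss REAL)")
    con.executemany(
        "INSERT INTO trials VALUES (?, ?, ?)",
        [("adam", 1, 0.31), ("adam", 2, 0.29), ("sgd", 1, 0.40)],
    )
    con.commit()

    # Grouping, sorting and range queries: the things an RDBMS is actually for.
    for row in con.execute(
        "SELECT method, AVG(loss) FROM trials WHERE loss < 0.5 GROUP BY method ORDER BY 2"
    ):
        print(row)
    con.close()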

🏗 how to write safely to sqlite from multiple processes with write locks, worked examples. Also: See Mark Litwintschik’s Minimalist Guide to SQLite.
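In the meantime, a hedged sketch of the usual ingredients for multi-process writes, namely WAL mode plus a generous busy timeout, reusing the invented trials table from above; this is not a complete worked example:

    # WAL mode lets readers proceed while one writer holds the lock;
    # the busy timeout makes blocked writers wait instead of erroring out.
    import sqlite3

    con = sqlite3.connect("trials.sqlite", timeout=30.0)  # seconds to wait on a locked DB
    con.execute("PRAGMA journal_mode=WAL")
    con.execute("PRAGMA busy_timeout=30000")              # milliseconds, belt and braces

    # Each process keeps its write transactions short.
    with con:
        con.execute("INSERT INTO trials VALUES (?, ?, ?)", ("adam", 3, 0.28))
    con.close()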

If not, or if I need to handle concurrent writing by multiple processes, we need one of the classic RDBMS servers, e.g. MySQL or Postgres. Scientific use cases are not usually like this; we are not usually concurrently generating lots of data.

Interestingly if we delete stuff from SQLite it can be relatively easy to recover. See Undark - a SQLite deleted data recovery tool / corrupt database data recovery tool. See also: Make sure Delete is Delete.
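If that matters, the relevant knobs are the secure_delete pragma and VACUUM; a quick sketch, again on the invented trials table:

    # Overwrite deleted content with zeros, then rebuild the file so freed pages are dropped.
    import sqlite3

    con = sqlite3.connect("trials.sqlite")
    con.execute("PRAGMA secure_delete = ON")
    con.execute("DELETE FROM trials WHERE method = 'sgd'")
    con.commit()
    con.execute("VACUUM")
    con.close()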

6.1 Mysql

TBD

6.2 MariaDB

(Source: MariaDB/server.)

MariaDB server is a community developed fork of MySQL server. Started by core members of the original MySQL team, MariaDB actively works with outside developers to deliver the most featureful, stable, and sanely licensed open SQL server in the industry

6.3 postgresql

Classic. What I tended to use because it has powerful embedded scripting and good support for spatial data.

6.4 openGauss

Huawei’s Postgres fork is openGauss.

6.5 Accessing RDBMSs from python

Maybe we can make numerical work easier using Blaze?

Blaze translates a subset of modified NumPy and Pandas-like syntax to databases and other computing systems. Blaze allows Python users a familiar interface to query data living in other data storage systems.

More generally, records, which wraps tablib and sqlalchemy, is good at this.

Julia Evans points out sqlite-utils, a tool that magically converts JSON to sqlite.
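For instance, a sketch of that JSON-to-sqlite trick with sqlite-utils; the records here are invented:

    # Assumes `pip install sqlite-utils`; the rows are made up.
    import sqlite_utils

    db = sqlite_utils.Database("runs.db")

    rows = [
        {"run_id": 1, "optimiser": "adam", "loss": 0.31},
        {"run_id": 2, "optimiser": "sgd", "loss": 0.40, "batch_size": 64},
    ]

    # The table and its columns are created from the dict keys; missing keys become NULLs.
    db["runs"].insert_all(rows, pk="run_id", alter=True)

    print(db["runs"].count)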

6.6 Dolt

Also covered under data versioning

Dolt is a SQL database that you can fork, clone, branch, merge, push and pull just like a git repository. Connect to Dolt just like any MySQL database to run queries or update the data using SQL commands. Use the command line interface to import CSV files, commit your changes, push them to a remote, or merge your teammate’s changes.

All the commands you know for Git work exactly the same for Dolt. Git versions files, Dolt versions tables. It’s like Git and MySQL had a baby.

We also built DoltHub, a place to share Dolt databases. We host public data for free. If you want to host your own version of DoltHub, we have DoltLab. If you want us to run a Dolt server for you, we have Hosted Dolt.
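Since it speaks the MySQL wire protocol, querying it from Python is the usual MySQL dance; a sketch assuming a local dolt sql-server with default settings and the pymysql driver (connection details and database name are invented):

    # Assumes `dolt sql-server` running locally with defaults and `pip install pymysql`.
    import pymysql

    conn = pymysql.connect(host="127.0.0.1", port=3306, user="root", database="my_dolt_db")
    with conn.cursor() as cur:
        # Dolt exposes version-control state as system tables, e.g. the commit log.
        cur.execute("SELECT * FROM dolt_log LIMIT 5")
        for row in cur.fetchall():
            print(row)
    conn.close()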

7 Distributed stores

Ever since Google, every CS graduate has wanted to write one of these. There are dozens of options; you probably need none of them.

I have used none of them and only mention them here to keep them straight in my head.

  • Hbase for Hadoop (original hip open source one, no longer hip)

  • Voldemort

  • Cassandra

  • Hypertable is an open-source implementation of Google’s internal Bigtable design, with Baidu as its main sponsor

  • bedrockdb:

    […] is a networking and distributed transaction layer built atop SQLite, the fastest, most reliable, and most widely distributed database in the world.

    Bedrock is written for modern hardware with large SSD-backed RAID drives and generous RAM file caches, and thereby doesn’t mess with the zillion hacky tricks the other databases do to eke out high performance on largely obsolete hardware. This results in fewer esoteric knobs, and sane defaults that “just work”.

  • datalog is a declarative logic query language (roughly, a restricted Prolog) rather than a store itself; databases built around it, such as datomic, get good press for being scalable. Read this tutorial and explain it to me.

    datomic:

    Build flexible, distributed systems that can leverage the entire history of your critical data, not just the most current state. Build them on your existing infrastructure or jump straight to the cloud.

  • orbitdb: not necessarily giant (I mean, I don’t know how it scales) but convenient for offline/online syncing and definitely distributed; orbitdb uses ipfs for its backend.

8 Caches

redis and memcached are the default generic choices here. Redis is newer and more flexible. memcached is sometimes faster? Dunno. Perhaps see Why Redis beats Memcached for caching.
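A sketch of the basic caching move with the redis-py client, assuming a Redis server on the default port; the key naming and TTL are invented:

    # Assumes a Redis server on localhost:6379 and `pip install redis`.
    import json
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def expensive_summary(dataset_id: str) -> dict:
        key = f"summary:{dataset_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)
        result = {"dataset": dataset_id, "mean": 0.0}  # stand-in for the slow computation
        r.setex(key, 3600, json.dumps(result))         # cache it, expiring after an hour
        return result

    print(expensive_summary("toy"))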

See python caches for the practicalities of doing this for one particular language.

9 Graph stores

Graph-tuple oriented processing.

graphengine:

GE is also a flexible computation engine powered by declarative message passing. GE is for you, if you are building a system that needs to perform fine-grained user-specified server-side computation.

From the perspective of graph computation, GE is not a graph system specifically optimized for a certain graph operation. Instead, with its built-in data and computation modelling capability, we can develop graph computation modules with ease. In other words, GE can easily morph into a system supporting a specific graph computation.

nebula

Nebula Graph is an open-source graph database capable of hosting super large-scale graphs with dozens of billions of vertices (nodes) and trillions of edges, with milliseconds of latency.

There are a lot more of these. Everyone is inventing new graph stores at the moment.

10 Other

immudb

immudb is a lightweight, high-speed immutable database for systems and applications, written in Go. With immudb you can track changes in sensitive data in your transactional databases and then record those changes permanently in a tamperproof immudb database. This allows you to keep an indelible history of sensitive data, for example debit/credit card transactions.

Traditional transaction logs are hard to scale and are mutable. So there is no way to know for sure if your data has been compromised.

As such, immudb provides unparalleled insights retroactively of changes to your sensitive data, even if your perimeter has been compromised. immudb guarantees immutability by using a Merkle tree structure internally.

immudb gives you the same cryptographic verification of the integrity of data written with SHA-256 as a classic blockchain without the cost and complexity associated with blockchains today.

11 Array stores that are not filesystem stores

Luc Perkins mentions

TileDB is a DB built around multi-dimensional arrays that enables you to easily work with types that aren’t a great fit for existing RDBMS systems, such as dense and sparse arrays and dataframes. TileDB is specifically geared toward use cases like genomics and geospatial data.
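A small sketch with the TileDB Python package; the array URI is invented and I am only exercising the dense-array basics:

    # Assumes `pip install tiledb numpy`; the array URI is made up.
    import numpy as np
    import tiledb

    data = np.arange(12, dtype=np.float64).reshape(3, 4)

    # Create a dense on-disk array directly from a NumPy array.
    tiledb.from_numpy("my_dense_array", data)

    # Read back an arbitrary slice without loading the whole array.
    with tiledb.open("my_dense_array") as arr:
        print(arr[0:2, 1:3])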

11.1 Noteworthy features

12 Vectors

See vector databases.

13 Time series/Event crunching/Streaming

See databases for realtime use.

14 Interesting experiments

14.1 noisepage

Self-learning database: NoisePage

NoisePage is a relational database management system (DBMS) designed from the ground up for autonomous deployment. It uses integrated machine learning components to control its configuration, optimization, and tuning. The system will support automated physical database design (e.g., indexes, materialized views, sharding), knob configuration tuning, SQL tuning, and hardware capacity/scaling. Our research focuses on building the system components that support such self-driving operation with little to no human guidance.

I believe that it does not work yet.

14.2 Logica

logica (source)

Logica is for engineers, data scientists and other specialists who want to use logic programming syntax when writing queries and pipelines to run on BigQuery.

Logica compiles to StandardSQL and gives you access to the power of BigQuery engine with the convenience of logic programming syntax. This is useful because BigQuery is magnitudes more powerful than state of the art native logic programming engines.

We encourage you to try Logica, especially if

  • you already use logic programming and need more computational power, or
  • you use SQL, but feel unsatisfied about its readability, or
  • you want to learn logic programming and apply it to processing of Big Data.

In the future we plan to support more SQL dialects and engines.

15 Incoming

Clickhouse, for example, is a columnar database that avoids some of the problems of row-oriented tabular databases. I guess you could try that? And Amazon Athena turns arbitrary data into SQL-queryable data, apparently. So the skills here are general.

Columnar in-process DB: