Text data processing



Getting data into a text-like format is an entry pass to a whole world of weird tools for managing and processing it, from the command line no less.

General

Data Cleaner’s Cookbook explicates dataframe processing by laundering data through CSV/TSV and applying command-line fu. Fz mentions various tools, including the CSV munger xsv.

Munging

Here are some popular tools.

sqlite-utils

Julia Evans points out sqlite-utils, a tool that magically converts JSON to SQLite.

Visidata

VisiData is an interactive multitool for tabular data. It combines the clarity of a spreadsheet, the efficiency of the terminal, and the power of Python, into a lightweight utility which can handle millions of rows with ease.

It supports a stupendous number of formats, including various databases.

jq and jid

jq allows one to parse JSON rather than TSV. It claims to be “like sed for JSON data — you can use it to slice and filter and map and transform structured data with the same ease that sed, awk, grep and friends let you play with text.”
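For instance, a minimal sketch of slicing and filtering (the field names here are invented):

```shell
# Select the name of each item whose price exceeds 10,
# emitting one JSON string per line.
echo '[{"name":"tea","price":12},{"name":"milk","price":3}]' \
  | jq '.[] | select(.price > 10) | .name'
# → "tea"
```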

See also

  • jid and
  • ijq, an “interactive jq”.

xidel and other HTML parsers

  • xidel: Xidel is a command line tool to download and extract data from HTML/XML pages using CSS selectors, XPath/XQuery 3.0, as well as querying JSON files or APIs (e.g. REST) using JSONiq.

See also

  • tq - Perform a lookup by CSS selector on an HTML input.
  • hq - Powerful command-line tool for slicing & dicing HTML.

yq

yq aspires to be “the jq or sed of yaml files.” YAML is a superset of JSON, so I guess this gets you everything?

PowerShell

Structured data processing seems like it should also be a strong suit of PowerShell, and indeed PowerShell does support JSON parsing, for example.

Nushell

nushell claims to subsume most of the others into a full shell environment which is also a data-processing environment/functional programming language. It has interesting features such as treating filesystem subfolders and nested data sets within the same paradigm, and native support for many data types.

Nu draws inspiration from projects like PowerShell, functional programming languages, and modern CLI tools. Rather than thinking of files and services as raw streams of text, Nu looks at each input as something with structure. For example, when you list the contents of a directory, what you get back is a table of rows, where each row represents an item in that directory. These values can be piped through a series of steps, in a series of commands called a ‘pipeline’.

pxi

pxi has a cute, nerdy introduction; it is a fast way of executing tiny JavaScript snippets over streaming data. Sometimes just examining the data is enough; one can use pretty-print-json for that.

d2d

For a quick bit of data conversion with some JavaScript processing in the middle, the open-source web app d2d is useful.

fx

fx is another JSON processor whose most remarkable feature is a clickable interactive mode.

awk

A classic Unix tool for text data processing. It’s fine. Ubiquitous. But not intuitive or luxurious like a modern programming language. It handles CSV well, but I would be wary of more structured formats such as JSON.
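For example, summing a column of a simple CSV (a sketch with made-up data; real-world CSVs with quoted commas need more care than a plain field split):

```shell
# Skip the header row, then sum the second (price) column.
printf 'item,price\ntea,12\nmilk,3\n' \
  | awk -F, 'NR > 1 { total += $2 } END { print total }'
# → 15
```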

tab

Consider also, perhaps, tab … a modern text processing language that’s similar to awk in spirit. (But not similar in implementation or syntax.) Highlights:

  • Designed for concise one-liner aggregation and manipulation of tabular text data…
  • Feature-rich enough to support even very complex queries. (Also includes a good set of mathematical operations.)
  • Statically typed, type-inferred, declarative.

Searching

vgrep is a command-line text search that opens up matches in a text editor.

ripgrep

ripgrep is a line-oriented search tool that recursively searches your current directory for a regex pattern. By default, ripgrep will respect your .gitignore and automatically skip hidden files/directories and binary files.

It also can search compressed files using the -z option.

  • fzf, a command-line “fuzzy finder” that a few people suggested.
  • ag, the “silver searcher”, a “fast ack”. /Geoff Greer’s site: The Silver Searcher
  • Gron, a tool for making JSON greppable.

Incoming

  • HTTPie, a CURL-adjacentish command-line HTTP client for testing and debugging web APIs.
  • dyff: diff for yaml.
  • csvkit: if you spend a lot of time working with comma-separated values, accept no substitutes.
  • miller, “Miller is like awk, sed, cut, join, and sort for name-indexed data such as CSV, TSV, and tabular JSON”
  • Datamash: “GNU datamash is a command-line program which performs basic numeric, textual and statistical operations on input textual data files.”
  • xq - Like jq, but for XML and XPath.
