Text data processing

Getting data into a text-like format is an entry pass to a whole world of weird tools for managing and processing it, from the command line no less.


Data Cleaner’s Cookbook explicates dataframe-style processing by laundering data through CSV/TSV with command-line fu. Fz mentions various tools, including the CSV munger xsv.


Here are some popular tools, starting with classics and moving on to workalikes.

  • sed and awk (they work in practice, but I spend half my time fighting with the syntax, and these days I simply get a language model to write the scripts for me)

  • sd, “Intuitive find & replace CLI (sed alternative)”

  • angle-grinder: Slice and dice logs on the command line

    Angle-grinder allows you to parse, aggregate, sum, average, min/max, percentile, and sort your data. You can see it, live-updating, in your terminal. Angle grinder is designed for when, for whatever reason, you don't have your data in graphite/honeycomb/kibana/sumologic/splunk/etc. but still want to be able to do sophisticated analytics.
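To make the sed/awk half of this concrete, here is a minimal sketch of the bread-and-butter idioms, over an invented three-line mini web log, using nothing beyond POSIX awk, sed, and sort:

```shell
# Tally the third whitespace-separated field (status code) with awk,
# then sort so the output order is deterministic.
printf '%s\n' \
  'GET /index 200' \
  'GET /about 404' \
  'POST /login 200' |
awk '{ counts[$3]++ } END { for (c in counts) print c, counts[c] }' |
sort

# And the matching sed idiom: a plain global substitution.
printf 'colour flavour\n' | sed 's/our/or/g'
# prints: color flavor
```

This is exactly the class of one-liner that is easy to delegate to a language model and then eyeball for correctness.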


VisiData is an interactive multitool for tabular data. It combines the clarity of a spreadsheet, the efficiency of the terminal, and the power of Python, into a lightweight utility which can handle millions of rows with ease.

It supports a stupendous number of formats, including various databases.

jq and jid

jq allows one to parse JSON instead of TSV. It claims to be “like sed for JSON data — you can use it to slice and filter and map and transform structured data with the same ease that sed, awk, grep and friends let you play with text.”
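A small sketch of the slicing and filtering that quote describes, over an invented two-record array (assumes jq is on the PATH):

```shell
# Filter an array of objects by a numeric field and pull out one key;
# -r emits raw strings instead of JSON-quoted ones.
echo '[{"name":"ada","hits":3},{"name":"bob","hits":5}]' |
jq -r '.[] | select(.hits > 4) | .name'
# prints: bob
```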

See also

  • jid
  • ijq, an “interactive jq”.

xidel and other HTML parsers

  • xidel: Xidel is a command line tool to download and extract data from HTML/XML pages using CSS selectors, XPath/XQuery 3.0, as well as querying JSON files or APIs (e.g. REST) using JSONiq.

See also

  • tq - Perform a lookup by CSS selector on an HTML input.
  • hq - Powerful command-line tool for slicing & dicing HTML.


yq aspires to be “the jq or sed of yaml files.” YAML is a superset of JSON, so I guess this gets you everything?


This seems like it should also be a strong suit of PowerShell, the structured-data-processing shell, and indeed PowerShell does support JSON parsing, for example.


nushell claims to subsume most of the others into a full shell environment which is also a data-processing environment/functional programming language. It has interesting features such as treating filesystem subfolders and nested data sets in the same paradigm, and native support for many data types.

Nu draws inspiration from projects like PowerShell, functional programming languages, and modern CLI tools. Rather than thinking of files and services as raw streams of text, Nu looks at each input as something with structure. For example, when you list the contents of a directory, what you get back is a table of rows, where each row represents an item in that directory. These values can be piped through a series of steps, in a series of commands called a ‘pipeline’.


Julia Evans points out sqlite-utils, a tool that magically converts JSON to SQLite.


pxi has a cute, nerdy introduction. It is a fast way of executing tiny JavaScript snippets over streaming data. Sometimes just examining the data is enough; one can use pretty-print-json for that.


For a quick bit of data conversion with some JavaScript processing in the middle, the open-source web app d2d is useful.


fx is another JSON processor, whose remarkable feature is a clickable interactive mode.


awk

A classic Unix tool for text data processing. It’s fine. Ubiquitous. But not intuitive or luxurious like a modern programming language. It does simple CSV well, but I would be wary of pointing it at more structured formats such as JSON.
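A minimal awk sketch of that CSV comfort zone, on an invented two-row sample (quoted fields with embedded commas would break this, which is where the limits show):

```shell
# Sum the qty column of a simple comma-separated file, skipping the header.
printf 'item,qty\napples,3\npears,4\n' |
awk -F, 'NR > 1 { total += $2 } END { print total }'
# prints: 7
```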


Consider also, perhaps, tab … a modern text processing language that’s similar to awk in spirit. (But not similar in implementation or syntax.) Highlights:

  • Designed for concise one-liner aggregation and manipulation of tabular text data…
  • Feature-rich enough to support even very complex queries. (Also includes a good set of mathematical operations.)
  • Statically typed, type-inferred, declarative.


vgrep is a command-line text search that opens up matches in a text editor.

ripgrep:

ripgrep is a line-oriented search tool that recursively searches your current directory for a regex pattern. By default, ripgrep will respect your .gitignore and automatically skip hidden files/directories and binary files

It can also search compressed files, using the -z option.

  • fzf, a command-line “fuzzy finder” that a few people suggested.
  • ag, the “silver searcher”: “Fast ack.” / Geoff Greer’s site: The Silver Searcher
  • Gron, a tool for making JSON greppable.


  • HTTPie, a CURL-adjacentish command-line HTTP client for testing and debugging web APIs.
  • dyff: diff for yaml.
  • csvkit: if you spend a lot of time working with comma-separated values, accept no substitutes.
  • miller, “Miller is like awk, sed, cut, join, and sort for name-indexed data such as CSV, TSV, and tabular JSON”
  • Datamash: “GNU datamash is a command-line program which performs basic numeric, textual and statistical operations on input textual data files.”
  • xq - Like jq, but for XML and XPath.
