Browse the internet for me

Introducing modern automation to the ancient crafts of serfing the web and tilling the clickfarm

The attention economy of late capitalism demands I spend time clicking on a browser window to do things, rather than automating the drudgery like we thought we were going to have all worked out by now.

This should be easier! Surely I can automate it?

Maybe. In some cases, our benevolent social media overlords have blessed certain types of automation service. For everything else, you need to hack the browser to automate stuff. Working out whether this violates the terms and conditions of the particular website you are using is your responsibility. I am not qualified to advise on that.

Here are some options of increasing complexity.

I need to snapshot a web page


wkhtmltopdf and wkhtmltoimage are open source (LGPLv3) command line tools to render HTML into PDF and various image formats using the Qt WebKit rendering engine. These run entirely “headless” and do not require a display or display service.
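Since they are plain command-line tools, you can drive them from a script. A minimal sketch, assuming wkhtmltopdf/wkhtmltoimage are installed and on your PATH (the `--quiet` flag suppresses progress chatter):

```python
import shutil
import subprocess

def snapshot_cmd(url: str, out_path: str) -> list[str]:
    """Build the command line for snapshotting one page.

    Picks wkhtmltoimage for image outputs, wkhtmltopdf otherwise.
    """
    tool = "wkhtmltoimage" if out_path.endswith((".png", ".jpg")) else "wkhtmltopdf"
    return [tool, "--quiet", url, out_path]

def snapshot(url: str, out_path: str) -> None:
    """Run the snapshot, failing loudly if the tool is not installed."""
    cmd = snapshot_cmd(url, out_path)
    if shutil.which(cmd[0]) is None:
        raise RuntimeError(f"{cmd[0]} not found on PATH")
    subprocess.run(cmd, check=True)
```

e.g. `snapshot("https://example.com", "example.pdf")` for a PDF, or point it at a `.png` path for an image.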

I need to download a thing from one social network and post it to my blog or whatever

I likely don’t need a web browser for that; I could just use their API automation service.

Of course, Facebook probably doesn’t rank this as highly as something manually uploaded, that you obediently stared at advertisements while writing, so it’s up to you whether it’s worth your time being a clickmonkey for them.

I want to get data from some public web site with minimal pain

Scrapy is a Python library for exactly that. The companion project scrapy-rss converts the scraped results into RSS feeds.
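Scrapy earns its keep once you need crawling, retries, throttling and item pipelines; for a single page, the standard library alone gets you surprisingly far. A minimal sketch of the same extraction idea, no Scrapy required:

```python
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collect (href, text) pairs from anchor tags in an HTML page."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def scrape_links(html: str) -> list[tuple[str, str]]:
    parser = LinkScraper()
    parser.feed(html)
    return parser.links
```

Fetch the page first with e.g. `urllib.request.urlopen(url).read().decode()`, then feed it to `scrape_links`. Once you want many pages, politeness delays, or structured item pipelines, switch to Scrapy proper.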

There is also a commercial cloud service (Scrapinghub) that will deploy it for you at massive scale if you want.

Scrapoxy automates deploying a farm of cloud proxies for this purpose.


No, but it is a complicated one from a hostile walled garden! I need to go in using my browser!

Oh dear you aren’t trying to fake being on social media for weaponised mass opinion inception are you? Well, at least that pays, I hope.

At this point in history, where we are using billions of dollars of technological infrastructure to perform ritual social behaviour, I find I’d prefer to just pick lice out of the pelts of my audience the old-fashioned way. But maybe this is not an option for you? If so, here is some stuff I read before realising I wasn’t being paid enough.

There are some good tips in karlicoss’s post on data liberation.

Turns out you can automate your local Firefox to do this in an easy way, if not a scalable one, thanks to Ian Bicking. If you want something more full-featured, read on.


In Omar Rizwan’s TabFS, each of your open tabs is mapped to a folder: e.g. if I have 3 tabs open, they map to 3 folders in TabFS.

The files inside a tab’s folder directly reflect (and can control) the state of that tab in your browser.

This gives you a ton of power, because now you can apply all the existing tools on your computer that already know how to deal with files — terminal commands, scripting languages, point-and-click explorers, etc — and use them to control and communicate with your browser.

Now you don’t need to code up a browser extension from scratch every time you want to do anything. You can write a script that talks to your browser in, like, a melange of Python and bash, and you can save it as a single ordinary file that you can run whenever, and it's no different from scripting any other part of your computer.
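For instance, once TabFS is mounted, listing your open tabs is just filesystem traversal. A sketch, assuming the layout described in the TabFS README (`tabs/by-id/<id>/title.txt` and `url.txt` under the mount point; adjust to wherever you mounted it):

```python
from pathlib import Path

def list_tabs(mount: Path) -> list[tuple[str, str]]:
    """Read (title, url) for every open tab from a TabFS mount.

    Layout assumed from the TabFS README: tabs/by-id/<id>/{title.txt,url.txt}.
    """
    tabs = []
    for tab_dir in sorted((mount / "tabs" / "by-id").iterdir()):
        title = (tab_dir / "title.txt").read_text().strip()
        url = (tab_dir / "url.txt").read_text().strip()
        tabs.append((title, url))
    return tabs
```

Because tabs are just files, the same traversal works from bash, a file manager, or anything else that speaks filesystem.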


NickJS is a JavaScript library for browsing automation. If this is something you are doing for money, it might be worth your while paying Phantombuster to host NickJS for you. See their explanatory blog post. (TODO check security guarantees)


Chromeless, an automation layer for headless Chrome, seems to be a hip thing here for certain types of automation. And it has various easy cloud-deployment options.


Browserless is containerized browsers with an API, I think.

browserless is a web-service that allows for remote clients to connect, drive, and execute headless work; all inside of docker. It offers first-class integrations for puppeteer, selenium’s webdriver, and a slew of handy REST APIs for doing more common work. On top of all that it takes care of other common issues such as missing system-fonts, missing external libraries, and performance improvements. We even handle edge-cases like downloading files, managing sessions, and have a fully-fledged documentation site.


Selenium is a browser testing and automation tool that can automate real work on the web; the protocol it uses is called WebDriver. An example of this might be automating the download of your bank statements in a usable form. But how can one automate its deployment, and a bunch of user credentials, with some degree of security and yet the absolute minimum of thought or effort? I do not yet know. To be continued if absolutely necessary.
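A minimal sketch of the bank-statement idea with the Selenium Python bindings. The element ids and the `BANK_USER`/`BANK_PASS` variable names are hypothetical; the one security gesture here is pulling credentials from the environment instead of hardcoding them:

```python
import os

def get_credential(name: str) -> str:
    """Pull a secret from the environment rather than hardcoding it in the script."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"set {name} in the environment (or use a real secret store)")
    return value

def download_statements(login_url: str) -> None:
    # Imported lazily so the credential helper works without Selenium installed.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get(login_url)
        # The element ids below are hypothetical; inspect your bank's login page.
        driver.find_element(By.ID, "username").send_keys(get_credential("BANK_USER"))
        driver.find_element(By.ID, "password").send_keys(get_credential("BANK_PASS"))
        driver.find_element(By.ID, "login-button").click()
        # ... then navigate to the statements page and click the download links ...
    finally:
        driver.quit()
```

This gets credentials out of the script file, but a proper secret store (your OS keychain, say) would be better still; it does nothing about the deployment question.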

  • guru99’s tutorials on this

  • testing a facebook application using selenium

  • Facebook login with selenium

  • webdriver docs

  • helium

    Under the hood, Helium forwards each call to Selenium. The difference is that Helium’s API is much more high-level. In Selenium, you need to use HTML IDs, XPaths and CSS selectors to identify web page elements. Helium on the other hand lets you refer to elements by user-visible labels. As a result, Helium scripts are typically 30-50% shorter than similar Selenium scripts. What’s more, they are easier to read and more stable with respect to changes in the underlying web page.

    Because Helium is simply a wrapper around Selenium, you can freely mix the two libraries. For example:

    from helium import *
    # A Helium function:
    driver = start_chrome()
    # A Selenium API:
    driver.execute_script("alert('Hi!');")
    So in other words, you don’t lose anything by using Helium over pure Selenium.

  • SeLite

SeLite automates browser navigation and testing. It extends Selenium. It

  • improves Selenium (API, syntax and visual interface),
  • enables reuse,
  • supports reporting and interaction, […]

SeLite enables DB-driven navigation with SQLite.
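SeLite itself lives inside Firefox, but the DB-driven pattern is easy to sketch in plain Python with the standard library’s sqlite3: keep the URLs (or test data) your automation should visit in a table, and iterate over it from whatever driver you use. The table name and schema here are illustrative, not SeLite’s:

```python
import sqlite3

def targets_db(path: str = ":memory:") -> sqlite3.Connection:
    """Open (or create) the SQLite database holding navigation targets."""
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS targets (url TEXT NOT NULL)")
    return conn

def load_targets(conn: sqlite3.Connection) -> list[str]:
    """Return target URLs in insertion order, ready to feed to your automation loop."""
    return [row[0] for row in conn.execute("SELECT url FROM targets ORDER BY rowid")]
```

The payoff is that editing the database re-drives the automation without touching the script.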

You might also get some mileage out of MozRepl, a remote REPL for Firefox.


A commercial offering for Windows, scripting your browser for e.g. data extraction. USD 99 to USD 995, depending on features desired.