Hosted Functions / Serverless

2026-04-05 — 2026-04-05

Wherein functions are submitted to a platform, invoked upon triggering, and charged per execution, with idle server costs being traded for the occasional latency of cold starts.

compsci
computers are awful together
doing internet
faster pussycat

Running code in the cloud without (explicitly) managing servers.

In ye olde tymes, I’d rent a VM (or a whole machine), install my OS, configure my runtime, set up monitoring, worry about security patches, and pay for it 24/7 whether anyone was using it or not. Modern provisioning tries to make life easier. We can say cool phrases like devops — which AFAICT is short for “containerization means we can at least package an environment reproducibly and deploy it across providers without starting from scratch each time.” But we’re still doing way more stuff than I want to know about for my modest internet automation needs, usually.

Serverless platforms abstract even more stuff away: we hand them a function, they run it when something triggers it, and we pay per invocation without sweating the details of the underlying infrastructure. No capacity planning, no idle costs, and scaling happens automatically. And I don’t need to spend my life checking whether that VM I built is still running.
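Concretely, the unit of deployment is often just one handler function. Here is a sketch in the style of AWS Lambda’s Python runtime; the event shape is made up for illustration, since real events depend on the trigger:

```python
import json

def handler(event, context):
    """Entry point the platform calls on each trigger.

    `event` carries the trigger payload (HTTP request, queue message, ...)
    and `context` carries platform metadata. We never touch a server:
    the platform decides where and when this runs, and scales it.
    """
    # Hypothetical payload field, just to show data flowing through.
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Everything outside the function (dependencies, runtime version) is declared in the platform’s config; everything inside is ordinary code.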

Trade-offs: Cold starts can add noticeable latency when a function hasn’t run recently. Debugging is harder when you don’t control (or even see) the execution environment. Vendor lock-in is a thing, since AFAICT each platform has its own deployment model, runtime constraints, and surrounding ecosystem of queues and databases. For anything that runs continuously or at high volume, per-invocation pricing can end up more expensive than just keeping a cheap VM around.
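That last trade-off is easy to put numbers on. With hypothetical prices (shaped like typical rate cards, but check your provider’s actual one), the break-even against a cheap always-on VM is just arithmetic:

```python
# Hypothetical prices, for illustration only.
PRICE_PER_MILLION_INVOCATIONS = 0.20  # USD, per-request charge
PRICE_PER_GB_SECOND = 0.0000167       # USD, compute charge
VM_MONTHLY_COST = 5.00                # USD, a small always-on VM

def serverless_monthly_cost(invocations, mean_seconds, memory_gb):
    """Rough monthly bill on a pay-per-invocation platform."""
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_INVOCATIONS
    compute_cost = invocations * mean_seconds * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# A bursty webhook (100k short invocations/month) costs pennies...
light = serverless_monthly_cost(100_000, mean_seconds=0.1, memory_gb=0.128)
# ...while a busy API (50M invocations/month) overtakes the cheap VM.
heavy = serverless_monthly_cost(50_000_000, mean_seconds=0.1, memory_gb=0.128)
```

Under these made-up numbers the light workload comes out well under the VM and the heavy one well over it, which is the whole point: serverless pricing rewards idleness.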

Still, for bursty workloads, webhooks, background jobs, and lightweight APIs, the model is genuinely convenient. These are useful for web API automation, running AI inference jobs, or hosting small services. Or whatever, really.

1 Tooling

  • Fly.io: A platform for developers to deploy applications and databases globally. Its value proposition is running apps “at the edge,” physically close to users, to dramatically reduce latency. It works by turning standard Docker containers into lightweight virtual machines that can be launched quickly across its 35+ global regions.
  • runpod: A serverless cloud platform specifically designed for AI and machine learning workloads. Its value proposition is providing on-demand, auto-scaling GPU compute. This allows developers to run compute-intensive AI models without managing servers, paying only for the processing time they use. That’s often significantly cheaper and faster than traditional cloud providers.
  • Val Town: A platform for writing, running, and scheduling small TypeScript/JavaScript functions (“vals”) directly in the browser. See e.g. Delivering the mail in Val Town.
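Fly.io, for instance, will run anything in a container that listens on a port; there is no special handler API to code against. A minimal sketch of such a service in plain Python stdlib (the endpoint and port are my own choices, not anything Fly prescribes):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class PingHandler(BaseHTTPRequestHandler):
    """Tiny HTTP service of the kind you'd package in a container."""

    def do_GET(self):
        body = b"pong"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep request logging quiet

def serve(port=8080):
    """Bind the server; the platform routes public traffic to this port."""
    return HTTPServer(("0.0.0.0", port), PingHandler)

# To run it for real: serve().serve_forever()
```

Wrap that in a Dockerfile and the platform handles placement, TLS, and scaling; the application itself stays a boring web server.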