Local social platforms: a technical implementation guide

Two deployment paths for a neighbourhood platform, from cloud hosting to closet servers

2026-03-25

Wherein two deployment courses are set forth—VPS and closet laptops—whilst PostGIS proximity search, passkeys, and postcard codes for postcode proof are described.

Tags: communicating, cooperation, diy, engineering, faster pussycat, institutions, networks, P2P, sovereign, straya, wonk

This is the technical companion to A social platform for your neighbourhood, which lays out the social and institutional case for community-owned local platforms. That post is the why. This one is the how—or at least a plausible how, since the right technical choices depend heavily on what the community actually wants to build.

As with the sovereign compute technical post, this is heavily AI-assisted research. Treat specific version numbers, pricing, and benchmarks as “serving suggestions”. In practice we’d design this more iteratively, starting with a simple prototype and evolving the architecture as we learn what the community actually needs. This post is… way more intense: I let the LLM spin out my minimal dot points into some seriously wargamed scenarios.

I’ll sketch two deployment paths: a cloud-hosted option that’s cheap and quick to stand up, and a local hardware option that gives the community full physical sovereignty over its data. These aren’t mutually exclusive—we’d likely start with cloud hosting and migrate to local hardware if the project proves viable.

Does that sound interesting? Get in touch.

1 Build or buy?

The first decision is whether to customize an existing platform or build from scratch.

1.1 Off-the-shelf: Bonfire

Bonfire is the most interesting pre-built option for this use case. It’s a modular, federated social platform built in Elixir/Phoenix, explicitly designed for community self-governance. It speaks ActivityPub, so it can interoperate with the Fediverse, and it’s built around “extensions”—pluggable modules for different social functions (discussion, coordination, economic exchange).

Pros:

  • Real-time by default (Phoenix LiveView—no polling, no page reloads)
  • Federation built in, so we can start local and open up later
  • Governance features designed in from the start (roles, boundaries, moderation)
  • Active development community, explicitly aligned with cooperative values
  • Elixir/BEAM runtime is famously good at handling many concurrent connections cheaply

Cons:

  • Relatively early-stage; documentation is patchy
  • The extension ecosystem is thin—we’d need to build or heavily customize extensions for marketplace, events calendar, and tool library functionality
  • Elixir is not a common skill; if our volunteer maintainers are Python or JavaScript people, the learning curve is steep
  • The modular architecture means deployment is more complex than “just run the Docker container”

1.2 Off-the-shelf: other options

Discourse is battle-tested forum software with a plugin ecosystem. It could handle discussion and events but would need significant customization for marketplace or tool-library functions. It runs on Ruby/Rails; hosting requirements are heavier than Bonfire’s (budget 2GB+ RAM as a minimum).

Mobilizon is Framasoft’s federated events platform, written in Elixir. It does events well, but only events—it’s not a general social platform. Could be part of a multi-service stack rather than the whole thing.

1.3 Custom build

Given that AI coding assistants can now scaffold a full-stack web application in a weekend, building from scratch is less crazy than it sounds. The advantage is total control over the feature set and UX. The disadvantage is total responsibility for maintenance, security, and ongoing development.

A sensible custom stack for this kind of project in 2026:

| Layer | Choice | Rationale |
|---|---|---|
| Backend | Python (Django) or Elixir (Phoenix) | Django: enormous ecosystem, easy to find contributors, great ORM. Phoenix: real-time by default, better concurrency model, but smaller talent pool |
| Frontend | HTMX + server-rendered templates, or SvelteKit | HTMX keeps things simple and server-authoritative. SvelteKit if we want a richer client experience |
| Database | PostgreSQL with PostGIS | PostGIS gives us spatial queries for free—essential for “things near me” in a neighbourhood platform |
| Search | Meilisearch or PostgreSQL full-text | Meilisearch is fast and typo-tolerant; Postgres FTS is simpler to deploy and good enough for our scale |
| Real-time | WebSockets (built into Phoenix; Django Channels for Django) | For live chat, marketplace notifications, etc. |
| Auth | Passkeys + email magic links | No passwords to manage or leak. Passkeys are the future; magic links are the pragmatic fallback |
| Federation | ActivityPub (optional, add later) | Don’t build this in v1. Add it when/if we want to connect to the Fediverse |

My instinct, given the audience for this project (community-minded people in Melbourne, likely to include a few developers but not a dedicated engineering team), is to start with Django + HTMX + PostgreSQL/PostGIS. It’s the most boring choice, and that’s the point—boring means the biggest pool of people who can maintain it, the most Stack Overflow answers, and the least risk the project dies because the one person who understood the framework moved to Tasmania.
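To make the PostGIS choice concrete, here is roughly the query a “things near me” view would run. This is a sketch only: the `listings` table and `location` column are illustrative names, and in Django this would normally go through GeoDjango’s ORM rather than raw SQL.

```python
def nearby_listings_sql(radius_m: int = 2000) -> str:
    """Build the parameterised SQL a proximity search might execute.

    %(here)s would be bound to the user's location, e.g. the point
    'SRID=4326;POINT(144.9631 -37.8136)' for central Melbourne.
    """
    return f"""
        SELECT id, title,
               ST_Distance(location, %(here)s::geography) AS metres_away
        FROM listings
        WHERE ST_DWithin(location, %(here)s::geography, {radius_m})
        ORDER BY metres_away
        LIMIT 50;
    """
```

`ST_DWithin` against a `geography` column can use a GiST spatial index, so “everything within 2km, nearest first” stays fast even as listings accumulate.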

We can scaffold the initial version with Claude or a similar coding assistant in a couple of dedicated weekends. A marketplace with listings, search, user profiles, reputation, and messaging is maybe 20–30 screens. An events calendar adds another 5–10. It’s not trivial, but it’s not a moonshot either.

2 Path 1: Cloud hosting

This is the sensible starting point. Low upfront cost, no hardware to maintain, easy to iterate.

2.1 Architecture

┌────────────────────────────────────────────┐
│           Reverse proxy (Caddy)            │
│       TLS termination, static files        │
├────────────────────────────────────────────┤
│  App server (Django/Gunicorn or Phoenix)   │
├─────────────────────┬──────────────────────┤
│   PostgreSQL        │   Redis              │
│   + PostGIS         │  (cache, sessions,   │
│                     │   WebSocket broker)  │
├─────────────────────┴──────────────────────┤
│           Object storage (S3/R2)           │
│       (user uploads, listing images)       │
└────────────────────────────────────────────┘

2.2 Hosting options and costs

For ~1,000 active users with moderate traffic (maybe 10,000 page views/day at peak, much less most of the time), we need very little compute.
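Back-of-envelope, to show just how little compute that is (the 20× peak-to-average ratio is my assumption, not a measurement):

```python
views_per_day = 10_000
avg_rps = views_per_day / 86_400   # averaged over a day: ~0.12 requests/second
peak_rps = avg_rps * 20            # assume traffic peaks at 20x the daily average
print(f"{avg_rps:.2f} avg rps, {peak_rps:.1f} peak rps")  # → 0.12 avg rps, 2.3 peak rps
```

A couple of requests per second at peak is nothing for any of the machines below.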

Option A: Single VPS (simplest)

| Provider | Spec | Monthly cost |
|---|---|---|
| Hetzner (CX32) | 4 vCPU, 8GB RAM, 80GB NVMe | €8.50 (~$14 AUD) |
| DigitalOcean | 4 vCPU, 8GB RAM, 160GB | $48 USD (~$75 AUD) |
| Vultr | 4 vCPU, 8GB RAM, 128GB | $48 USD (~$75 AUD) |
| BinaryLane (Australian) | 4 vCPU, 8GB RAM | ~$40 AUD |

Hetzner is absurdly cheap and reliable, but their nearest data centre is Singapore—latency from Melbourne is ~60ms, which is fine for a web app but noticeable for real-time features. BinaryLane and similar Australian providers cost more but give sub-10ms latency.

For the MVP, a single Hetzner CX32 at $14 AUD/month runs everything—app server, database, Redis, Caddy—with headroom. This is a latte a month.

Option B: Managed services (less ops burden)

| Service | What it does | Monthly cost |
|---|---|---|
| Railway or Render | App hosting | $7–25 USD |
| Neon or Supabase | Managed PostgreSQL + PostGIS | Free tier → $25 USD |
| Cloudflare R2 | Object storage (images) | ~$0–5 USD (generous free tier) |
| Upstash | Managed Redis | Free tier → $10 USD |

Total: $15–65 USD/month (~$25–100 AUD) depending on tier. More expensive than a raw VPS, but with way less ops burden—no server patching, no database backups to manage, and automatic scaling.

Option C: The free tier special

For a truly bootstrapped start:

  • Vercel (free tier) for the frontend, if we go with a SvelteKit or Next.js client
  • Fly.io (free tier: 3 shared-cpu VMs, 256MB RAM each) for the backend
  • Neon (free tier: 0.5GB storage, autosuspend) for PostgreSQL
  • Cloudflare R2 (free tier: 10GB) for images

Total: $0/month until we outgrow the free tiers, which for a neighbourhood of a few hundred users might be a while. The catch is that free tiers have cold-start latency (the app literally goes to sleep and takes a few seconds to wake up) and resource limits that will get annoying.

2.3 Deployment

The whole stack should be containerized from day one. A docker compose setup makes it possible to run the same stack locally for development, on a VPS for production, and on local hardware (Path 2) when we’re ready.

# docker-compose.yml (simplified)
services:
  web:
    build: .
    ports: ["8000:8000"]
    environment:
      DATABASE_URL: postgres://postgres:${POSTGRES_PASSWORD}@db:5432/localplatform
      REDIS_URL: redis://redis:6379
    depends_on: [db, redis]

  db:
    image: postgis/postgis:16-3.4
    environment:
      POSTGRES_DB: localplatform
      # the postgres/postgis images refuse to start without a password set
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes: ["pgdata:/var/lib/postgresql/data"]

  redis:
    image: redis:7-alpine

  caddy:
    image: caddy:2
    ports: ["80:80", "443:443"]
    volumes: ["./Caddyfile:/etc/caddy/Caddyfile"]

volumes:
  pgdata:

From here to production on a VPS: ssh in, install Docker, docker compose up -d, and point DNS at the server. That’s a Saturday afternoon, including lunch.

2.4 Backups

The database is the thing we cannot afford to lose. A daily pg_dump to object storage (Cloudflare R2, Backblaze B2) costs essentially nothing and is straightforward to automate. Test restores monthly—a backup you’ve never restored from is a hypothesis, not a backup.
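The nightly job is small enough to sketch here. The bucket name and the pre-configured `rclone` remote are assumptions; any S3-compatible CLI would do the same job.

```python
import datetime
import pathlib
import subprocess

def backup_name(db: str, day: datetime.date) -> str:
    """Deterministic dump filename, e.g. localplatform-2026-03-25.dump"""
    return f"{db}-{day.isoformat()}.dump"

def nightly_backup(db: str = "localplatform",
                   bucket: str = "r2:platform-backups") -> None:
    """Dump, ship off-site, clean up. Assumes pg_dump is on PATH and
    an rclone remote named 'r2' has been configured beforehand."""
    path = pathlib.Path("/tmp") / backup_name(db, datetime.date.today())
    subprocess.run(["pg_dump", "--format=custom", "-f", str(path), db], check=True)
    subprocess.run(["rclone", "copy", str(path), bucket], check=True)
    path.unlink()  # don't let dumps accumulate on the server's disk
```

Run it from cron or a systemd timer; `--format=custom` gives a compressed dump that `pg_restore` can selectively restore from.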

2.5 Data sovereignty note

If we use a European or American hosting provider, the community’s data lives in their jurisdiction. For a neighbourhood marketplace this is probably fine—we’re not storing state secrets, we’re storing listings for second-hand couches. But if the community cares about data sovereignty (and they might, given the ethos of this project), an Australian hosting provider or local hardware (Path 2) is the answer.

3 Path 2: Closet servers

This is the sovereign option. The community’s data lives on hardware the community physically owns, in someone’s house or a shared space. It costs more upfront, requires more operational skill, and is less reliable than cloud hosting—but it means no one can pull the plug on our community because of a terms-of-service change or a foreign government’s policy shift.

3.1 Hardware

We don’t need much. A neighbourhood platform for 1,000 users with moderate traffic is computationally trivial by modern standards—a decade-old laptop could handle it.

The minimum viable closet server:

Buy two second-hand business laptops. ThinkPads, Latitudes, HP EliteBooks—the kind of thing that comes off three-year corporate leases and shows up on eBay or Gumtree for $150–300 each. We go for something with 16 GB RAM, an Ethernet port, and a working battery. Ideally, we buy at least two of the same model.

Why two? Redundancy. One is the primary server; the other is a warm standby running the same stack, with the database replicated via PostgreSQL streaming replication. If the primary dies (hardware failure, house fire, someone trips over the power cord), the standby can take over.

Estimated hardware costs:

| Item | Cost |
|---|---|
| 2× second-hand business laptop | $300–600 AUD |
| 2× USB-C Ethernet adapter (if needed) | $30–50 AUD |
| 1× small Ethernet switch | $20–40 AUD |
| 1× external SSD for backups | $60–100 AUD |
| Misc cables | $20 AUD |

Total: $430–810 AUD. That’s about $4–8 per household in a 100-household community—less than a month’s coffee.

3.2 Networking

The same problem we discussed for the sovereign compute closet: Australian residential internet is not great, and running a server on NBN has quirks.

The requirements:

  • A static IP or dynamic DNS. Most NBN plans give you a dynamic IP, which changes periodically. A dynamic DNS service (e.g. DuckDNS, free) maps a hostname to your current IP automatically. A business-grade NBN plan with a static IP ($10–30/month more than residential) is more reliable.
  • Port forwarding or a tunnel. If the server is behind a home router, we need to forward ports 80 and 443 to the server. Alternatively, a Cloudflare Tunnel (free tier) creates an outbound connection from the server to Cloudflare’s edge, avoiding the need for port forwarding entirely and providing DDoS protection as a bonus. This is probably the right answer for most setups.
  • Upload bandwidth. A web application serving mostly text and small images is not bandwidth-heavy. At our scale, even a 20 Mbps upload link (typical for NBN FTTP 100/20 plans) would handle hundreds of concurrent users without breaking a sweat. Listing photos are the heaviest content; resizing them to reasonable dimensions on upload (say, 1200px wide, WebP format) keeps each image under 100KB.
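That resize-on-upload step might look like this with Pillow (the function name and size constants are illustrative):

```python
from io import BytesIO

from PIL import Image

MAX_WIDTH = 1200  # matches the "1200px wide, WebP" suggestion above

def shrink_for_listing(raw: bytes) -> bytes:
    """Resize an uploaded photo to at most MAX_WIDTH px wide, re-encoded as WebP."""
    img = Image.open(BytesIO(raw))
    if img.width > MAX_WIDTH:
        new_height = round(img.height * MAX_WIDTH / img.width)
        img = img.resize((MAX_WIDTH, new_height), Image.LANCZOS)
    out = BytesIO()
    img.convert("RGB").save(out, format="WEBP", quality=80)
    return out.getvalue()
```

Doing this once at upload time, rather than serving originals, is what keeps the upload link comfortable.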

The Cloudflare Tunnel approach is cool because it solves several problems at once: no static IP needed, no port forwarding, automatic TLS, DDoS protection, and it works even behind double-NAT (common with some NBN configurations). The server makes an outbound connection to Cloudflare; Cloudflare routes incoming requests back through that connection.
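The cloudflared configuration for this is short. A sketch, with the tunnel ID, hostname, and backend port as placeholders:

```yaml
# /etc/cloudflared/config.yml (illustrative)
tunnel: <tunnel-uuid>
credentials-file: /etc/cloudflared/<tunnel-uuid>.json
ingress:
  - hostname: platform.example.org
    service: http://localhost:80    # Caddy, or the app server directly
  - service: http_status:404        # catch-all rule cloudflared requires
```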

3.3 Operating system and maintenance

Install Ubuntu Server LTS (currently 24.04) on both laptops. Enable unattended-upgrades for automatic security patches. Run the same docker compose stack as the cloud version—the containers don’t care whether they’re running on Hetzner or on a ThinkPad in someone’s hallway cupboard.

Monitoring: a lightweight monitoring agent like Uptime Kuma (self-hosted, runs in Docker) sends alerts if the server goes down. Point an external uptime monitor (e.g. UptimeRobot, free for 50 monitors) at the public URL for an independent check.

3.4 Power and heat

Two laptops draw roughly 30–60 watts total under moderate load—negligible on the electricity bill (under $10/month at Australian rates). They generate less heat than a desk lamp. A hallway cupboard with a slightly open door is genuinely fine as a “data centre” for this scale of operation.
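Checking that electricity claim (the ~$0.30/kWh figure is a rough Australian retail rate, not a quote):

```python
watts = 45                               # midpoint of the 30-60 W estimate
kwh_per_month = watts * 24 * 30 / 1000   # 32.4 kWh
dollars = kwh_per_month * 0.30           # assume ~$0.30/kWh
print(f"~${dollars:.2f}/month")          # → ~$9.72/month
```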

A dedicated UPS is nice to have but not essential—the laptop batteries provide 30–60 minutes of runtime, which is enough to ride out most power blips. For a longer outage, the platform is simply down until power returns, which is fine—we’re not running a stock exchange.

3.5 Replication and failover

PostgreSQL streaming replication keeps the standby laptop’s database in near-real-time sync with the primary (seconds of lag, typically). A simple failover procedure:

  1. Primary goes down (detected by monitoring)
  2. Human (the designated ops person) SSHs into the standby
  3. Promote the standby to primary: pg_ctl promote
  4. Update the Cloudflare Tunnel or DNS to point at the standby’s IP
  5. Investigate and repair the primary; when it’s back, reverse the replication direction

This is not automatic failover (that’s complex to get right and risky to get wrong at small scale), but it’s a 10-minute procedure that any moderately competent Linux user can follow from a runbook.
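Setting up the standby in the first place is mostly one command. A sketch: the hostname, data directory, and replication role are assumptions, and the primary needs `wal_level = replica` plus a role with the REPLICATION attribute.

```shell
# Run once on the standby, as the postgres user (paths/names illustrative)
pg_basebackup \
  --host=primary.local --username=replicator \
  --pgdata=/var/lib/postgresql/16/main \
  --create-slot --slot=standby1 \
  --write-recovery-conf   # writes standby.signal + primary_conninfo for us
```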

3.6 Backups

Even with two machines, we want off-site backups. A nightly pg_dump compressed and encrypted with age or GPG, pushed to a cloud storage bucket (Backblaze B2: $5/TB/month) or even to a third community member’s machine via rsync. Belt and suspenders.

4 The migration path

Start on cloud hosting (Path 1). It’s cheaper, faster to set up, and lets the community focus on the social problem (do people actually want this?) rather than the operational one (is the server up?).

If the platform takes off and the community decides it wants data sovereignty, the migration to local hardware (Path 2) is straightforward because we’ve containerised everything from the start. The procedure is:

  1. Set up the closet servers and run the same docker compose stack
  2. Take a database dump from the cloud and restore it locally
  3. Switch DNS to point at the local server
  4. Keep the cloud instance as a fallback for a month
  5. Decommission the cloud instance

This should take an afternoon and the community will experience maybe 30 minutes of planned downtime.

5 Features roadmap

What do we actually build, and in what order? This is a suggestion, not a prescription—the community should decide.

5.1 v0.1: Marketplace MVP (Month 1–2)

  • User registration with postcode verification (enter postcode, receive a physical postcard with a code—low-tech, high-trust, and fun)
  • Listing creation: title, description, photos, price (or “free” or “swap”), category
  • Search and browse with PostGIS-powered proximity sorting
  • Messaging between buyer and seller
  • Basic reputation: completed transactions, star ratings, text reviews
  • Simple moderation: flag/report button, admin review queue

5.2 v0.2: Events + directory (Month 3–4)

  • Events calendar: create, browse, RSVP
  • Skills/services directory: “I’m a plumber,” “I do tutoring,” “I have a ute”
  • Tag-based discovery: search by skill, service, or interest
  • Notification preferences: email digest (daily/weekly) rather than push notifications—deliberately low-urgency

5.3 v0.3: Trust and governance (Month 5–6)

  • Vouching system: existing members can vouch for new members, creating a trust graph
  • Community proposals and voting (for platform rules, feature requests, spending decisions)
  • Transparency log: all moderation actions visible to members
  • Privacy controls: who can see your profile, your listings, your reviews

5.4 v1.0: Community infrastructure (Month 6+)

  • Tool/resource library: list things available to borrow, manage lending/returning
  • Mutual aid board: request or offer help, coordinated through the platform
  • Group purchasing: organise bulk buys (firewood, solar panels, etc.) at better rates
  • Integration with the friendly society (if one exists): membership management, communications, voting on society matters
  • AI-assisted features (if the community has access to sovereign compute): smart search, listing categorisation, translation for multilingual neighbourhoods, moderation assistance

6 AI-assisted development

Since I argued in the companion post that LLMs lower the development cost, let me be concrete about what that looks like.

A competent developer working with a coding assistant (Claude, Cursor, Copilot, or inference from the community’s own compute) can realistically:

  • Scaffold the entire v0.1 in 2–3 weekends (Django project structure, models, views, templates, basic CSS)
  • Generate test suites that would otherwise take days to write by hand
  • Draft moderation tooling (content classifiers, spam detection) using local models rather than shipping community data to a commercial API
  • Produce documentation and runbooks for operational procedures, reducing the bus factor

This doesn’t eliminate the need for human judgement—architecture decisions, UX design, security review—but it compresses the grunt work enough that a small volunteer team can make it go.

The sovereign compute connection is particularly neat here: if the community owns an LLM inference box, the same machine that serves the platform’s AI features (search, moderation, translation) can also be the coding assistant that helps maintain the platform itself. The tools build the tools.

7 Security considerations

A neighbourhood platform holds personal information (names, addresses, phone numbers, transaction history, private messages) for people who literally live near each other. A breach would be personally harmful in a way that’s different from a breach of a global platform—the attacker knows where you live.

Non-negotiable security measures:

  • TLS everywhere. Caddy handles this automatically with Let’s Encrypt certificates.
  • Passwords: don’t. Use passkeys where possible, email magic links as fallback. No password database to breach.
  • Input sanitisation. Django’s ORM and template system handle the common injection vectors, but we should still run a security-focused code review before launch.
  • Rate limiting. Prevent brute-force attacks on auth and abuse of messaging. Django-ratelimit or equivalent.
  • Encryption at rest. Full-disk encryption on the server (LUKS on the closet laptops; most cloud providers offer this as a checkbox).
  • Minimal data collection. Don’t store data we don’t need. If we don’t need to know someone’s exact address (we probably don’t—postcode is enough for locality), don’t ask for it.
  • Penetration testing. Before launch, ask a security-minded community member (or hire someone for a day) to try to break in. Fix what they find.

8 What this costs, total

8.1 Cloud path (first year)

| Item | Annual cost |
|---|---|
| Hosting (Hetzner CX32) | ~$170 AUD |
| Domain name | ~$20 AUD |
| Email (Fastmail or similar, for platform notifications) | ~$60 AUD |
| Backblaze B2 backups | ~$10 AUD |
| Incorporated association setup | ~$200 AUD |
| Security review (1 day, hired) | ~$500–1,000 AUD |
| Total first year | ~$960–1,460 AUD |
| Total ongoing (year 2+) | ~$320 AUD/year |

At 200 households paying $50/year in membership, revenue is $10,000/year. Running costs are under $500/year ongoing. The surplus funds development, maintenance stipends, and community activities.

8.2 Closet path (first year, additional to cloud year)

| Item | Cost |
|---|---|
| 2× second-hand laptops | $300–600 AUD |
| Networking gear | $50–90 AUD |
| Business NBN upgrade (static IP) | ~$200–360 AUD/year |
| Electricity | ~$60–100 AUD/year |
| Total hardware setup | $350–690 AUD (one-off) |
| Total ongoing | ~$260–460 AUD/year |

The cloud and closet paths cost roughly the same to run. The closet path has a higher upfront cost (roughly $350–700 one-off, on top of the cloud path), but gives us physical data sovereignty and no ongoing dependency on a hosting provider.

9 Open questions

  • Is Django + HTMX the right default stack, or should we optimize for the specific skills of whoever shows up to build it? (If the founding devs are Rails people, build it in Rails. If they’re Elixir people, use Phoenix. The technology matters less than the people.)
  • How do we handle identity verification without becoming creepy? The postcard-with-code idea is fun but has edge cases (apartments, PO boxes, renters who move). Vouching by existing members is socially robust but creates gatekeeping risks.
  • What’s the right moderation model for neighbourhood scale? Professional community managers are too expensive; pure volunteer moderation burns people out; algorithmic moderation misses context. Some hybrid is needed.
  • Should the platform support financial transactions (escrow for marketplace purchases, membership fee collection), or keep money external (bank transfers, cash on pickup)? Handling money adds regulatory complexity but removes friction.
  • How do we handle the “missing stair” problem—a community member who everyone quietly knows is problematic, but who hasn’t technically violated any rules?

As with everything in this series: if you have opinions, experience, or want to help build this, get in touch.