Updated Feb 14, 2026

WireGuard Mesh Networking

ACT establishes a private, full-mesh peer-to-peer network between all your servers using WireGuard. This allows secure, low-latency communication between services running on different servers without exposing ports to the public internet.

Architecture

The ACT WireGuard implementation uses a Full Mesh topology.

       [Server A]
      /    |     \
     /     |      \
[Server B]----[Server C]
     \     |      /
      \    |     /
       [Server D]

Every line represents a direct, encrypted peer-to-peer WireGuard tunnel.
  • Every commissioned server automatically becomes a peer to every other server in the same Organization.
  • Traffic within the mesh is encrypted and routed directly between servers (p2p), not through the control plane.
  • The control plane acts only as the coordinator, distributing public keys and endpoint information.
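Once the mesh is established, each server sees every other server as a peer. A `wg show` on any node would look roughly like the following (keys, endpoints, and addresses below are illustrative placeholders, not real values):

```
interface: wg0
  public key: <this-server-public-key>
  private key: (hidden)
  listening port: 51820

peer: <server-b-public-key>
  endpoint: 203.0.113.20:51820
  allowed ips: 100.64.0.2/32
  latest handshake: 12 seconds ago

peer: <server-c-public-key>
  endpoint: 203.0.113.30:51820
  allowed ips: 100.64.0.3/32
  latest handshake: 31 seconds ago
```

Note that each peer's `allowed ips` is a single /32 mesh address: traffic for that address is routed directly to that peer's tunnel, with no intermediate hop.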

Network Configuration

  • Interface: wg0
  • Subnet: 100.64.0.0/10 (CGNAT Space)
  • Port: 51820/udp
  • MTU: managed automatically by act-agent
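Putting those parameters together, the generated interface configuration on a server looks roughly like this sketch (the real file is written and managed by act-agent; keys and endpoint addresses are placeholders):

```
# /etc/wireguard/wg0.conf -- managed by act-agent, do not edit by hand
[Interface]
PrivateKey = <this-server-private-key>
Address    = 100.64.0.1/10
ListenPort = 51820

# One [Peer] block per other server in the Organization (full mesh)
[Peer]
PublicKey  = <server-b-public-key>
Endpoint   = 203.0.113.20:51820
AllowedIPs = 100.64.0.2/32
```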

Limits

The new Hyperscale Networking architecture uses Carrier-Grade NAT (CGNAT) address space:

  • Maximum Peers: a /10 CIDR provides 4,194,304 addresses (~4 million unique mesh IPs).
  • Scalability: Optimized act-agent userspace daemon uses efficient hash maps and eBPF filters to handle thousands of concurrent peers with minimal CPU overhead.
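The peer limit follows directly from the CIDR arithmetic: a /10 prefix leaves 22 host bits, so the mesh address pool holds 2^22 addresses. A quick check with Python's standard library:

```python
import ipaddress

# The CGNAT range used by the mesh: a /10 prefix leaves 32 - 10 = 22 host bits.
mesh = ipaddress.ip_network("100.64.0.0/10")

print(mesh.num_addresses)  # 4194304, i.e. 2**22 (~4 million)
```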

Security & Firewall

When you commission a server, ACT automatically:

  1. Generates a WireGuard private/public key pair securely on the server.
  2. Configures the wg0 interface.
  3. Updates ufw to allow traffic on UDP port 51820.

Note: For the mesh to work, your servers’ cloud provider firewall (Security Group) must allow inbound traffic on UDP port 51820 from the other servers’ public IPs.
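The host-level rule ACT applies is equivalent to the following (shown for reference only; ACT runs this for you during commissioning, and the peer IP is a placeholder):

```
# Allow WireGuard traffic on the standard port
ufw allow 51820/udp

# Or, more restrictively, per peer public IP:
ufw allow from <peer-public-ip> to any port 51820 proto udp
```

The cloud provider's Security Group needs a matching inbound rule, since it sits in front of the host firewall.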

Automatic Reconciliation

ACT uses a robust pull-based reconciliation model powered by the unified act-agent.

  1. State Generation: The Control Plane aggregates peer information and generates a MeshState.
  2. HTTP Polling: Agents periodically poll the /api/v1/mesh/config endpoint using an If-None-Match header with their current revision.
  3. Revision Tracking: The Control Plane uses ETags to ensure agents only download the state when it changes.
  4. Atomic Updates: The agent applies the new state and hot-reloads the networking configuration without dropping packets.
  5. Zero-Copy Deserialization: The agent uses bincode for highly efficient, zero-copy parsing of the mesh state.
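The conditional-GET pattern in steps 2–3 can be sketched as follows. This is a minimal in-process simulation, not ACT's actual API: the stand-in control plane returns `304 Not Modified` whenever the agent's cached ETag matches the current mesh revision, so unchanged state costs only a cheap round-trip.

```python
def fetch_mesh_config(if_none_match, server_etag, server_state):
    """Simulate GET /api/v1/mesh/config with an If-None-Match header."""
    if if_none_match == server_etag:
        # Revision unchanged: no body, agent keeps its current MeshState.
        return 304, None, server_etag
    # New revision: return the full MeshState plus its ETag.
    return 200, server_state, server_etag

# First poll: no cached revision yet, so the agent downloads the full state.
status, state, etag = fetch_mesh_config(None, "rev-7", {"peers": ["a", "b"]})
assert status == 200 and state is not None

# Subsequent polls with the cached ETag are inexpensive 304 responses.
status, state, _ = fetch_mesh_config(etag, "rev-7", {"peers": ["a", "b"]})
assert status == 304 and state is None
```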

This process ensures the mesh remains healthy and up-to-date with minimal latency, even as peers join or leave frequently.

Service Discovery & High Availability

ACT provides seamless service discovery and failover capabilities over the mesh network.

Internal DNS

Every service in the cluster is addressable via its service name. This allows services to communicate with each other securely over the private mesh network.

http://<service-name>

For example, if you have a service named api and a database named postgres, the API can connect to the database securely using postgres as the hostname.
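In practice, that means the service name goes straight into the connection string; for instance (credentials and database name here are illustrative):

```
DATABASE_URL=postgres://app:secret@postgres:5432/app
```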

Mesh Failover

ACT supports Weighted Load Balancing and Failover across mesh peers. When enable_mesh_failover is set on a resource, ACT configures Traefik to route traffic to available healthy instances across the mesh.

  • Load Balancing: Traffic is distributed across all healthy instances of a service running on different servers.
  • Failover: If a server goes down, traffic is automatically rerouted to remaining healthy instances on other servers.

To enable this feature, set enable_mesh_failover: true in your resource configuration.
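A minimal sketch of what that looks like in a resource file (the surrounding structure is illustrative; only the enable_mesh_failover key is taken from this page):

```
services:
  api:
    enable_mesh_failover: true
```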