# WireGuard Mesh Networking
ACT establishes a private, full-mesh peer-to-peer network between all your servers using WireGuard. This allows secure, low-latency communication between services running on different servers without exposing ports to the public internet.
## Architecture
The ACT WireGuard implementation uses a full-mesh topology:

```
        [Server A]
        /   |   \
       /    |    \
[Server B]----[Server C]
       \    |    /
        \   |   /
        [Server D]
```

Every line represents a secure, peer-to-peer WireGuard tunnel.
- Every commissioned server automatically becomes a peer to every other server in the same Organization.
- Traffic within the mesh is encrypted and routed directly between servers (p2p), not through the control plane.
- The control plane acts only as the coordinator, distributing public keys and endpoint information.
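To make the peering concrete, here is a sketch of the kind of `wg0` configuration the coordinator could produce for Server A: one `[Peer]` section per other server in the Organization. All keys, endpoints, and addresses below are placeholders, not values ACT actually emits.

```ini
# Illustrative wg0 config for Server A (all keys/IPs are placeholders)
[Interface]
PrivateKey = <server-a-private-key>
Address    = 100.64.0.1/10
ListenPort = 51820

# Full mesh: one [Peer] block per other server in the Organization
[Peer]
PublicKey  = <server-b-public-key>
Endpoint   = 203.0.113.20:51820
AllowedIPs = 100.64.0.2/32

[Peer]
PublicKey  = <server-c-public-key>
Endpoint   = 203.0.113.30:51820
AllowedIPs = 100.64.0.3/32
```

Note that `AllowedIPs` is scoped to a single `/32` per peer, so traffic for each mesh address is routed directly to that server rather than through any intermediary.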
## Network Configuration

- **Interface:** `wg0`
- **Subnet:** `100.64.0.0/10` (CGNAT space)
- **Port:** `51820/udp`
- **MTU:** managed by `act-agent` (high performance)
## Limits

The new Hyperscale Networking architecture uses Carrier-Grade NAT (CGNAT) address space:

- **Maximum Peers:** ~4 million unique IPs per mesh (using a `/10` CIDR).
- **Scalability:** The optimized `act-agent` userspace daemon uses efficient hash maps and eBPF filters to handle thousands of concurrent peers with minimal CPU overhead.
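The ~4 million figure follows directly from the `/10` prefix length; a quick check with Python's standard-library `ipaddress` module:

```python
import ipaddress

# The CGNAT block (RFC 6598) used for mesh addressing
mesh = ipaddress.ip_network("100.64.0.0/10")

# A /10 leaves 32 - 10 = 22 host bits: 2**22 addresses
print(mesh.num_addresses)  # -> 4194304
```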
## Security & Firewall
When you commission a server, ACT automatically:
- Generates a WireGuard private/public key pair securely on the server.
- Configures the `wg0` interface.
- Updates `ufw` to allow traffic on UDP port `51820`.
Note: For the mesh to work, your servers’ cloud provider firewall (Security Group) must allow inbound traffic on UDP port 51820 from the other servers’ public IPs.
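As one illustration of the cloud-firewall requirement, a Terraform rule for an AWS Security Group might look like the following. The resource names, security group, and peer IP are examples, not part of ACT:

```hcl
# Example only: allow WireGuard traffic from another mesh server's public IP
resource "aws_security_group_rule" "wireguard_mesh" {
  type              = "ingress"
  from_port         = 51820
  to_port           = 51820
  protocol          = "udp"
  cidr_blocks       = ["203.0.113.20/32"] # peer server's public IP (placeholder)
  security_group_id = aws_security_group.servers.id
}
```

You would need one such rule (or a shared source security group) covering each peer's public IP.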
## Automatic Reconciliation
ACT uses a robust pull-based reconciliation model powered by the unified act-agent.
- **State Generation:** The Control Plane aggregates peer information and generates a `MeshState`.
- **HTTP Polling:** Agents periodically poll the `/api/v1/mesh/config` endpoint using an `If-None-Match` header with their current revision.
- **Revision Tracking:** The Control Plane uses ETags to ensure agents only download the state when it changes.
- **Atomic Updates:** The agent applies the new state and hot-reloads the networking configuration without dropping packets.
- **Zero-Copy Deserialization:** The agent uses `bincode` for highly efficient, zero-copy parsing of the mesh state.
This process ensures the mesh remains healthy and up-to-date with minimal latency, even as peers join or leave frequently.
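The polling loop above can be sketched as follows. This is an illustration of the conditional-GET pattern, not the actual `act-agent` (which is a Rust daemon); `fetch_mesh_state` stands in for an HTTP GET against `/api/v1/mesh/config`, and the etag value and peer list are invented for the example.

```python
# Hypothetical control-plane state: current ETag plus the MeshState body
_server_state = {"etag": "rev-42", "body": {"peers": ["100.64.0.1", "100.64.0.2"]}}

def fetch_mesh_state(if_none_match):
    """Simulates GET /api/v1/mesh/config with an If-None-Match header."""
    if if_none_match == _server_state["etag"]:
        return 304, None, _server_state["etag"]  # Not Modified: skip download
    return 200, _server_state["body"], _server_state["etag"]

def reconcile(current_etag, apply_state):
    """One poll cycle: download only if the revision changed, then apply."""
    status, body, etag = fetch_mesh_state(current_etag)
    if status == 304:
        return current_etag  # state unchanged, nothing to do
    apply_state(body)        # stands in for the atomic apply + hot reload
    return etag

applied = []
etag = reconcile(None, applied.append)  # first poll downloads the state
etag = reconcile(etag, applied.append)  # second poll gets 304, no re-apply
print(len(applied), etag)               # -> 1 rev-42
```

Because unchanged revisions return `304 Not Modified`, frequent polling stays cheap: agents only pay the transfer and apply cost when the mesh actually changes.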
## Service Discovery & High Availability
ACT provides seamless service discovery and failover capabilities over the mesh network.
### Internal DNS
Every service in the cluster is addressable via its service name. This allows services to communicate with each other securely over the private mesh network.
```
http://<service-name>
```

For example, if you have a service named `api` and a database named `postgres`, the API can connect to the database securely using `postgres` as the hostname.
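Concretely, the `api` service could be given a connection string that uses the service name as the hostname. The configuration keys below (`services`, `env`, `DATABASE_URL`) are illustrative, not ACT's actual schema:

```yaml
# Illustrative service configuration: hostnames resolve over the mesh DNS
services:
  api:
    env:
      # "postgres" resolves to the database's mesh address via internal DNS
      DATABASE_URL: postgres://postgres:5432/app
```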
### Mesh Failover
ACT supports weighted load balancing and failover across mesh peers. By enabling `enable_mesh_failover` on a resource, ACT configures Traefik to route traffic to the healthy instances available across the mesh.
- Load Balancing: Traffic is distributed across all healthy instances of a service running on different servers.
- Failover: If a server goes down, traffic is automatically rerouted to remaining healthy instances on other servers.
To enable this feature, set `enable_mesh_failover: true` in your resource configuration.
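For instance (the `resources` wrapper and resource name are illustrative; only the `enable_mesh_failover` key is documented above):

```yaml
resources:
  api:
    enable_mesh_failover: true
```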