
Exposing all the ports using a WireGuard tunnel

Exposing a workload is not trivial. While you can use ngrok to expose a service through an ngrok proxy, we also offer a way to connect a WireGuard tunnel to the jobs.

This solution is well suited when you also need to forward UDP traffic.



How to use

1. Setting up a WireGuard cloud router

1.a. Deploying a server on the cloud

We recommend a cloud provider with fast network connectivity, such as Exoscale.

For the rest of this guide, we assume that you have deployed an instance with these specifications:

  • 1 vCPU, 1GB of RAM, 10 GB of persistent storage
  • 1 public IP

Since we are in a server-client setup, Network Address Translation (NAT) is necessary for the clients to communicate with the external network. To achieve this, the server uses masquerading: the network interface dedicated to WireGuard uses the Linux IP masquerade feature to perform NAT on the packets passing through it, rewriting their source IP to the WireGuard server's address. Client-side routing can then be configured by setting the allowedIPs field to the VPN subnet the clients may access. This NAT setup ensures proper communication and routing between the clients and the external network.
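The masquerading described above comes down to a single NAT rule on the server. As a sketch, assuming a VPN subnet of 10.0.0.0/24 and a public interface named eth0 (both are example values, adapt them to your deployment):

```shell
# Rewrite the source IP of packets leaving eth0 that come from the
# assumed VPN subnet 10.0.0.0/24, so replies route back to the server.
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE

# Inspect the NAT table to verify the rule is in place.
iptables -t nat -L POSTROUTING -n -v
```

Later in this guide, the same rule is installed automatically through the PostUp/PostDown hooks of the WireGuard configuration.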

For the sake of the example, we also deploy NGINX servers on the compute nodes, listening on port 8080/tcp. We will load-balance either with HAProxy or with iptables.

If you can deploy VyOS as an OS image, the setup will be a lot simpler, since it is an OS optimized for routers. You can also use an OPNsense image, a firewall OS with routing capabilities and a web dashboard.

Alternatively, you can just use a Linux OS Image with iptables/nftables.

If your cloud uses a firewall, make sure to open ports 22/tcp and the WireGuard port (often 51820/udp).

1.b. Set up WireGuard

  1. Start by installing WireGuard (shown here with dnf; use your distribution's package manager otherwise):

    sudo dnf install wireguard-tools

    For old kernels (< 5.6), you must also load the kernel module manually and make it persist:

    sudo modprobe wireguard
    echo wireguard > /etc/modules-load.d/wireguard.conf

    Check with:

    lsmod | grep wireguard
  2. Enable packet forwarding by editing the /etc/sysctl.conf file:

    net.ipv4.ip_forward = 1

    And apply it:

    sysctl -p /etc/sysctl.conf
  3. Create the server's private and public keys with:

    PK="$(wg genkey)"
    PUB="$(echo "$PK" | wg pubkey)"
    echo "PrivateKey = $PK"
    echo "PublicKey = $PUB" # You will send the public key to the job. Don't place it in the server configuration.

    For this example:

    ServerPrivateKey = 2GuuSBxL0pd1Mdv7sstzg2IYi5SO6TuuCEp+cDW8r0c=
    ServerPublicKey = uF7mD0B9CxMVBY+1tn+bBHu/QTBBYIjw5l/92vgF/yE=

    To set up the WireGuard virtual interface, create a file /etc/wireguard/wg0.conf. The file name indicates the name of the network interface. In this file, add:

    [Interface]
    Address = <server VPN address>
    ListenPort = 51820
    MTU = 1420
    PrivateKey = 2GuuSBxL0pd1Mdv7sstzg2IYi5SO6TuuCEp+cDW8r0c=
    # PublicKey = uF7mD0B9CxMVBY+1tn+bBHu/QTBBYIjw5l/92vgF/yE=
    # NAT: wg0 -> eth0 and any -> wg0
    PostUp = iptables -t nat -A POSTROUTING -s <VPN subnet> -o eth0 -j MASQUERADE
    PostUp = iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
    PostDown = iptables -t nat -D POSTROUTING -s <VPN subnet> -o eth0 -j MASQUERADE
    PostDown = iptables -t nat -D POSTROUTING -o wg0 -j MASQUERADE
  4. To add peers, create the private and public keys of the peers using the same command you used to create the server keys. Assuming you want to load-balance between 2 tasks, add to /etc/wireguard/wg0.conf:

    [Peer]
    # PrivateKey = iBSW/dma96OjnH098U5BEWzNcHIbsZqsCPiQ1DfP11c=
    PublicKey = nnhYC6rdEYyNqpTC9n0Q1ubnL5rmvbZIQ1IA75A1chk=
    AllowedIPs = <job 1 VPN IP>/32

    [Peer]
    # PrivateKey = wIQkWhjPnIg9GUg1dhH6FmEIftKuxZdkvaD9VjOWH1Q=
    PublicKey = hCIM4037XrySn6BSLKz194X+ulqE4+PTVg2il1W12TU=
    AllowedIPs = <job 2 VPN IP>/32
  5. Use iptables to load-balance and port forward. You can add PostUp and PostDown rules to add your iptables rules safely:

    # NAT: wg0 -> eth0
    PostUp = iptables -t nat -A POSTROUTING -s <VPN subnet> -o eth0 -j MASQUERADE
    # NAT: any -> wg0
    PostUp = iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
    # Accept traffic forwarded to the jobs
    PostUp = iptables -A FORWARD -p tcp -d <job 1 VPN IP> --dport 8080 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
    PostUp = iptables -A FORWARD -p tcp -d <job 2 VPN IP> --dport 8080 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
    # Forward packets and load-balance: the first rule matches 50% of new
    # connections, the second catches the remainder
    PostUp = iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 -m statistic --mode random --probability 0.5 -j DNAT --to-destination <job 1 VPN IP>:8080
    PostUp = iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 -j DNAT --to-destination <job 2 VPN IP>:8080
    # Cleanup
    PostDown = iptables -t nat -D POSTROUTING -s <VPN subnet> -o eth0 -j MASQUERADE
    PostDown = iptables -t nat -D POSTROUTING -o wg0 -j MASQUERADE
    PostDown = iptables -D FORWARD -p tcp -d <job 1 VPN IP> --dport 8080 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
    PostDown = iptables -D FORWARD -p tcp -d <job 2 VPN IP> --dport 8080 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
    PostDown = iptables -t nat -D PREROUTING -p tcp -i eth0 --dport 80 -m statistic --mode random --probability 0.5 -j DNAT --to-destination <job 1 VPN IP>:8080
    PostDown = iptables -t nat -D PREROUTING -p tcp -i eth0 --dport 80 -j DNAT --to-destination <job 2 VPN IP>:8080



    Depending on the OS that you are using, eth0 may instead be named en* (e.g., ens3).

    Note that this load balancing doesn't health-check the servers. Prefer HAProxy as an ingress for that.

    HAProxy configuration example

    global
        log         127.0.0.1 local2

        chroot      /var/lib/haproxy
        pidfile     /var/run/haproxy.pid
        maxconn     50000
        user        haproxy
        group       haproxy

        # turn on stats unix socket
        stats socket /var/lib/haproxy/stats

        # utilize system-wide crypto-policies
        ssl-default-bind-ciphers PROFILE=SYSTEM
        ssl-default-server-ciphers PROFILE=SYSTEM

    defaults
        mode http
        log global
        option httplog
        option dontlognull
        timeout connect 10s
        timeout client 30s
        timeout server 30s
        timeout tunnel 30s
        maxconn 50000

    # Frontends
    frontend my_frontend
        bind :80
        mode tcp
        default_backend my_backend

    # Backends
    backend my_backend
        mode tcp
        balance roundrobin
        server job1 <job 1 VPN IP>:8080 check
        server job2 <job 2 VPN IP>:8080 check

    Also, if you want to disable the load balancing, remove the -m statistic --mode random --probability 0.5 parameters and the extra port-forwarding rules.


    Make sure that iptables does not conflict with other firewall software (like FirewallD or UFW)!

    Also note that there are no rules allowing inbound traffic to these ports. If your iptables rules drop traffic by default, you can add these rules:

    # Allow HTTP (you can add this one in PostUp)
    iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT

    # Allow Wireguard (you can add this one in PostUp)
    iptables -A INPUT -p udp -m udp --dport 51820 -j ACCEPT

    # Allow SSH (you should have already opened this port)
    iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT

    # Allow ICMP (you should have already allowed this protocol)
    iptables -A INPUT -p icmp -j ACCEPT

    If you use HAProxy, you can remove the iptables forwarding rules, since traffic to the jobs then originates from the proxy itself.

    You can also use IPVS for load balancing, which offers better performance than iptables (but doesn't offer health checks):

    IPVS configuration
    # Add a virtual service with round-robin scheduling
    ipvsadm -A -t <public IP>:80 -s rr
    # Forward packets to the real servers (NAT mode)
    ipvsadm -a -t <public IP>:80 -r <job 1 VPN IP>:8080 -m
    ipvsadm -a -t <public IP>:80 -r <job 2 VPN IP>:8080 -m

    You can add these rules to PostUp.

    One last alternative, if you want high performance and high customization, is to look at the latest technologies: eBPF/XDP-based load balancing. BPF load balancing provides high performance, flexibility, and programmability in a safe and efficient way, allowing packets to be manipulated and analyzed at a low level with minimal overhead.

  6. Enable the interface and start it!

    chmod 600 /etc/wireguard/wg0.conf
    systemctl enable wg-quick@wg0
    systemctl start wg-quick@wg0

    Your cloud router is now configured!
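As a sanity check, the server configuration assembled in the steps above can be templated with a small POSIX shell function. The keys below are the example keys from this guide; the VPN subnet and addresses (10.0.0.0/24, 10.0.0.1, 10.0.0.2, 10.0.0.3) are assumed example values you should adapt:

```shell
#!/bin/sh
# Render a minimal server-side wg0.conf from variables.
# All addresses below are assumed example values -- adapt them.
render_wg0_conf() {
    server_privkey="2GuuSBxL0pd1Mdv7sstzg2IYi5SO6TuuCEp+cDW8r0c="
    peer1_pubkey="nnhYC6rdEYyNqpTC9n0Q1ubnL5rmvbZIQ1IA75A1chk="
    peer2_pubkey="hCIM4037XrySn6BSLKz194X+ulqE4+PTVg2il1W12TU="
    vpn_subnet="10.0.0.0/24"   # assumed VPN subnet
    server_addr="10.0.0.1/24"  # assumed server address
    peer1_ip="10.0.0.2"        # assumed job 1 address
    peer2_ip="10.0.0.3"        # assumed job 2 address

    cat <<EOF
[Interface]
Address = ${server_addr}
ListenPort = 51820
MTU = 1420
PrivateKey = ${server_privkey}
PostUp = iptables -t nat -A POSTROUTING -s ${vpn_subnet} -o eth0 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -s ${vpn_subnet} -o eth0 -j MASQUERADE

[Peer]
PublicKey = ${peer1_pubkey}
AllowedIPs = ${peer1_ip}/32

[Peer]
PublicKey = ${peer2_pubkey}
AllowedIPs = ${peer2_ip}/32
EOF
}

# On the cloud router, you would write it out as root:
# render_wg0_conf > /etc/wireguard/wg0.conf
render_wg0_conf
```

Each peer gets a /32 in AllowedIPs on the server side, so the server only routes that single address back through the tunnel to that peer.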

2. Sending jobs and connecting them to the WireGuard VPN

To connect your job to the WireGuard server, you can specify the client configuration in the workflow file, like this:

enableLogging: true

resources:
  tasks: 1
  cpusPerTask: 1
  memPerCpu: 1000
  gpus: 0

steps:
  - name: start-nginx
    run:
      container:
        image: nginxinc/nginx-unprivileged
      ## Use the container network interface slirp4netns (or pasta).
      network: slirp4netns # or pasta
      ## Need to map to root to start wireguard
      mapUid: 0
      mapGid: 0
      customNetworkInterfaces:
        - wireguard:
            ## The container network interface configuration.
            address:
              - '<job 1 VPN IP>/24'
            ## The container's private key. (wg genkey to generate)
            privateKey: iBSW/dma96OjnH098U5BEWzNcHIbsZqsCPiQ1DfP11c=
            ## Peers configuration.
            peers:
              ## The peer ID, which is also the public key of the peer.
              - publicKey: uF7mD0B9CxMVBY+1tn+bBHu/QTBBYIjw5l/92vgF/yE=
                ## Allowed routes. Packets targeting these ranges will be forwarded to the peer.
                allowedIPs:
                  - '<VPN subnet>'
                ## PersistentKeepalive (in seconds) is a setting that forces the tunnel to stay up.
                ## Often, it's the client that sets this setting.
                persistentKeepalive: 10
                ## The peer endpoint.
                endpoint: '<public IP>:51820'
      command: nginx -g "daemon off;"

A second job will have wIQkWhjPnIg9GUg1dhH6FmEIftKuxZdkvaD9VjOWH1Q= as its private key and <job 2 VPN IP> as its address.
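For reference, the first job's wireguard settings correspond to this standard WireGuard client configuration (a sketch; the Address and AllowedIPs values are assumed example addresses that must match the server's peer entry):

```
[Interface]
# Assumed job 1 VPN address; must match the server's AllowedIPs entry
Address = 10.0.0.2/24
PrivateKey = iBSW/dma96OjnH098U5BEWzNcHIbsZqsCPiQ1DfP11c=

[Peer]
# The cloud router's public key
PublicKey = uF7mD0B9CxMVBY+1tn+bBHu/QTBBYIjw5l/92vgF/yE=
# Assumed VPN subnet routed through the tunnel
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 10
Endpoint = <public IP>:51820
```

The fields of the workflow's wireguard block map one-to-one onto this file.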

Why not IPsec or OpenVPN?

WireGuard is seamlessly integrated into the Linux kernel, delivering superior performance and employing advanced encryption techniques. Its lightweight configuration makes it particularly well-suited for deployment in DeepSquare, providing a more straightforward setup compared to OpenVPN.

Thanks to its UDP-based nature and support for dynamic IPs, WireGuard inherently facilitates NAT traversal, rendering it highly suitable for server-client setups in a decentralized ecosystem like DeepSquare. As a result, traditional point-to-point VPN solutions like IPsec are not supported.

Overall, WireGuard is well suited to our environment. It simplifies establishing and maintaining connections across NAT devices, ensuring smooth and reliable communication between the server and clients, even in dynamic and diverse network environments like DeepSquare.


Manipulating network interfaces requires mapping the UID to root. See Mapping the UID of the container.