Deploying EVE-NG On Google Cloud Platform: Part 4

How to enable Internet access from an EVE-NG topology on Google Cloud Platform (GCP).

In Part 1, Part 2, and Part 3 of this blog series, we covered a lot of ground, including:

  • The step-by-step procedure for installing EVE-NG (eve-ng.net) on Google Cloud Platform (GCP).
  • Spinning up a simple topology in EVE-NG, consisting of Arista vEOS and Juniper vMX routers.
  • Connecting your EVE-NG topology to external VMs that reside in your GCP environment.

I have received a few comments from readers asking how to connect their EVE-NG topology nodes to the Internet. Of the several reasons for wanting this access, the most common seems to be the need to perform updates or install new packages on Linux VMs that are spun up as part of the topology. In this short final post of the series, I will go through the simple steps needed to enable Internet access from an EVE-NG topology on GCP.

Recap: Bridge Interfaces Created By EVE-NG

Recall from the Part 3 blog post that EVE-NG automatically creates various bridge interfaces on the host VM during installation. Two of these bridge interfaces, pnet0 and pnet9, are particularly important here.

As explained in the Part 3 blog post, we used pnet9 as the EVE-NG-to-GCP bridge, allowing our EVE-NG devices to access VMs that reside in GCP. We assigned pnet9 a static IP address (e.g. "192.168.249.1") from a designated management subnet (e.g. "192.168.249.0/24"). It is from this management subnet that we assign IP addresses to internal EVE-NG nodes.

Also recall that interface pnet0 is bridged with the EVE-NG host's primary physical Ethernet port, eth0, and is therefore assigned the IP address used to access the EVE-NG Web GUI. This is the same IP address used within GCP to access the EVE-NG host VM.
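
If you would like to double-check these bridges on your own host VM, standard iproute2 tooling will show their state and addresses (interface names as created by the EVE-NG installer, per the above; the example address is from the Part 3 setup):

ip addr show pnet0    # bridged to eth0; carries the EVE-NG host VM's primary IP
ip addr show pnet9    # the EVE-NG-to-GCP bridge; carries the management subnet IP (e.g. 192.168.249.1)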

Configuring an Iptables Rule on the EVE-NG Host VM

For the devices in our EVE-NG topology to access the Internet via GCP, we need to route their traffic to pnet0 and then NAT it out to the Internet, masquerading packets received from internal EVE-NG nodes so that they appear to have originated from the EVE-NG host VM itself.

We do this by adding the following iptables rule on the EVE-NG host VM (substituting your management subnet, e.g. "192.168.249.0/24", for "<MANAGEMENT_SUBNET>"):

sudo iptables -t nat -A POSTROUTING -o pnet0 -s <MANAGEMENT_SUBNET> -j MASQUERADE
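
One prerequisite worth verifying: masquerading only works if IP forwarding is enabled on the EVE-NG host VM. The EVE-NG installation normally enables this already, but you can confirm it (and, if needed, enable it for the current boot) as follows:

sysctl net.ipv4.ip_forward              # should print "net.ipv4.ip_forward = 1"
sudo sysctl -w net.ipv4.ip_forward=1    # only needed if the value above is 0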

  • To view the iptables rule just created, use the following command:
    sudo iptables -t nat -L
  • To delete the iptables rule just created, use the following command:
    sudo iptables -t nat -D POSTROUTING -o pnet0 -s <MANAGEMENT_SUBNET> -j MASQUERADE
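
Also keep in mind that an iptables rule added this way does not survive a reboot of the EVE-NG host VM. Re-adding it manually after each reboot works fine; alternatively, on Ubuntu-based EVE-NG installs, one option (a sketch, not the only approach) is to save the ruleset with the iptables-persistent package:

sudo apt-get install iptables-persistent    # prompts to save the current rules during install
sudo netfilter-persistent save              # re-run after any later rule changes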

Example: Configuring a Tiny Core Linux Node in EVE-NG

In my particular use case, I wanted to be able to spin up a Tiny Core Linux node within my EVE-NG topology and install some new packages on it (specifically, iperf). With the iptables rule set up on the EVE-NG host VM as shown above, all we need to do is perform the following three steps on the Linux node within the EVE-NG topology (a quick sanity check follows the list):

  1. Pick an unused IP address from the management subnet used in the iptables rule above and configure it on the Linux node, e.g.:
    sudo ifconfig eth0 192.168.249.29 netmask 255.255.255.0
  2. Add a default route pointing to the static IP address assigned to the pnet9 bridge interface, e.g.:
    sudo route add default gw 192.168.249.1
  3. Configure a DNS server, e.g.:
    echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf
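
With those three steps in place, a quick sanity check from the Tiny Core node confirms end-to-end connectivity. The iperf installation below uses Tiny Core's tce-load tool; the exact extension name ("iperf") is an assumption on my part, so check the Tiny Core repository if it differs for your release:

ping -c 3 8.8.8.8      # confirms routing via pnet9 and NAT on the EVE-NG host
ping -c 3 google.com   # confirms DNS resolution via the configured nameserver
tce-load -wi iperf     # downloads and installs the iperf extension (name assumed)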

And that's it! You should now be able to connect your EVE-NG topology nodes to the Internet. This completes the four-part blog series on Deploying EVE-NG On Google Cloud Platform (GCP).

-- Jag --