    --from-file=dashboard1.json=dashboard1.json \
```

## Providing IPv4 access to the Kubernetes master node

Every node on the virtual wall has a network interface with a public IPv6 address and a local IPv4 address.

These addresses can be found by running the `ifconfig` command in a terminal window on the node.

An example output of `ifconfig` on a Kubernetes master node:

```
cni0      Link encap:Ethernet  HWaddr f2:37:c6:01:ba:d4
          inet addr:10.244.0.1  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::f037:c6ff:fe01:bad4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:7143 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7575 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:487665 (487.6 KB)  TX bytes:2356989 (2.3 MB)

docker0   Link encap:Ethernet  HWaddr 02:42:8c:d5:13:2a
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

enp0s8    Link encap:Ethernet  HWaddr 00:30:48:78:f4:d0
          inet addr:10.2.0.211  Bcast:10.2.15.255  Mask:255.255.240.0
          inet6 addr: 2001:6a8:1d80:2021:230:48ff:fe78:f4d0/64 Scope:Global
          inet6 addr: fe80::230:48ff:fe78:f4d0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:350009 errors:1288 dropped:0 overruns:1287 frame:1
          TX packets:30006 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:349190116 (349.1 MB)  TX bytes:4712714 (4.7 MB)

(etc...)
```

Both addresses are found on the interface that lists an inet6 addr with `Scope:Global`. In the example above:

* public IPv6 address: 2001:6a8:1d80:2021:230:48ff:fe78:f4d0
* local IPv4 address: **10.2.0.211**
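
As a sketch, the two addresses can also be extracted from captured `ifconfig` output with a small awk script. The sample file below is an abridged copy of the example output above; the interface names and addresses are just those example values:

```shell
# Abridged copy of the example ifconfig output above.
cat > /tmp/ifconfig-sample.txt <<'EOF'
cni0      Link encap:Ethernet  HWaddr f2:37:c6:01:ba:d4
          inet addr:10.244.0.1  Bcast:0.0.0.0  Mask:255.255.255.0
enp0s8    Link encap:Ethernet  HWaddr 00:30:48:78:f4:d0
          inet addr:10.2.0.211  Bcast:10.2.15.255  Mask:255.255.240.0
          inet6 addr: 2001:6a8:1d80:2021:230:48ff:fe78:f4d0/64 Scope:Global
EOF

# Print the interface that carries the Scope:Global IPv6 address,
# together with its public IPv6 and local IPv4 address.
awk '
  /Link encap/ { iface = $1; ipv4 = "" }                       # start of a new interface block
  /inet addr:/ { sub("addr:", "", $2); ipv4 = $2 }             # remember its IPv4 address
  /inet6 addr:.*Scope:Global/ {                                # found the public IPv6 address
      sub("/.*", "", $3)                                       # strip the /64 prefix length
      print "interface:   " iface
      print "public ipv6: " $3
      print "local ipv4:  " ipv4
  }
' /tmp/ifconfig-sample.txt
```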

The local IPv4 address of the Kubernetes master node is what `<KUBE_MASTER_IP>` refers to elsewhere in this documentation.

This `<KUBE_MASTER_IP>` is accessible from your machine without further work if you're connected to the iGent network.

If you're not connected to the iGent network (and your network connection does not support IPv6), you need a public IPv4 address in the virtual wall to talk to.

The virtual wall offers several solutions; we'll select one of them below.

* Configure the Kubernetes master node (a bare metal server) to have a public IPv4 address.
  * We won't select this option, because it complicates the Kubernetes environment. We certainly do not want Kubernetes traffic over the public IPv4 address. Note also that Kubernetes has known issues with selecting a specific network interface.
* Add an extra virtual wall node inside the current experiment (a bare metal server or a XEN VM) with a public IPv4 address, and forward ports.
  * We won't select this option, because it adds unnecessary overhead when you're connected to the iGent network.
* Make a small helper experiment (rspec based), containing one node (a bare metal server or a XEN VM with a public IPv4 address, on the same virtual wall as the Kubernetes experiment), and forward ports. This solution relies on the fact that all virtual wall nodes on the same wall can connect to each other.
  * We'll select this option, because it leaves the original experiment unchanged. Follow the steps below.

Perform the next steps after the Kubernetes experiment has been started and is up and running.

In jFed, obtain the virtual wall node name of the Kubernetes master node, using the context menu `Show Node Info`. Example node name: **n079-01.wall1.ilabt.iminds.be**. We'll refer to this node name below as `<KUBE_MASTER_NODENAME>`.

In jFed, create a new rspec.

Drag a `XEN VM` onto the diagram. We select this type of resource because it's the easiest way to obtain a public IPv4 address; we follow [this documentation](https://doc.ilabt.imec.be/ilabt/virtualwall/network.html#requesting-a-public-ipv4-address-for-a-xen-vm). A `Physical Node` is another valid choice, but then you'll need to follow [this documentation](https://doc.ilabt.imec.be/ilabt/virtualwall/network.html#requesting-and-configuring-public-ipv4-addresses-on-bare-metal-servers).

Right-click on the node and configure it as follows:

* General
  * Node name: forwarder
  * Testbed: the same one where the Kubernetes experiment runs
* Routable Control IP
  * Routable Control IP: yes
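
For reference, the request rspec behind this configuration looks roughly like the sketch below. This is an illustrative fragment, not jFed output; the component manager URN and the emulab namespace URL are assumptions based on the public GENI rspec v3 format and the emulab `routable_control_ip` extension, so compare against what jFed actually generates for your testbed.

```
<?xml version="1.0" encoding="UTF-8"?>
<rspec type="request"
       xmlns="http://www.geni.net/resources/rspec/3"
       xmlns:emulab="http://www.protogeni.net/resources/rspec/ext/emulab/1">
  <!-- XEN VM named "forwarder" on the same testbed as the Kubernetes experiment -->
  <node client_id="forwarder"
        component_manager_id="urn:publicid:IDN+wall1.ilabt.iminds.be+authority+cm"
        exclusive="false">
    <sliver_type name="emulab-xen"/>
    <!-- corresponds to "Routable Control IP: yes" -->
    <emulab:routable_control_ip/>
  </node>
</rspec>
```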

Optionally, save this rspec for future use.

Run this experiment.

Open a terminal to the `forwarder` node.

Find its IPv4 address using `ifconfig`. Example forwarder IPv4 address: **193.190.127.170**. We'll refer to this address below as `<FORWARDER_IP>`.

Establish the SSH port forwardings. In the normal instructions, the following addresses are visited:

* `http://<KUBE_MASTER_IP>:<GRAFANA_PORT>`
* `http://<KUBE_MASTER_IP>:<INFLUXDB_PORT>/...`

So, in the terminal to the `forwarder` node, we forward as follows (in general):

```
ssh -N -f -L <GRAFANA_PORT>:<KUBE_MASTER_NODENAME>:<GRAFANA_PORT> -g localhost
ssh -N -f -L <INFLUXDB_PORT>:<KUBE_MASTER_NODENAME>:<INFLUXDB_PORT> -g localhost
```

or, in our example:

```
ssh -N -f -L 3000:n079-01.wall1.ilabt.iminds.be:3000 -g localhost
ssh -N -f -L 8086:n079-01.wall1.ilabt.iminds.be:8086 -g localhost
```
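
The two commands differ only in the port, so on the forwarder node they can also be generated from a small loop. A sketch, using the example node name and ports; adjust `KUBE_MASTER_NODENAME` and the port list to your own experiment:

```shell
# Print the SSH port-forwarding command for each published service port.
# The node name and ports below are the example values from this document.
KUBE_MASTER_NODENAME="n079-01.wall1.ilabt.iminds.be"
for PORT in 3000 8086; do   # GRAFANA_PORT and INFLUXDB_PORT in the example
  echo "ssh -N -f -L ${PORT}:${KUBE_MASTER_NODENAME}:${PORT} -g localhost"
done
```

Piping the loop's output into `sh` would execute the forwardings directly instead of printing them.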

Finally, we can access Grafana and InfluxDB from our local machine by replacing `<KUBE_MASTER_IP>` with `<FORWARDER_IP>` in the normal instructions!