posted on 2016-05-29 12:38
To create a permanent tunnel via
ssh between two hosts, some configuration has to be done on each side of the tunnel, so the tunnel is automatically created once the tunnel interface is brought up.
This tutorial is Debian-specific.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/sshvpn
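The public key still has to end up in the other side's authorized_keys; one way (reusing the example account from the interface configuration below) would be:

```shell
# install the freshly generated public key on the remote host
ssh-copy-id -i ~/.ssh/sshvpn.pub firstname.lastname@example.org
```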
Allow tunnelling on the server side in /etc/ssh/sshd_config.
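The relevant sshd directive is PermitTunnel; a minimal fragment (yes allows both layer 2 and layer 3 tunnels, point-to-point would restrict it to tun devices):

```
# /etc/ssh/sshd_config on the server
PermitTunnel yes
```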
Save and exit, then restart the daemon:
service ssh restart
Make ip forwarding available persistently, so it will be there across reboots:
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
Enable ip forwarding just for the current session:
sysctl -w net.ipv4.ip_forward=1
The interface stanzas go into /etc/network/interfaces. On the server side:

manual tun99
iface tun99 inet static
    address 192.168.0.2/30
    pointopoint 192.168.0.1
    up ip r a 10.0.0.0/24 via 192.168.0.1 dev tun99

On the client side:

manual tun98
iface tun98 inet static
    pre-up ssh -i /home/sjas/.ssh/sshvpn -M -S /var/run/sshvpn -f -w 98:99 firstname.lastname@example.org true
    pre-up sleep 5
    address 192.168.0.1/30
    pointopoint 192.168.0.2
    up ip r a 192.168.189.0/24 dev tun98
Starting the tunnel, on client-side:
ifup tun98
Stopping the tunnel, on client-side:
ifdown tun98
posted on 2015-07-10 07:56:07
There are a lot of explanations on creating ssh tunnels out there, and most are not worth much. Let's see if I can do better.
A tunnel involves only two endpoints.
Ok, fair enough. But you need to specify a minimum of three host locations for a working tunnel,
where two can point to the same machine, just viewed from different angles:
your local host (or at least its port), the gateway (the machine which will be the other tunnel endpoint), and the machine you are targeting.
The target can simply be localhost, if the target/destination host is the same machine as the gateway host.
More on that later, if this does not make sense yet.
Another misconception which is often prevalent: "How do I get the server port so I can access it locally?"
Actually the direction may seem unnatural:
Things depend on the source host, where the request (of whichever protocol is being used) will originate.
Tunnels have directions, which is what the
-L and -R flags are for.
The order in which the
ssh arguments are specified can actually be changed,
and rearranged they are quite a bit easier to grok.
This is basic tunnelling knowledge; where SSH tunnels differ from SSL/IPSEC VPNs, comments will indicate so.
Tunnelling connects non-routable networks with each other. (This is the case when one or both sites are behind a NAT.)
A tunnel is created between two endpoints, often called gateways. Encrypted pipes are created for securing traffic by encrypting packets between the endpoints.
On each side, other hosts can be reached. Depending on the tunnel type, you may or may not have access to the remote gateway. (SSH lets you access the remote gateway; with an IPSEC VPN (virtual private network) where application and endpoint run on the same box, you are in for some trouble. It works, but it is ugly to do.)
You also have to specify the hosts behind the endpoints. This can happen via subnets, or you can specify single hosts. (With SSH we will specify only single hosts here, no networks. Further, only one side behind the tunnel has to be specified; the other side's host 'behind' the tunnel endpoint is always located on the same machine as the gateway in question. The tunnel, whether of local or remote port forwarding type, lets you specify a host not located on the gateway. Don't worry, this will come later with a better explanation.)
On general VPNs:
If you did not specify the local and remote network, how could the remote party possibly know to which IP the data packets should be directed after they exit the tunnel? (For SSH, as already stated, only one host, either remote or local, which is not located on a gateway, can be specified. The other 'end' outside of the tunnel endpoint always lies on the gateway.)
A regular ssh tunnel is like the above mentioned tunnels, except that the gateways and the networks behind the ends (/32 networks, to be exact) reside on the same host (read: the gateway).
This guide assumes that you already know how to do this, it's the basic
ssh <hostname-or-ip> stuff.
To connect to a remote host, but hopping over a few other hosts in the process, simply chain the tunnels:
ssh <host1> ssh <host2> ssh <host3>
Since you will want proper terminals, use the
-t flag when doing so.
Add -A if you need agent forwarding, when wanting to copy files between hosts directly.
ssh -t -A <host1> ssh -t -A <host2> ssh -A <host3>
This chaining stuff will also work for port forwardings described below, but you really have to watch your ports, so things fit together.
local tunnelling / port forwarding
-L will forward a port on your side of the tunnel to a host on the other one.
That way you can reach over into the remote network.
The first use case here will be 'local' tunnelling with the
-L flag.
The port specified on the local side will be forwarded to the remote side.
This will be done so the web interface of a remote NAS behind a router with NAT is made externally accessible.
NAS means Network Attached Storage, a small data server that consumes little energy and provides file-level data access.
For this to work, the router has to be configured such that it does port forwarding of requests on its port 12345 to the ssh host you want to connect to, by knowing its IP and the port on which the ssh server on this machine runs. (Usually on port 22.)
Usually you see specifications like this one:
ssh -L 1337:192.168.0.33:443 <user>@<domain-or-ip> -p 12345
Easier to grasp should be this:
ssh <domain-or-ip> -l <user> -p 12345 -L localhost:1337:192.168.0.33:443
ssh to the host at
<domain-or-ip>, with the user specified by
<user> on port specified with
-p, which is 12345 here.
The port only has to be specified if SSH is not running on standard port 22.
This is the gateway part.
Then you pass the information on the local and the remote host, connected via the -L forwarding specification:
localhost is the bind address, on which the forwarded local port will listen, and
1337 is the port which will be used for accessing the webinterface.
Which is what you have to type into your browser: https://localhost:1337. (
If it were running with a different bind address, you'd have to use that one here, but then I likely would not have to tell you that. :)
localhost does not have to be specified; this is done just for illustration purposes.
What specifying another bindaddress does, is allowing others to use the tunnel, if
GatewayPorts is enabled for the local side. See
man ssh_config for more info.
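For illustration (same made-up ports and hosts as above), binding the forwarded port on all interfaces instead of just localhost could look like this:

```shell
# 0.0.0.0 as bindaddress: other hosts in our LAN may now use the forward,
# provided GatewayPorts is permitted for the local side
ssh <domain-or-ip> -l <user> -p 12345 -L 0.0.0.0:1337:192.168.0.33:443
```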
192.168.0.33:443 is the IP of the NAS system on the remote network behind the remote gateway, plus the port the webserver is running on there.
remote tunnelling / port forwarding
-R will forward a port from the remote site to your side of the tunnel.
That way hosts from your network can be reached remotely.
Going along with the example above, from within the LAN where the NAS is located:
ssh <domain-or-ip> -l <user> -p 12345 -R localhost:1337:192.168.0.33:443
<domain-or-ip> -l <user> -p 12345 is again the gateway information for the remote machine.
With -R, the remote listening port (and bindaddress!) plus the local destination are specified.
localhost here refers to the bindaddress on the remote server.
If it is explicitly set, ssh's
GatewayPorts directive/option has to be enabled in the server's sshd_config.
192.168.0.33:443 is just the location of the NAS again.
ssh -t <host1> -L 1337:localhost:1337 ssh -t <host2> -L 1337:localhost:1337 ssh <host3> -L 1337:192.168.0.33:443
The local browser can reach the far far away NAS via
https://localhost:1337, which is on the same network as <host3>.
If the NAS were SSH accessible, the complete path could be encrypted.
Since we can't (at least in my made up example), we will hop from
<host3> to it at its IP
192.168.0.33, and this is the only part of the connection, that cannot be encrypted. (This is just provided for educational purposes, such complex setups are usually unlikely in sane reality.)
Use -t for all hops prior to the last one.
This is for services bound to the loopback / 127.0.0.1 interface, and which are thus only locally available:
ssh <host1> -L 1336:<host2>:22
Then, from a second shell:
ssh localhost -p 1336 -L 1337:localhost:3306
NAS is again a bad example here, as usually these boxes do not have ssh daemons installed/running.
What we did above was simply building a tunnel to the host we want to hop onto, and then creating the port forward by connecting to the locally existing SSH tunnel. This may be useful for remote connections to mysql instances that usually can just be reached locally.
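To make the mysql case concrete (assuming a stock mysql client, and the second forward from above being up):

```shell
# the remote mysql instance, bound to 127.0.0.1:3306 on <host2>,
# is reachable through the local end of the chained tunnel
mysql -h 127.0.0.1 -P 1337 -u <user> -p
```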
Usually I have no use for this, but it might come in handy some day.
To create a SOCKS proxy via SSH:
ssh <domain-or-ip> -l <user> -p 12345 -D 192.168.0.2:1337
Here a specific bindaddress was used (
192.168.0.2), which is our local IP within our LAN. Do you remember the GatewayPorts option from above?
Any host connecting to our ssh tunnel running on port 1337 will be forwarded straight to the remote gateway.
The application has to know how to handle SOCKS connections, else this will not work.
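curl is one application that knows how to handle SOCKS; a quick way to test the proxy (flags as provided by current curl versions) would be:

```shell
# send the request through the SOCKS proxy on localhost:1337;
# --socks5-hostname also resolves DNS on the far side of the tunnel,
# -k because the NAS webinterface will likely use a self-signed certificate
curl --socks5-hostname localhost:1337 -k https://192.168.0.33/
```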
To keep up with our NAS example, I'd do:
ssh <domain-or-ip> -l <user> -p 12345 -D 1337
Then set up my web browser to use a SOCKS proxy, with address
localhost (since no bindaddress was given, unlike in the prior example) and port 1337.
https://192.168.0.33:443 can be entered into the address bar and the NAS is reachable.
Just keep in mind that other websites will not work through it.
When having to use software which is unaware of SOCKS proxies, the Point-to-Point Protocol (PPP) comes to help.
Also, this is a poor man's VPN when used to transfer all traffic through it, and not just traffic for a sole host or network.
Since I have not had this put to use yet, I cannot write much about it.
One link was on BSD, but I guess this helps with enlightenment. The shortest howto is the last one from the Arch wiki. Best may be the second one.