Posts from 2016-05

ssh vpn howto

posted on 2016-05-29 12:38

To create a permanent tunnel via ssh between two hosts, some configuration has to be done on each side, so the tunnel is established automatically whenever the tunnel interface is brought up.

This tutorial is debian-specific.


  • a keypair is created on the client side, for the sole purpose of activating the tunnel
  • the server network config is extended by an additional tun interface and a routing rule
  • authorized_keys on the server is modified so the key activates the tunnel and the tun interface on the server side
  • the client network config gets a tun interface and a routing rule, too
  • once the client tun interface is brought up, an ssh connection is established to the server, the server's tun interface comes up as well, and the tunnel is in place



ssh-keygen -t rsa -b 4096 -f ~/.ssh/sshvpn

server side

Allow tunnelling in /etc/ssh/sshd_config:

PermitTunnel point-to-point

Save and exit, and service ssh restart.

Make ip forwarding available persistently, so it will be there across reboots:

echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf

Enable ip forwarding just for the current session:

sysctl net.ipv4.ip_forward=1

Add to /etc/network/interfaces:

iface tun99 inet static
    address <server-tunnel-ip>
    netmask <netmask>
    up ip r a <client-lan> via <client-tunnel-ip> dev tun99

client side

iface tun98 inet static
    address <client-tunnel-ip>
    netmask <netmask>
    pre-up ssh -i /home/sjas/.ssh/sshvpn -M -S /var/run/sshvpn -f -w 98:99 sjas@<server> true
    pre-up sleep 5
    up ip r a <server-lan> via <server-tunnel-ip> dev tun98


Starting the tunnel, on client-side:

ifup tun98

Stopping the tunnel, on client-side:

ifdown tun98

gnu parallel instead of bash for loops

posted on 2016-05-24 18:15

If you have to iterate over a list of files/strings/whatever, gnu parallel comes in handy, once installed from your linux distro's package manager.

Then instead of:

for i in *; do echo "test $i"; done

You can simply do:

ls -1 | parallel echo "test "

# alternatively:
parallel echo 'test ' ::: `ls -1`

If you have more complex commands, simply double-quote the string handed to parallel. Use {} wherever you need a reference to the current argument.


parallel "echo 'number {} echoed'" ::: `seq 1 10`

which gives:

number 1 echoed
number 2 echoed
number 3 echoed
number 4 echoed
number 5 echoed
number 6 echoed
number 7 echoed
number 8 echoed
number 9 echoed
number 10 echoed

At first this does not look like much, but how often have you messed up for loops? The example is rather made up, but parallel pays off more the more complex your commands become.

debian permanent ctrl on caps in pty's and tty's

posted on 2016-05-23 00:52

To set both the pseudoterminals and the virtual consoles up to have CTRL instead of CAPSLOCK:

vim /etc/default/keyboard and set:

XKBOPTIONS="ctrl:nocaps"
Save and run:

dpkg-reconfigure -phigh console-setup

desktop installation documentation

posted on 2016-05-22 18:47

After running debian testing became annoying (kworker threads in state D killing network access), it was time to reinstall. This serves as documentation for the next time.

OS install

  • install debian 8 with encrypted lvm for root and swap partitions
  • use a usb-ethernet adapter, the wlan firmware is missing: iwlwifi-7265-9.ucode, iwlwifi-7265-8.ucode
  • install the firmware via usbstick during setup (copy it from another usb stick to /lib/firmware from another virtual console)
  • or do it after the installation (copy to /lib/firmware, install linux-firmware and linux-firmware-nonfree after adding the apt sources for contrib and non-free, then modprobe -r b43 and modprobe -r iwlwifi; not sure what exactly did the trick last time)
  • kde as regular window manager, desktop env, ssh server
  • reboot, enter grub, add nomodeset to the kernel line, otherwise the display just stays black with the 3.16 kernel
  • control on capslock

enable debian testing for newer kernel

cat << EOF >> /etc/apt/preferences.d/sjas
Package: *
Pin: release a=stable
Pin-Priority: 700

Package: *
Pin: release a=jessie-backports
Pin-Priority: 660

Package: *
Pin: release a=unstable
Pin-Priority: 90
EOF

fix apt sources

cat << EOF > /etc/apt/sources.list
deb <mirror> jessie main non-free contrib
deb-src <mirror> jessie main non-free contrib

deb <security-mirror> jessie/updates main non-free contrib
deb-src <security-mirror> jessie/updates main non-free contrib

# jessie-updates, previously known as 'volatile'
deb <mirror> jessie-updates main non-free contrib
deb-src <mirror> jessie-updates main non-free contrib

deb <mirror> unstable main contrib non-free
deb-src <mirror> unstable main contrib non-free
EOF

apt update -y

Then apt search linux-image and see which kernel is current:

apt install -y linux-image-<CURRENT_KERNEL>
apt install -y i3 htop openvpn vim git terminator firmware-linux* firmware-iwlwifi parted tree parallel mlocate apt-file hdparm nmon rsync mc ethstatus nmap traceroute tcpdump screen iftop iotop mytop curl wget sysstat bash-completion multitail chromium tmux ansible pwgen pv clusterssh clustershell freerdp-x11 rdesktop tmux libreadline-gplv2-dev python-apt aptitude

terminator config


  • ctrl-shift-hjkl for pane movement
  • ctrl-shift-f8/f10/f9 for broadcast all/group/none
  • ctrl-(shift)-tab for (prev)/next tab


  • solarized, customize red/blue/pink to be lighter
  • background 0.7 transparency
  • green blinking cursor


  • infinite scrollback
  • focus follows mouse


  • activity watch
  • inactivity watch
  • terminalshot
  • logger

i3 config

In ~/.i3/config the following has to be adjusted (jkl; instead of hjkl simply SUCKS):

# start browser
bindsym $mod+g exec google-chrome

# change focus
bindsym $mod+h focus left
bindsym $mod+j focus down
bindsym $mod+k focus up
bindsym $mod+l focus right

# move focused window
bindsym $mod+Shift+h move left
bindsym $mod+Shift+j move down
bindsym $mod+Shift+k move up
bindsym $mod+Shift+l move right

# split in horizontal orientation
bindsym $mod+semicolon split h

mode "resize" {
        # These bindings trigger as soon as you enter the resize mode

        # Pressing left will shrink the window’s width.
        # Pressing right will grow the window’s width.
        # Pressing up will shrink the window’s height.
        # Pressing down will grow the window’s height.
        bindsym h resize shrink width 10 px or 10 ppt
        bindsym j resize grow height 10 px or 10 ppt
        bindsym k resize shrink height 10 px or 10 ppt
        bindsym l resize grow width 10 px or 10 ppt

        # same bindings, but for the arrow keys
        bindsym Left resize shrink width 10 px or 10 ppt
        bindsym Down resize grow height 10 px or 10 ppt
        bindsym Up resize shrink height 10 px or 10 ppt
        bindsym Right resize grow width 10 px or 10 ppt

        # back to normal: Enter or Escape
        bindsym Return mode "default"
        bindsym Escape mode "default"
}

bar {
        #status_command sudo i3status --config /home/sjas/.i3/status.conf
        status_command i3status -c /home/sjas/.i3/i3status.conf
}

# kde-like screen locking ctrl-alt-l
bindsym Control+mod1+l exec i3lock

# make two monitors show up as one
#fake-outputs 3840x1080+0+0

cp -va /etc/i3status.conf /home/sjas/.i3/i3status.conf

Then vim /home/sjas/.i3/i3status.conf:

general {
        colors = true
        interval = 1
}

order += "ipv6"
order += "disk /"
order += "run_watch DHCP"
order += "run_watch VPN"
order += "wireless wlan0"
order += "ethernet eth0"
order += "volume master"
order += "battery 0"
order += "load"
order += "tztime local"

load {
        format = "⚇ %1min"
}

volume master {
        format = "♪: %volume"
        format_muted = "♪: muted (%volume)"
        device = "default"
        mixer = "Master"
        mixer_idx = 0
}

That's it so far, other things may be appended here eventually.

Don't forget to stop and disable bluetooth.

current blogpost-creation shortcut

posted on 2016-05-22 18:42

This is put here for documentation purposes:

DATE=$(date --rfc-3339=seconds)
CURRENT_COUNT=$(basename $(find . -iname "*.post" | sort | tail -1 ) | sed 's/-.\+//g')

FINAL_TITLE=$(echo $1 | sed 's/[[:digit:]]\+-//' | sed 's/-/ /g')

tags: todo
date: $DATE
format: md



DNS: resolution and reverse resolution script

posted on 2016-05-21 20:54

This is a quick-and-dirty for loop for checking a list of DNS A resource records using dig. CNAMEs are not handled as they should be and are not printed on the same line, so if they are in use the output cannot be parsed further without looking at it and curating it first.

for i in <domains>; do echo -n $'\e[33;1m'$i$'\e[0m '; TEMP=`dig +short $i`; echo -n "$TEMP "; TEMP=`dig -x $TEMP +short`; echo ${TEMP%.}; done | column -t

Instead of editing the for-loop each time, it might be easier to use a heredoc:

echo; cat << EOF | while read i; do echo -n $'\e[33;1m'$i$'\e[0m '; TEMP=`dig +short $i`; echo -n "$TEMP "; TEMP=`dig -x $TEMP +short`; echo ${TEMP%.}; done | column -t; echo

Paste this into the shell, followed by a paste of lines of the domains you want, and type EOF afterwards.




booknotes: how to solve it (g. polya)

posted on 2016-05-21 15:01

Note: WIP/TODO work in progress

As an exercise in distilling information from things I read, it may help to see how paraphrasing book contents works. This means extracting the chapter titles and maybe adding short notes, so when I reread them later I can see how much of the book itself I can still recall, along with the thought process behind it. Sometimes this may be the same as a reprint of the contents, sometimes things will be paraphrased.

Most of this book seems quite natural (ok, it is, besides the math parts at times), but the nice point is the way it gets you into its proposed mindset while reading, so you absorb the steps more self-consciously.

part 1: in the classroom


  1. helping the student - just lead the teaching
  2. questions, recommendations, mental operations - ask questions to guide their search
  3. generality - what's the problem, the data, its environment?
  4. common sense - get them to search for comparable/analogous problems they already solved for comparison
  5. teacher and student. imitation and practice. - implicitly convey how to teach the student

main divisions, main questions

  1. four phases - 'understand, plan, carry out, review'
  2. understanding the problem - try approaches that come naturally: draw, write, build, create a notation or domain language
  3. example - arouse curiosity by letting them bond with the environment so they familiarize
  4. devising a plan - break the problem into smaller subproblems that you have solutions for, for a complete solution
  5. example - repeat paraphrased questions, give more concrete examples
  6. carrying out the plan - check every step in the solution, and just 'do'
  7. example - check the amount of understanding being present, but don't push
  8. looking back - is the solution obvious, or can it be derived differently? can the solution/approach be used on other problems?
  9. example - can the argument and the result be checked, do they apply to all data, are they dimension-agnostic?
  10. various approaches - all of the above, revisited
  11. the teacher's method of questioning - be unobtrusive to the student's solution process, let them facilitate the mental habits of the above
  12. good questions, bad questions - make them understand the solution's steps one by one, don't shortcut anything

more examples

  1. a problem of construction - geometrical example
  2. a problem to prove - proof example
  3. a rate problem - differentiation example

part 2: how to solve it - a dialogue

getting acquainted

Essential substeps in each of the four phases:

Analyse the problem statement as well as you can.

working for a better understanding

Gather all facts and order them in ways that come naturally to you.

hunting for the helpful idea

After memorizing the problem and familiarizing yourself with its domain, remember similar problems' solutions, and ask others. Think of what you can gain in the aftermath. See how far you can get with your current ideas, even if they are incomplete.

carrying out the plan

Execute the idea of your solution, checking each step after it is done.

looking back

Simplify your solution as much as you can. Try applying it to other problems.

See how your problem-solving skill improved.

part 3: short dictionary of heuristic


analogy

  1. analogous objects agree in certain relations of their respective parts
  2. analogy is used on different levels
  3. you are lucky when you find a simpler analogous problem during problem-solving
  4. solve the simpler problem
  5. use the previous solution as a model for solving the current problem
  6. or use the result of a previous problem for solving the current problem
  7. or use both result(s) and model(s) of previous problems
  8. plausible forecasts of the solution are analogies, too, but handle with care
  9. more cases strengthen your analogy
  10. try discerning the relationship between solutions to get further clues, try induction for mathematical problems
  11. analogies can be as precise as mathematical ideas

auxiliary elements

  • various kinds of these exist, they are basically everything problem-related in the same domain
  • they can be 'related' results or approaches or domain elements
  • get a suitable notation
  • note why auxiliary objects were introduced

auxiliary problems

  • solve auxiliary problems to circumvent current obstacles
  • consider the time trade-off of going down other paths
  • variations of the current problem and looking at your unknown(s) help in finding these
  • equivalent problems, which are more special cases of the current one, are auxiliary problems, too
  • chains of auxiliary problems may lead to the solution, but be careful to only use equivalent problems
  • try solving more general problems to achieve the solution to the current special one

bernard bolzano

Some notes on Bolzano's motives for his writing on the subject of heuristics.

bright idea

Bright ideas are sudden leaps toward the solution, taking an inappreciable amount of time.

can you check the result/argument?

  • assess arguments / facts against common sense
  • generalize solutions and check with different input data
  • mix up the order of steps towards the solution
  • check that all the data is put to work
  • domain knowledge increases with more solutions to related problems

can you derive the result differently?

Try varying the steps towards the solution to come up with a better or a different one.

can you use the result?

Can the solution be applied to other problems, which are not variations of the previous one?

carrying out

  • heuristic reasoning is fine during planning
  • correctly checking each step during carrying out is then even more important
  • check major steps first
  • include the motives from your reasoning


condition

Requirements on the environment of the problem domain may

  • contradict each other
  • be redundant

Eliminate redundant conditions, check how contradicting conditions relate and when they matter.


corollary

A solution for another problem, upon which you stumble while solving the current problem.

could you derive something useful from the data?

See how the data relates to get nearer to the solution.

could you restate the problem?

Rephrase / redefine the problem to get different views.

decomposing and recombining

--- currently no time for reading ---

windows: static routing

posted on 2016-05-20 19:08

Handling of static routes in windows can easily be done through the command line.

Route information is specified like this:

route <add|delete> <destination> mask <netmask> <gateway>

Show the current routing table:

route print


Adding routes:

# temporary
route add <destination> mask <netmask> <gateway>
# permanent: just add the -p switch
route add <destination> mask <netmask> <gateway> -p

Deleting routes:

# temporary
route delete <destination> mask <netmask> <gateway>
# permanent: just add the -p switch
route delete <destination> mask <netmask> <gateway> -p
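A concrete example with made-up addresses, persistently routing the 10.0.0.0/8 network through a local gateway:

```
route add 10.0.0.0 mask 255.0.0.0 192.168.1.1 -p
```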

plesk: show mailpasswords

posted on 2016-05-19 07:08

To show all passwords for all mailaccounts on a plesk installation, do this:
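Assuming a Plesk installation of that era (8.x through 11.x), the bundled mail_auth_view utility dumps all mail accounts together with their passwords:

```
/usr/local/psa/admin/bin/mail_auth_view
```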


linux: lsyncd setup

posted on 2016-05-18 19:08

This was done on ubuntu, but should work accordingly on other linux systems.


apt-get install lsyncd
mkdir /etc/lsyncd
mkdir /var/log/lsyncd
touch /var/log/lsyncd/{lsyncd.log,lsyncd-status.log}

I did not know whether the folders and files under /var/log are created automatically, so I just created them myself.


vim /etc/lsyncd/lsyncd.conf.lua and do something like this:

settings = {
    logfile = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd-status.log",
    statusInterval = 20,
    nodaemon = false,
    maxProcesses = 5,
    maxDelays = 0
}

sync {
    default.rsync,
    source = "<local-dir>",
    target = "<user>@<remote-host>:<remote-dir>",
    rsyncOpts = {"-av", "--delete"}
}

sync {
    default.rsync,
    source = "<another-local-dir>",
    target = "<user>@<remote-host>:<another-remote-dir>",
    rsyncOpts = {"-av", "--delete"}
}

This config could be written differently. I don't know lua or what this supposedly is, so I stick with this very basic config.

The configuration is only needed on the host sending data. Of course, ssh has to be set up accordingly.


If you happen to have a lot of files, initial startup can take quite a while. First lsyncd starts and gets a list of all files (or whatever it is that it does), and afterwards the rsync subprocesses are started.

If you want to stop it, it might very likely hang. In that case do ps aux | grep -e lsync -e rsync and see what is running after you already did service lsyncd stop.

To see what exactly the server does, try iostat -xzd 1 from the sysstat package. While %util is at 100%, something is eating your IOPS and most likely it is still working.

Also use multitail /var/log/lsyncd/* (package multitail) to see what lsyncd does. No logs are written during the initial indexing, and almost none by the initial rsync's for the first sync.

Afterwards the logs will show entries for every new inotified and synced file. Test this by touching files and tailing the logs. :)

inotify's exhausted?

In case you get such an error, you ran into the inotify watch limit.

To check, do cat /proc/sys/fs/inotify/max_user_watches. If it's a number like 8192, this is simply too low. (You want to watch more files than that.)
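A quick sanity check, comparing the limit against the number of directories to be watched (lsyncd needs roughly one inotify watch per directory; /etc stands in for your sync root here):

```shell
# current per-user inotify watch limit
limit=$(cat /proc/sys/fs/inotify/max_user_watches)
# number of directories below the tree you want lsyncd to sync
dirs=$(find /etc -type d 2>/dev/null | wc -l)
echo "limit=$limit needed=$dirs"
```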

Temp fix:

echo 1048576 > /proc/sys/fs/inotify/max_user_watches

Permanent fix:

vim /etc/sysctl.conf and add:

fs.inotify.max_user_watches=1048576

Save and exit.

Infrastructural website tuning

posted on 2016-05-15 16:38


The following write-up leans on an article I read years ago, but this post's essence should still hold up today. It is written targeting the PHP landscape, and is not something you need for your personal blog. Do use a static website generator if you care for speed there, your wordpress is gonna be hacked some day anyway.

If you already have two servers, one for your webserver, one for your database, this is more likely for you.
If you already have a loadbalancer in front of two webservers, this is definitely for you.

But even big webshops usually do not put such measures in place, unless they really do care about their response times and thus about their google ranking.

Mostly this is guidance on how to tackle the customer's favourite complaint, 'It is so slow!', while providing some background.

highlevel approaches

When trying to fix a 'slow' website, there are several approaches.

  • fix the website code
  • throw hardware at the problem
  • change the underlying infrastructure (software-wise)

The first usually is not going to happen, as good web developers, especially in the PHP universe, are just rare. They fight their codebase and are happy when things work correctly. Performance comes second, and profiling their application is something they often never did or even heard about.

The second used to be a nice solution, but GHz numbers no longer improve as drastically as they did in the past. And since the memory wall gets hit, this also ceases to be a viable approach, no matter how fast you make your webserver connect to your database. SSD's do help, but only so much.

Which leaves us with option three, or the following measures in particular:

  1. SSD's (noted here for sake of completeness)
  2. handle sessions via redis
  3. separate static from dynamic content and serve each via different webservers
  4. browser caching
  5. accelerators, like squid or varnish in front
  6. opcode caches
  7. database caching via memcached to relieve the main database
  8. CDN's like akamai

fast storage

If you are about to migrate your website onto new hardware anytime soon (onto a new single server is what we talk about), think about getting SSD's. These have capacities of 250GB upwards, which in a RAID10 setup gives you about 500GB of usable, redundantly persisted space. No matter how big your web presence is, after subtracting 20GB for a linux operating system this leaves you with plenty of diskspace for whatever the future may hold.

For budgeting reasons, a RAID1 setup of two SSD's still providing ~230GB of space is usually sufficient, unless you plan on storing literally shitloads of FTP data or useless backups. Backups are to be done off-site on another server anyway. You don't need version control on your production server either (except for /etc maybe), unless you think you are a true DevOp, fight others to the bone about the agile kool-aid and know jack-shit anyway.

But, no hard feelings, point is, just get SSD's if you can afford them.

session handling

This is sort of a pre-requisite, depending on your overall approach; see the conclusion at the bottom for what this means.

If you happen to have a lot of sessions, this improves things a bit. Reading sessions from an in-memory database is just plain faster than letting the webserver fetch them from the harddisk every time. This is only true if redis runs on the same machine as your webserver, as network latency is almost always higher than the latency of disk I/O operations. If you have a direct crosslink with 10G NICs to your dedicated session server, this does not hold, but if you have that in place you sure as hell do not need this whole article.

If you however split your load onto several webservers behind a loadbalancer, and want a real 'shared-nothing' architecture, things are different. In that case, you don't have your loadbalancer configured to use sticky sessions, and so you need a central place where your sessions are managed. 'Sticky sessions' simply means each user is served by the same webserver every time he visits your website.

Unless you really think you need a shared nothing installation or have REALLY many sessions, you don't need this.

Use redis instead of memcached for this, as the former can persist the in-memory data to disk.
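With PHP this boils down to two php.ini lines (assuming the phpredis extension is installed and redis listens locally on its default port):

```ini
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379"
```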

static vs. dynamic content

Classify all your content into one of these two groups. Put all static content on a separate webserver, serving it under its own subdomain of your site. This frees up resources on your 'dynamic' webserver. If you put both on the same hardware and the dynamic webserver eats all the resources, this is of no use, of course. You need another server in that case.

In case you read this, and have zero clue what static vs. dynamic means:

  • static content = html files on your website
  • dynamic content = html code generated from your php code which is then inserted in the already existing html code mentioned as 'static'

'dynamic content' here is not to be confused with 'dynamic html', which rather means client-side changing of a website through changes to the DOM tree via javascript in the form of 'asynchronous javascript and xml' (AJAX).
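As a sketch of the static half, a minimal nginx vhost serving only files from disk under its own subdomain (hypothetical names and paths):

```nginx
server {
    server_name static.example.com;
    root /var/www/static;
    expires 7d;
}
```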

browser caching

Classify your content by how often it changes, maybe like this:

  • never change (6 months caching time)
  • seldom change (1 week)
  • often (1 day)
  • always (1 minute)

This is just rough guidance from the top of my head, adjust to your needs.

Set the caching headers of your HTTP responses accordingly, and let the users' browsers help you reduce your servers' load.
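With apache this can be done via mod_expires, mapping the rough classes above to content types (a sketch, adjust types and times to your site):

```apache
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png "access plus 6 months"
    ExpiresByType text/css  "access plus 1 week"
    ExpiresByType text/html "access plus 1 day"
    ExpiresDefault          "access plus 1 minute"
</IfModule>
```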

To still be able to exchange old content for new, add hashes to the URLs of your 'never-changes' content; this makes sure that when things change, new content will be served no matter what cache expiration times you use. These hashes have to be created automatically during your deployment process and be inserted into your application code, also automatically.
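A minimal sketch of such a deploy step, renaming an asset by its content hash (made-up file and naming scheme):

```shell
# write a sample asset
printf 'body { color: #333; }\n' > style.css
# take the first 8 hex chars of the content hash
hash=$(md5sum style.css | cut -c1-8)
# publish under the hashed name; the application code must reference it
cp style.css "style.${hash}.css"
ls style.*.css
```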

This is actually rather sophisticated, but otherwise you have the same problem as with 301 redirects: caching times and permanent redirects don't forgive fuck-ups on your behalf.

reverse-proxies for accelerating things

If you already separated dynamic from static content, what sense does it make to additionally put an accelerator in the form of a reverse-proxy like squid or varnish up front?

Accelerators create something like 'static snapshots' of your combined static and dynamic content, and serve them directly: the static html to be served is built from the already existing static html parts plus the html generated from the interpreted php code.

Some made up numbers for a big website and requests being possibly served per second:

  • dynamic content webserver: 100
  • static content webserver: 5k
  • accelerator: 250k

Sounds good?

It is important to differentiate HTTP GET from all the other request methods. GETs don't change things; POSTs or PUTs and such do. GETs can be served by the accelerator, but the others must pass through all your caching layers.
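In varnish VCL (4.x syntax) this distinction is a few lines, a sketch:

```vcl
sub vcl_recv {
    # only safe, idempotent methods may be answered from the cache
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }
}
```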

opcode caches

PHP instructions are parsed and translated into operation codes (bytecode for the interpreter's virtual machine) when they are executed by the php interpreter. To speed things up, opcode caches like APC basically cache the compiled instructions, so each request skips the parse/compile step.

Your website can become up to three times faster, just through the opcode cache.
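For PHP 5.5 and later the Zend OPcache ships with the interpreter and just needs enabling in php.ini (sample sizing, adjust to your codebase):

```ini
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
```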

database caching

memcached is an in-memory key-value store, caching often-used data from the database. Any questions why this might be beneficial? ;)

Sidenote: there exist both memcache and memcached, which are separate projects, don't get confused by that.

content delivery networks

CDN's only pay off for seriously big traffic spikes (read: if you have to wonder whether this applies to you, it doesn't, and you don't need a CDN).

CDN's are put in place by exchanging the subdomain pointing to your static webservers for another subdomain pointing to the CDN. This is helpful if you have single occasions, like special sale days, where you know your load will be ridiculously high and you'd need a lot more serving power than you usually do.

If you need your CDN to not just serve static content but complete sites, you can't just use your own loadbalancer. The CDN's loadbalancer must be configured and put to work.

So instead of getting more machines yourself, set things up accordingly and employ a CDN of your choice. Akamai is rather good.

Otherwise your machines will idle around 99.99% of the time you have them in place, and you would have a hard time making a profit off of them. Ever wondered why amazon is such a huge cloud provider? Those are just the machines that would otherwise do nothing until the next christmas season.


In case you have a single server running both a webserver and a database server, what are the easiest steps for speeding things up?

  1. opcode cache
  2. accelerator
  3. memcached

Also SSD's help, but usually you get them up front, not after your installation is already running, as there are migration fees to be paid if you want your provider to reinstall your hosting.

Further, categorize your content.

Once that is done, implement browser caching.

The next step would be more hardware and distributing the load onto several webservers.

So what if you already have more than one server?

Do the first three points mentioned above and caching.

Then set up dedicated session handling, so your load will be distributed more evenly across your servers when using a loadbalancer.

For setting everything else up, you should know what you are doing and not just be reading this.


Unless otherwise credited all material Creative Commons License by sjas