Posts from 2015-05

mysql: show users (with proper syntax and host accesses)

posted on 2015-05-22 12:34:19

To have a proper user overview, use this:

SELECT CONCAT(QUOTE(user),'@',QUOTE(host)) UserAccounts FROM mysql.user ORDER BY user;

Which will give you this:

mysql> SELECT CONCAT(QUOTE(user),'@',QUOTE(host)) UserAccounts FROM mysql.user ORDER BY user;
+--------------------------------+
| UserAccounts                   |
+--------------------------------+
| 'debian-sys-maint'@'localhost' |
| 'root'@'localhost'             |
| 'root'@'my-hostname'           |
| 'root'@''                      |
| 'root'@'::1'                   |
+--------------------------------+
5 rows in set (0.00 sec)

mysql: show table sizes

posted on 2015-05-22 12:07:28

To show all table sizes for a given database, ascending, so largest are shown last:

# change "$DB_NAME" !

SELECT table_name AS "Tables", 
round(((data_length + index_length) / 1024 / 1024), 2) "Size in MB" 
FROM information_schema.TABLES 
WHERE table_schema = "$DB_NAME"
ORDER BY (data_length + index_length) ASC;

To query just for a single table from a given database:

# change "$DB_NAME" and "$TABLE_NAME" !

SELECT table_name AS "Table", 
round(((data_length + index_length) / 1024 / 1024), 2) "Size in MB" 
FROM information_schema.TABLES 
WHERE table_schema = "$DB_NAME"
 AND table_name = "$TABLE_NAME";
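For scripting this, the placeholders can be filled from shell variables. A minimal sketch, assuming the mysql client is set up with credentials; the names mydb and mytable are made-up examples:

```shell
# build the size query from shell variables; mydb/mytable are placeholder names
DB_NAME=mydb TABLE_NAME=mytable
query="SELECT table_name AS 'Table',
round(((data_length + index_length) / 1024 / 1024), 2) 'Size in MB'
FROM information_schema.TABLES
WHERE table_schema = '$DB_NAME' AND table_name = '$TABLE_NAME';"
echo "$query"        # once credentials are in place, use: mysql -e "$query"
```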

Found these gems on stackoverflow.

git submodules

posted on 2015-05-22 01:12:29

git submodules are the solution for having one repository reference another repository.

A use case for me is using my emacs dotfile repo from within the main dotfile repository.

## clone with submodules
git clone --recursive <repo>

## after clone, but without '--recursive' initialize submodules
git submodule update --init

## status
git submodule status

## add submodule, don't forget to commit the change afterwards
git submodule add <repo-url> <reponame>

## update submodule status
git submodule update

That should be submodules in a nutshell.
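The commands above, played out in throwaway repos; a sketch where the paths and the "lib" name are made up for illustration (the protocol.file.allow bit is only needed on newer git versions for local-path submodules):

```shell
set -e
tmp=$(mktemp -d)

git init -q "$tmp/lib"                       # the repo to be embedded
git -C "$tmp/lib" -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "init"

git init -q "$tmp/app"                       # the repo that references it
git -C "$tmp/app" -c protocol.file.allow=always \
    submodule add "$tmp/lib" lib             # records url + path in .gitmodules
git -C "$tmp/app" -c user.email=me@example.com -c user.name=me \
    commit -q -m "add lib submodule"

cat "$tmp/app/.gitmodules"                   # this file is what the commit tracks
```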

thin vs. thick provisioning

posted on 2015-05-15 23:12:29

Since I always mix these up:

thin = disk is created, but not fully allocated up front. overprovisioning is possible.
thick = disk is allocated at its full size at creation time

VMware further distinguishes lazy and eager zeroed thick disks:

lazy zeroed thick:
    disk is created at its full size, nothing else is done at creation time.
    But every newly used block is zeroed first:
        thus a little slower than eager provisioning during runtime.

eager zeroed thick:
    disk is created AND zeroed during creation, thus wiped completely up front.
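The same distinction exists at the file level between sparse and fully allocated disk images; a small sketch (file names are made up, GNU coreutils assumed):

```shell
tmp=$(mktemp -d)
truncate -s 10M "$tmp/thin.img"              # "thin": size is set, no blocks allocated
dd if=/dev/zero of="$tmp/thick.img" bs=1M count=10 status=none  # "thick": every block written
thin_alloc=$(du -k "$tmp/thin.img" | cut -f1)    # near 0 KiB actually used
thick_alloc=$(du -k "$tmp/thick.img" | cut -f1)  # ~10240 KiB actually used
echo "thin=$thin_alloc KiB  thick=$thick_alloc KiB"
```

Both files report the same apparent size, but only the thick one occupies it on disk.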

RAID5 vs. RAID10

posted on 2015-05-14 16:18:58

In an earlier post I mentioned a Linux README where someone ranted about RAID levels. As it seems, that file is no longer present, so here's a reprint.

zless /usr/share/doc/mdadm/RAID5_versus_RAID10.txt.gz

# from
# also see
# Note: I, the Debian maintainer, do not agree with some of the arguments,
# especially not with the total condemning of RAID5. Anyone who talks about
# data loss and blames the RAID system should spend time reading up on Backups
# instead of trying to evangelise, but that's only my opinion. RAID5 has its
# merits and its shortcomings, just like any other method. However, the author
# of this argument puts forth a good case and thus I am including the
# document. Remember that you're the only one that can decide which RAID level
# to use.

RAID5 versus RAID10 (or even RAID3 or RAID4)

First let's get on the same page so we're all talking about apples.

What is RAID5?

OK here is the deal, RAID5 uses ONLY ONE parity drive per stripe and many RAID5 arrays are 5 (if your counts are different adjust the calculations appropriately) drives (4 data and 1 parity though it is not a single drive that is holding all of the parity as in RAID 3 & 4 but read on). If you have 10 drives of say 20GB each for 200GB RAID5 will use 20% for parity (assuming you set it up as two 5 drive arrays) so you will have 160GB of storage. Now since RAID10, like mirroring (RAID1), uses 1 (or more) mirror drive for each primary drive you are using 50% for redundancy so to get the same 160GB of storage you will need 8 pairs or 16 - 20GB drives, which is why RAID5 is so popular. This intro is just to put things into perspective.
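The capacity figures in that intro can be recomputed in a few lines of shell arithmetic, using the example's own numbers (10 drives of 20GB, RAID5 set up as two 5-drive arrays):

```shell
drives=10 size_gb=20 arrays=2
# RAID5: one parity drive per 5-drive array -> 8 data drives
raid5_usable=$(( (drives - arrays) * size_gb ))
# RAID10: every drive is mirrored, so usable capacity is half the raw capacity;
# how many drives does RAID10 need for the same usable space?
raid10_drives_for_same=$(( 2 * raid5_usable / size_gb ))
echo "RAID5 usable: ${raid5_usable}GB; RAID10 needs ${raid10_drives_for_same} drives for the same"
```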

RAID5 is physically a stripe set like RAID0 but with data recovery included. RAID5 reserves one disk block out of each stripe block for parity data. The parity block contains an error correction code which can correct any error in the RAID5 block, in effect it is used in combination with the remaining data blocks to recreate any single missing block, gone missing because a drive has failed. The innovation of RAID5 over RAID3 & RAID4 is that the parity is distributed on a round robin basis so that there can be independent reading of different blocks from the several drives. This is why RAID5 became more popular than RAID3 & RAID4 which must synchronously read the same block from all drives together. So, if Drive2 fails blocks 1,2,4,5,6 & 7 are data blocks on this drive and blocks 3 and 8 are parity blocks on this drive. So that means that the parity on Drive5 will be used to recreate the data block from Disk2 if block 1 is requested before a new drive replaces Drive2 or during the rebuilding of the new Drive2 replacement. Likewise the parity on Drive1 will be used to repair block 2 and the parity on Drive3 will repair block 4, etc. For block 2 all the data is safely on the remaining drives but during the rebuilding of Drive2's replacement a new parity block will be calculated from the block 2 data and will be written to Drive 2.
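The parity mechanism described above is plain XOR; a toy sketch with three small integers standing in for the data blocks of one stripe:

```shell
# parity block = XOR of all data blocks in the stripe
d1=170 d2=85 d3=15
parity=$(( d1 ^ d2 ^ d3 ))
# "drive 2 fails": its block is recomputed from the survivors plus parity
rebuilt=$(( d1 ^ d3 ^ parity ))
echo "rebuilt=$rebuilt (original was $d2)"
```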

Now when a disk block is read from the array the RAID software/firmware calculates which RAID block contains the disk block, which drive the disk block is on and which drive contains the parity block for that RAID block and reads ONLY the one data drive. It returns the data block. If you later modify the data block it recalculates the parity by subtracting the old block and adding in the new version then in two separate operations it writes the data block followed by the new parity block. To do this it must first read the parity block from whichever drive contains the parity for that stripe block and reread the unmodified data for the updated block from the original drive. This read-read-write-write is known as the RAID5 write penalty since these two writes are sequential and synchronous the write system call cannot return until the reread and both writes complete, for safety, so writing to RAID5 is up to 50% slower than RAID0 for an array of the same capacity. (Some software RAID5's avoid the re-read by keeping an unmodified copy of the original block in memory.)

Now what is RAID10:

RAID10 is one of the combinations of RAID1 (mirroring) and RAID0 (striping) which are possible. There used to be confusion about what RAID01 or RAID10 meant and different RAID vendors defined them differently. About five years or so ago I proposed the following standard language which seems to have taken hold. When N mirrored pairs are striped together this is called RAID10 because the mirroring (RAID1) is applied before striping (RAID0). The other option is to create two stripe sets and mirror them one to the other, this is known as RAID01 (because the RAID0 is applied first). In either a RAID01 or RAID10 system each and every disk block is completely duplicated on its drive's mirror. Performance-wise both RAID01 and RAID10 are functionally equivalent. The difference comes in during recovery where RAID01 suffers from some of the same problems I will describe affecting RAID5 while RAID10 does not.

Now if a drive in the RAID5 array dies, is removed, or is shut off data is returned by reading the blocks from the remaining drives and calculating the missing data using the parity, assuming the defunct drive is not the parity block drive for that RAID block. Note that it takes 4 physical reads to replace the missing disk block (for a 5 drive array) for four out of every five disk blocks leading to a 64% performance degradation until the problem is discovered and a new drive can be mapped in to begin recovery. Performance is degraded further during recovery because all drives are being actively accessed in order to rebuild the replacement drive (see below).

If a drive in the RAID10 array dies data is returned from its mirror drive in a single read with only minor (6.25% on average for a 4 pair array as a whole) performance reduction when two non-contiguous blocks are needed from the damaged pair (since the two blocks cannot be read in parallel from both drives) and none otherwise.

One begins to get an inkling of what is going on and why I dislike RAID5, but, as they say on late night info-mercials, there's more.

What's wrong besides a bit of performance I don't know I'm missing?

OK, so that brings us to the final question of the day which is: What is the problem with RAID5? It does recover a failed drive right? So writes are slower, I don't do enough writing to worry about it and the cache helps a lot also, I've got LOTS of cache! The problem is that despite the improved reliability of modern drives and the improved error correction codes on most drives, and even despite the additional 8 bytes of error correction that EMC puts on every Clariion drive disk block (if you are lucky enough to use EMC systems), it is more than a little possible that a drive will become flaky and begin to return garbage. This is known as partial media failure. Now SCSI controllers reserve several hundred disk blocks to be remapped to replace fading sectors with unused ones, but if the drive is going these will not last very long and will run out and SCSI does NOT report correctable errors back to the OS! Therefore you will not know the drive is becoming unstable until it is too late and there are no more replacement sectors and the drive begins to return garbage. [Note that the recently popular IDE/ATA drives do not (TMK) include bad sector remapping in their hardware so garbage is returned that much sooner.] When a drive returns garbage, since RAID5 does not EVER check parity on read (RAID3 & RAID4 do BTW and both perform better for databases than RAID5 to boot) when you write the garbage sector back garbage parity will be calculated and your RAID5 integrity is lost! Similarly if a drive fails and one of the remaining drives is flaky the replacement will be rebuilt with garbage also propagating the problem to two blocks instead of just one.

Need more? During recovery, read performance for a RAID5 array is degraded by as much as 80%. Some advanced arrays let you configure the preference more toward recovery or toward performance. However, doing so will increase recovery time and increase the likelihood of losing a second drive in the array before recovery completes resulting in catastrophic data loss. RAID10 on the other hand will only be recovering one drive out of 4 or more pairs with performance ONLY of reads from the recovering pair degraded making the performance hit to the array overall only about 20%! Plus there is no parity calculation time used during recovery - it's a straight data copy.

What about that thing about losing a second drive? Well with RAID10 there is no danger unless the one mirror that is recovering also fails and that's 80% or more less likely than that any other drive in a RAID5 array will fail! And since most multiple drive failures are caused by undetected manufacturing defects you can make even this possibility vanishingly small by making sure to mirror every drive with one from a different manufacturer's lot number. ("Oh", you say, "this scenario does not seem likely!" Pooh, we lost 50 drives over two weeks when a batch of 200 IBM drives began to fail. IBM discovered that the single lot of drives would have their spindle bearings freeze after so many hours of operation. Fortunately due in part to RAID10 and in part to a herculean effort by DG techs and our own people over 2 weeks no data was lost. HOWEVER, one RAID5 filesystem was a total loss after a second drive failed during recovery. Fortunately everything was on tape.)

Conclusion? For safety and performance favor RAID10 first, RAID3 second, RAID4 third, and RAID5 last! The original reason for the RAID2-5 specs was that the high cost of disks was making RAID1, mirroring, impractical. That is no longer the case! Drives are commodity priced, even the biggest fastest drives are cheaper in absolute dollars than drives were then and cost per MB is a tiny fraction of what it was. Does RAID5 make ANY sense anymore? Obviously I think not.

To put things into perspective: If a drive costs $1000US (and most are far less expensive than that) then switching from a 4 pair RAID10 array to a 5 drive RAID5 array will save 3 drives or $3000US. What is the cost of overtime, wear and tear on the technicians, DBAs, managers, and customers of even a recovery scare? What is the cost of reduced performance and possibly reduced customer satisfaction? Finally what is the cost of lost business if data is unrecoverable? I maintain that the drives are FAR cheaper! Hence my mantra:


Art S. Kagel

Linux Software RAID: revisited

posted on 2015-05-13 14:55:51

Having done a Linux install based on software RAID and LVM some time ago, with the help of the Debian installer, I found out the hard way that booting from it is possible, but only if the first disk works. Maybe I did something wrong, but I wasn't able to fix the install or find the point where I erred. A reinstall by hand will be a nice learning experience, so here we go.

get a livedisk

To partition the disks manually, you need a livedisk. There are many of these out there; google for the one of your choosing. I ended up using the Kali live disk from last time, but had to manually install mdadm every time, as described in the last blog post. You need mdadm and the LVM tools for the following.

Usually you will get an .iso file, and dd will help you put the ISO onto a stick. If for some reason a stick will not work, you might also try burning a CD and running it from there.

boot from the live disk

Depending on your BIOS setup (UEFI booting will not be covered here), you might have to adjust the boot order so your system boots from the stick. After having a running OS, open a shell.



Workstation Setup List

posted on 2015-05-12 14:59:59

Every time I reinstall a workstation, the question arises what I need, independent of the OS being used. To get that stuff out of my memory, here's a halfway ordered list:



  • vim
  • emacs


  • sup


  • openvpn
  • openssh
  • openssh-server


  • Chrome
  • Chrome vim bindings
  • Firefox / Iceweasel
  • Firefox vim bindings
  • youtube / flash-plugin


  • xclip
  • Klipper


  • bash
  • python
  • ruby
  • perl
  • sbcl / ccl


  • Git
  • SVN
  • make
  • cmake
  • autoconf
  • m4
  • darcs / CVS / others are not important, as they are rarely needed and can be installed on demand.


  • tmux



  • PS1 (a.k.a. shell prompt)
  • VISUAL=vim
  • EDITOR=vim


  • vim (syn on, incsearch, hls)
  • emacs config (baremetal emacs just sucks, big time.)
  • shortcut for opening a terminal (how to do this depends grossly on the window manager)

It might make more sense to get that stuff into a puppet or ansible script, but until now this list suffices, and keeping a script up to date would be more work than it'd help.

Linux: Wake-On-LAN

posted on 2015-05-11 22:04:35

Getting a computer to start remotely, without having someone push the power button, can easily be achieved via the NIC's wake-on-LAN feature. The only prerequisites are access to another computer within the same LAN, a WOL-capable machine and proper setup.

NOTE: In some BIOSes or UEFIs the WOL / wake on lan feature has to be enabled explicitly.

First check if your NIC is able to do it, and which NIC you need.

Use ip a in a shell and look up your active NIC, the one holding an IP address and not being the loopback device. This should be the cabled ethernet connection, as, aside from newer Macs (Snow Leopard / OSX 10.6 and above), the trigger will not work via WiFi.

check for functionality

Then have a look at the capabilities and the current setting:

ethtool <NIC> | grep Wake

which may give you something like:

[root@jerrylee /home/jl]# ethtool eno1 | \grep Wake
        Supports Wake-on: pumbg
        Wake-on: g

If the line with Wake-on is set to d, WOL is disabled. From the manpage:

          p   Wake on PHY activity
          u   Wake on unicast messages
          m   Wake on multicast messages
          b   Wake on broadcast messages
          a   Wake on ARP
          g   Wake on MagicPacket
          s   Enable SecureOn password for MagicPacket
          d   Disable  (wake  on  nothing).  This option
              clears all previous options.

Here I have 'Wake on MagicPacket' already enabled.
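For checking the flag in a script, the Wake-on line can be parsed with awk. A sketch; the ethtool output is canned here (the NIC name would be an assumption anyway), in real use it would come from `ethtool <NIC>`:

```shell
# canned excerpt of `ethtool <NIC>` output
out='Supports Wake-on: pumbg
	Wake-on: g'
# pick the value of the "Wake-on:" line (not the "Supports Wake-on:" line)
wol=$(printf '%s\n' "$out" | awk -F': ' '$1 ~ /^[[:space:]]*Wake-on$/ {print $2}')
if [ "$wol" = d ]; then echo "WOL disabled"; else echo "WOL enabled: $wol"; fi
```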

enable it

ethtool --change <NIC> wol g

use it

At another host within your network, you only have to know the IP or MAC address of the machine in question and have the wakeonlan package (Debian, via apt-get) or wol package (Red Hat derivatives, via yum) installed.

Have a look at ip n, which is short for ip neigh, to get the MAC:

root@pi:~# ip n
dev eth0 lladdr 34:31:c4:1b:1e:b7 REACHABLE
dev eth0 lladdr 70:71:bc:9d:bd:e1 STALE

You can also put a .txt file on the host, containing the MAC.
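Extracting the MACs from that output can also be scripted; a sketch fed from one captured sample line (the 192.0.2.10 address is a made-up documentation IP):

```shell
sample='192.0.2.10 dev eth0 lladdr 70:71:bc:9d:bd:e1 STALE'
# print the field following each "lladdr" token
mac=$(printf '%s\n' "$sample" \
  | awk '{for (i = 1; i < NF; i++) if ($i == "lladdr") print $(i + 1)}')
echo "$mac"
```

In real use, replace the sample with `ip n` output.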

If I wanted to start the machine behind the second MAC shown above, I'd have to use:

wol 70:71:bc:9d:bd:e1

And the machine will boot.

This will also persist, even when using ifup / ifdown on the interface in question.


To see what can trigger a boot of your machine, see here:

cat /proc/acpi/wakeup

tmux primer

posted on 2015-05-11 21:19:05

Like an earlier post on screen, this is a primer on tmux to get up and running as fast as possible.

tmux 'feels' faster and, according to rumors, has cleaner code, thus it does not crash as easily. Also the shortcuts, manpage, everything felt more natural and easier to memorize. ctrl-b is also a better shortcut than screen's ctrl-a, which is often needed for jumping to the beginning of the line in bash. And the pane borders are only a pixel wide, which is just great.

In short: on a server, use screen, tmux otherwise. Why? Most likely your peers will know screen already, but will not want to have anything to do with tmux. :)

Further, 'tmux' has sessions containing windows containing panes, whereas 'screen' only has sessions containing windows, as far as I remember.

In the following, every command that does not start with tmux is a hotkey, the former are shell commands. For hotkey commands you have to be within a running tmux session.

global hotkey

# needed for every command you will want to enter inside tmux
CTRL-b

general handling


## general help overview, bindings via tmux
?

## bindings via shell, if one tmux instance already runs
tmux lsk





show tmux messages

~


session management

start a named session

tmux new -s <session-name>

kill session

tmux kill-session -t <session-name>

list available sessions

tmux ls

reattach named session

tmux a -t <session-name>

# if only one session is running
tmux a

choose session (= tmux instance) via menu

s


window management

open new window

c


exit / close current window (tmux session, if last window)

&


choose window via menu

w


rename current window

,


search for text in all windows

f


moving around windows

# go to previous window
l

# jump to window by id
0, 1, 2, 3, 4, 5, 6, 7, 8, 9

# next/previous window
n / p


pane management

split / open panes

# vertical
%

# horizontal
"

close current pane

x


break current pane out of current window (into new window)

!


moving around panes

just use the arrow keys, they will work, too

# next pane in current window
o

# rotate panes forwards / backwards (so next pane is put where the current was)
CTRL-o / M-o

# show pane id's
q

# jump to pane by number
q <number>

# go to previous pane
;

resizing panes

# by one character
CTRL-<arrow key>

# by five characters
M-<arrow key>

rearranging panes

# swap current with next pane
}

# swap current with previous pane
{


scrolling

# enter copy mode to scroll back (leave with q)
[

or use the mousewheel


show time

t


config things


bind-key & kill-window
bind-key x kill-pane 
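The two bindings above drop tmux's default confirm-before prompt for killing windows and panes. A small ~/.tmux.conf sketch along those lines; the extra settings are examples of my own choosing, not required:

```shell
# ~/.tmux.conf -- sketch, adjust to taste
bind-key & kill-window          # kill window without the default confirm prompt
bind-key x kill-pane            # same for panes
set -g history-limit 10000      # scrollback lines kept per pane
set -g base-index 1             # number windows from 1, matching the keyboard row
```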

Linux: iftop manual

posted on 2015-05-07 14:33:55

Linux iftop is a nice tool when watching traffic in realtime. Sadly, the base settings are not the most helpful.

So try these for a change, after starting iftop:

p  -  toggle port display
L  -  logarithmic traffic scale 
s  -  hide source host
N  -  port resolution off
t  -  toggle sent, received, sent+received, send and received display

which will give you something like this here: (sadly, the traffic bars are not shown)

         10b        100b        1,00kb     10,0kb      100kb      1,00Mb 10,0Mb
 * :443                   <=>  * :53269                     0b   37,9kb  11,0kb
 * :80                    <=>  * :21400                  4,79kb  20,0kb  5,00kb
 * :80                    <=>  * :20141                  27,7kb  19,6kb  4,89kb
 * :80                    <=>  * :50604                  52,4kb  17,9kb  4,47kb
 * :80                    <=>  * :58073                  16,3kb  16,3kb  6,05kb
 * :22                    <=>  * :27883                  19,0kb  14,8kb  12,3kb
 * :80                    <=>  * :50086                     0b   14,8kb  3,69kb
 * :80                    <=>  * :52441                   480b   14,4kb  4,88kb
 * :80                    <=>  * :50581                  71,5kb  14,3kb  3,58kb
 * :80                    <=>  * :49450                  11,3kb  13,9kb  5,05kb
 * :80                    <=>  * :57972                     0b   13,8kb  3,44kb
 * :80                    <=>  * :37680                     0b   13,7kb  3,42kb
 * :80                    <=>  * :49312                  6,93kb  13,6kb  3,41kb
 * :80                    <=>  * :49723                  13,5kb  13,6kb  6,09kb
 * :80                    <=>  * :4442                   15,5kb  13,6kb  3,39kb
 * :80                    <=>  * :53240                  13,4kb  13,4kb  6,69kb
 * :443                   <=>  * :51954                  13,4kb  13,4kb  5,15kb

TX:             cum:   28,0MB   peak:   3,18Mb  rates:   2,75Mb  2,86Mb  2,79Mb
RX:                    28,5MB           3,01Mb           2,83Mb  2,87Mb  2,84Mb
TOTAL:                 56,5MB           6,18Mb           5,58Mb  5,73Mb  5,63Mb

The bars are the actual traffic taking place; the logarithmic scale on top helps with reading them.

To move down/up, use j/k.

The three rate columns show traffic averages over 2s, 10s and 40s; which one is used for sorting is chosen via 1, 2, 3.

The bars can also be toggled, to reflect the 2s, 10s and 40s aggregation.

Linux and VNC

posted on 2015-05-03 11:39:03

previous VNC problems

Linux and VNC was a pain point for me in the past, as a regular VNC (read: vncserver) will give you headaches when trying to view the current display. You can open a second session, but you will not see the currently running X session.

Enter x11vnc

For the next steps, root privileges are assumed, and that you are in the same network as your VNC machine.

Install the x11vnc package for your OS via its package manager and create a startup script like this one:

cat << 'EOF' > ~/vnc && chmod a+x ~/vnc
x11vnc -env FD_XDM=1 -auth guess -ncache 10 &
EOF

Now you can just ssh into the machine in question, run ./vnc and have a proper VNC server running. It will even work at the login screen of the display manager, before any user is logged into the desktop environment of the target machine.

On your machine (not the VNC server), assuming you have a package installed providing vncviewer, do:

vncviewer <hostname-or-ip-of-server>:5900

If it connects fullscreen, try pressing F8 to get a menu so you can switch to windowed mode or exit the session when done.

Exiting also kills the running x11vnc instance, so if you want to connect with VNC again, ssh into the machine and rerun the ./vnc script above.

security considerations

This setup only suffices for an internal network, as no authentication measures are in place. You aren't even asked for a password when connecting.

Also, running the application as root should be considered bad practice.

For further securing your setup in case you need it, you might have to create an auth cookie:

xauth generate :0

and use it accordingly, as well as running it with a proper user and user rights.

Linux: Which display manager do I run?

posted on 2015-05-03 11:09:32

To easily determine the display manager you are running, this should usually suffice to a pretty high degree:

ps auxf | awk '{print $11}' | \grep -e "^/.*dm$" -e "/.*slim$" 
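To show what the grep patterns actually select, here is the same match logic fed from a canned process list instead of a live ps; the paths are made-up examples:

```shell
# lightdm ends in "dm" and matches the first pattern, slim matches the second,
# bash matches neither
matches=$(printf '%s\n' /usr/sbin/lightdm /usr/bin/bash /usr/bin/slim \
  | grep -e "^/.*dm$" -e "/.*slim$")
echo "$matches"
```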

linux software raid, raid levels, LVM, btrfs and Kali Linux

posted on 2015-05-02 16:27:18

preface and setup layout

After having installed a fresh system based on Kali Linux with software RAID and LVM, I had some fun. The setup consisted of four HDDs and partitions for /boot, /, /var, swap and some others for testing purposes, mostly btrfs; /boot was on ext2. The first two hard disks were designated as the system RAID, the second two were to be the data RAID.

The hard disks were plugged into the SATA ports 1 to 4 in the right order (ports can always be identified via the prints on the motherboard), which was a good idea, as we will see later. Out of habit I also took a photo of the partitioning scheme when I was done with the install, as this was a more complex setup. Both RAIDs were RAID1, nothing fancy.

Each of the RAID devices was in turn used as an LVM volume group, and each of the partitions mentioned above was a single logical volume. So /boot was an LVM partition on top of a software RAID.
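Spelled out, that layout would be created roughly as follows. This is a dry-run sketch where every command is only echoed; device names, VG names and sizes are assumptions, and running the real commands is destructive:

```shell
run() { echo "+ $*"; }    # dry-run wrapper; replace with the real commands deliberately
run mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # system RAID
run mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1  # data RAID
run pvcreate /dev/md0                      # each RAID becomes an LVM PV
run vgcreate vg_system /dev/md0            # ...and its own volume group
run lvcreate -L 512M -n boot vg_system     # /boot, later formatted as ext2
run lvcreate -L 20G  -n root vg_system     # /
```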

Well, I simply hoped this would boot after this setup was chosen. ;)

excursus on used RAID levels

On a sidenote: usually I only put RAID1 (mirrored) and RAID10 (striped sets of mirrors) levels to use. RAID5 allows one disk in the array to fail, RAID6 two. With current sizes of two, three or even four terabytes, and six-terabyte drives already shipping, just think of the amount of time needed to rebuild a RAID5 with 10TB; that should take quite a while, when two TB already take days to finish.

Considering most people do not mix hard disks but just take them one after another out of the box the mailman sent, these are very likely quite similar: same model, from the same production batch or time slot, with likely similar life expectancies. Rebuilds further take their toll on the hardware, as they impose an intense workload on the disks. Besides, in a RAID10 data is copied straight from one disk to its partner, whereas in a RAID5 ALL disks are read, plus parity has to be calculated. This fucks up the performance of the drives during rebuilds.

I do not feel good about a rebuild stressing the array over a time span of weeks until it finishes, while sitting on top of a degraded RAID5 during the process, where another failing disk means all is lost.

In a RAID5 the failure probability increases with each disk, as does the time to rebuild. RAID6 mitigates this somewhat, but just think of the time and work the rebuild takes. And if your data goes down the drain, think of what the customer will tell you when he's missing 20TB.

A RAID10 with two failed disks is already among my experiences; both went out in quick succession in that case, within like two days.

Lucky me, they were on different legs of the RAID0. So what did the situation feel like?

All data was backed up. The backups are actively being tested and thus working most of the time. All storage capacity summed up to just six TB. And these were only 2TB drives, which were synced within days, not within two weeks as it would have been with a RAID5/6 setup.

I still dread the memory, it was a Hypervisor for several customers. Brave new virtualized world.

You may ask: why no external storage? A DotHill or EMC storage array simply costs several thousand euros, so why not use an already existing 8-bay server with local storage? RAID1 for the system leaves six disks for data, which with two TB drives sums up to 6TB capacity, a nice use case for slightly aged hardware.

Besides, these setups can also be sold more easily, they are simply cost efficient. Plus you do not have two digit terabyte amounts of data to sync.

Here's a link from Jan 2014 to show the level of importance storage already had last year.

Some time after this posting I found some additional info on this, from someone I had never heard of:

zless /usr/share/doc/mdadm/RAID5_versus_RAID10.txt.gz

(You may have to have mdadm installed, though I do not know for sure.)

But I digress, back to the story.

boot failed, ofc

Booting the system afterwards failed with errors; the root partition was formatted as btrfs, too.

That the RAID status was not ok was a minor issue, as the RAID was just not synced yet. But this:


fsck: fsck.btrfs: not found
fsck: error2 while executing fsck.btrfs for /root/rundev
fsck: died with exit status 8
failed (code 8).

was really a problem.

Sadly, the missing btrfs-tools package was the culprit. This could be found out by having a look at the available fsck tools, seeing that no btrfs variant is present, and googling the problem. Google also helps with finding the right name of the package we have to install.


get to know the storage geometry

Reboot with a live disk; having written down / photographed my layout previously, I knew where to start.

Figure this out in case you have not watched your cabling or have no info on partitions or software RAIDs:

root@kali:~# lsblk
sda      8:0    0 233.8G  0 disk
`-sda1   8:1    0 232.9G  0 part
sdb      8:16   0 233.8G  0 disk
`-sdb1   8:17   0 232.9G  0 part
sdc      8:32   0 298.1G  0 disk
`-sdc1   8:33   0   298G  0 part
sdh      8:112  0   3.8G  0 disk
|-sdh1   8:113  0   2.9G  0 part /lib/live/mount/medium
`-sdh2   8:114  0  61.9M  0 part /media/Kali Live
sdi      8:128  0 372.6G  0 disk
`-sdi1   8:129  0   298G  0 part
sr0     11:0    1  1024M  0 rom
loop0    7:0    0   2.6G  1 loop /lib/live/mount/rootfs/filesystem.squashfs

Knowing we have two software RAIDs, sda1 and sdb1 seem to be related, as are sdc1 and sdi1. The actual device sizes don't help you much, as mixed hardware was used, something you'll also encounter out there in the wild; standard procedure.

This may lead to interesting situations:

Like at 4am in the night, with you of course being on call: you already pulled out and set up new hardware, and then you realize the system just won't boot. You can either restore too many terabytes from backup, or just get the system back in order. This is your problem at hand: 'GO! I cannot tell you anything, I have no clue of the setup either...'

Fun times. ;) But back to the broken install.

Mounting a RAID drive directly won't work:

root@kali:~# mkdir asdf
root@kali:~# mount /dev/sda1 asdf
mount: unknown filesystem type 'linux_raid_member'

mdadm helps:

root@kali:~# mdadm -E /dev/sda1
bash: mdadm: command not found

When it is installed, that is. After all, this is the live stick I used to set up the installation, so it must be somewhere:

root@kali:~# find / -iname mdadm
root@kali:~# /lib/live/mount/medium/pool/main/m/mdadm
bash: /lib/live/mount/medium/pool/main/m/mdadm: Is a directory
root@kali:~# ls -alh /lib/live/mount/medium/pool/main/m/mdadm
total 749k
dr-xr-xr-x 1 root root 2.0K Mar 12 18:26 .
dr-xr-xr-x 1 root root 2.0K Mar 12 18:26 ..
-r--r--r-- 1 root root 192K Mar 12 18:26 mdadm-udeb_3.2.5-5_i386.udeb
-r--r--r-- 1 root root 553K Mar 12 18:26 mdadm_3.2.5-5_i386.deb

Let's install the Debian package:

root@kali:~# dpkg -i /lib/live/mount/medium/pool/main/m/mdadm/mdadm_3.2.5-5_i386.deb

Now back to the problem:

root@kali:~# mdadm -E /dev/sda1
             Magic : a92b4efc
           Version : 1.2
       Feature Map : 0x0
        Array UUID : ab74df56:e0745791:d5cc011e:3792070a
              Name : vdr:0
     Creation Time : Sat May  2 15:32:29 2015
        Raid Level : raid1
      Raid Devices : 2

Available Dev Size : 488017920 (232.71 GiB 249.87 GB)
        Array Size : 244008768 (232.70 GiB 249.86 GB)
     Used Dev Size : 488017536 (232.70 GiB 249.86 GB)
       Data Offset : 262144 sectors
      Super Offset : 8 sectors
             State : active
       Device UUID : 6f44a60f:d035d2d9:643a3f9c:a5bb21ef

       Update Time : Sat May  2 16:24:42 2015
          Checksum : 5a0897b6 - correct
            Events : 12

       Device role : Active device 0
       Array State : AA ('A' == active, '.' == missing)

This certainly looks better. For fun, you can look up the others if you know your layout; if you don't, you will have to figure out the layout anyway.

Try this, copy-paste, it's the easiest way:

mdadm -E /dev/sd?? | grep -i -e /dev/ -e name -e device\ role -e raid\ devices -e state

Gives me this nice overview:

mdadm: No md superblock detected on /dev/sdh1.
              Name : vdr:0
      Raid Devices : 2
       Device role : Active device 0
       Array State : AA ('A' == active, '.' == missing)
              Name : vdr:0
      Raid Devices : 2
       Device role : Active device 1
       Array State : AA ('A' == active, '.' == missing)
              Name : vdr:1
      Raid Devices : 2
       Device role : Active device 0
       Array State : AA ('A' == active, '.' == missing)
              Name : vdr:1
      Raid Devices : 2
       Device role : Active device 1
       Array State : AA ('A' == active, '.' == missing)

Name is the array name, by the way, followed by the number of the array.
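If you want to pull the interesting fields out programmatically instead of eyeballing them, awk does the trick. A small sketch, run here against a captured snippet of the `mdadm -E` output from above (on a live system you would pipe `mdadm -E /dev/sdX1` into the awk part directly):

```shell
# extract array name and device role from saved `mdadm -E` output;
# the sample here-string is abridged from the dump above
sample='           Name : vdr:0
       Raid Devices : 2
        Device role : Active device 0'

overview=$(echo "$sample" | awk -F' : ' '/Name|Device role/ {print $2}')
echo "$overview"
```

This prints the array name first and the device role second, which is exactly the pairing information needed to re-assemble the raids.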

get the raid up so you can work on it

-A will assemble the raid, -R makes it available as soon as it has enough drives to run, -S stops it again. You can only assemble fitting parts anyway:

root@kali:~# mdadm -A -R /dev/md0 /dev/sda1 /dev/sdi1
mdadm: superblock on /dev/sdi1 doesn't match others - assembly aborted


root@kali:~# mdadm -A -R /dev/md0 /dev/sda1 /dev/sdb1
mdadm: /dev/md0 has been started with 2 drives.
root@kali:~# mdadm -A -R /dev/md1 /dev/sdc1 /dev/sdi1
mdadm: /dev/md1 has been started with 2 drives.

This is better. Now we have the raids back up:

root@kali:~# lsblk
sda      8:0    0 233.8G  0 disk
`-sda1   8:1    0 232.9G  0 part
  `-md0  9:0    0 232.7G  0 raid1
sdb      8:16   0 233.8G  0 disk
`-sdb1   8:17   0 232.9G  0 part
  `-md0  9:0    0 232.7G  0 raid1
sdc      8:32   0 298.1G  0 disk
`-sdc1   8:33   0   298G  0 part
  `-md1  9:1    0 232.7G  0 raid1
sdh      8:112  0   3.8G  0 disk
|-sdh1   8:113  0   2.9G  0 part /lib/live/mount/medium
`-sdh2   8:114  0  61.9M  0 part /media/Kali Live
sdi      8:128  0 372.6G  0 disk
`-sdi1   8:129  0   298G  0 part
  `-md1  9:1    0 232.7G  0 raid1
sr0     11:0    1  1024M  0 rom
loop0    7:0    0   2.6G  1 loop /lib/live/mount/rootfs/filesystem.squashfs

In my case, I'd only need the md0 device, as I know that root is on there. But this is handled as if we knew nothing about the system, for illustration purposes.

Now have a look at what pandora's box has in store for you:

root@kali:~# mkdir asdf0
root@kali:~# mount /dev/md0 asdf0
mount: unknown filesystem type 'LVM2_member'

Oh well. Deja vu.

get LVM back up so you can work on it

Get an overview with pvscan, vgscan and lvscan:

root@kali:~# pvscan
  PV /dev/md1   VG vg_data     lvm2 [297.96 GiB / 111.70 GiB free]
  PV /dev/md0   VG vg_system   lvm2 [232.70 GiB / 34.81 GiB free]
  Total: 2 [530.66 GiB] / in use: 2 [530.66 GiB] / in no VG: 0 [0   ]

root@kali:~# lvscan
  inactive          '/dev/vg_data/lv_data_var_backup' [93.13 GiB] inherit
  inactive          '/dev/vg_data/lv_data_var_nfs' [93.13 GiB] inherit
  inactive          '/dev/vg_system/lv_system_boot' [476.00 MiB] inherit
  inactive          '/dev/vg_system/lv_system_root' [46.56 GiB] inherit
  inactive          '/dev/vg_system/lv_system_var' [74.50 GiB] inherit
  inactive          '/dev/vg_system/lv_system_var_test' [74.50 GiB] inherit
  inactive          '/dev/vg_system/lv_system_swap' [1.86 GiB] inherit

For more information there are also pvdisplay, vgdisplay and lvdisplay, which are like this:

root@kali:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg_data/lv_data_var_backup
  LV Name                lv_data_var_backup
  VG Name                vg_data
  LV UUID                f0C3o2-XUB1-5xkq-om5W-w0Kh-YwcX-752gkE
  LV Write Access        read/write
  LV Creation host, time vdr, 2015-05-02 15:38:48 +0000
  LV Status              NOT available
  LV Size                93.13 GiB
  Current LE             23841
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---


There are also pvs, vgs and lvs, providing rather short output, so you have even more options to choose from.

In our case life is easy, since I have the habit of naming the logical volumes like

lv_<volume group suffix>_<mountpoint, slashes replaced by underscores>

so there is less to keep in mind and to mix up. Plus you know which LV is home to which mountpoint. As far as I know, no official naming conventions exist.
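Assuming that naming scheme, the mountpoint can be read right off the LV name. A toy illustration (the function name is made up for this sketch):

```shell
# derive the mountpoint from an LV named lv_<vg-suffix>_<path-with-underscores>
lv_to_mountpoint() {
    echo "$1" | sed -E 's/^lv_[a-z]+_//; s/_/\//g; s#^#/#'
}

lv_to_mountpoint lv_system_var        # /var
lv_to_mountpoint lv_data_var_backup   # /var/backup
```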

Above, all the logical volumes were marked as 'inactive', so we first have to activate them:

root@kali:~# vgchange -a y
    2 logical volume(s) in volume group "vg_data" now active
    5 logical volume(s) in volume group "vg_system" now active

root@kali:~# lvscan
  ACTIVE            '/dev/vg_data/lv_data_var_backup' [93.13 GiB] inherit
  ACTIVE            '/dev/vg_data/lv_data_var_nfs' [93.13 GiB] inherit
  ACTIVE            '/dev/vg_system/lv_system_boot' [476.00 MiB] inherit
  ACTIVE            '/dev/vg_system/lv_system_root' [46.56 GiB] inherit
  ACTIVE            '/dev/vg_system/lv_system_var' [74.50 GiB] inherit
  ACTIVE            '/dev/vg_system/lv_system_var_test' [74.50 GiB] inherit
  ACTIVE            '/dev/vg_system/lv_system_swap' [1.86 GiB] inherit

To disable them again, in case you need it: vgchange -a n.

These commands can also be applied to a single volume group by passing the VG name as a parameter, e.g. vgchange -a y vg_system.

mount and chroot into the installation to repair it

Now let's just mount the LVs needed to fix the install:

mkdir asdf-root
mount /dev/vg_system/lv_system_root asdf-root
chroot asdf-root

When trying to install the btrfs tools, another error occurs:

root@kali:/# apt-get install btrfs-tools -y
E: Could not open lock file /var/lib/dpkg/lock - open (2: No such file or directory)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?

Well, after a quick look at /var via ls -alh and seeing it was empty, it is clear we have to mount another LV.

Exit the chroot via exit, then mount the missing LV and chroot in again:

mount /dev/vg_system/lv_system_var asdf-root/var
chroot asdf-root

Now ls -alh /var shows us something, and we should be able to apt-get install btrfs-tools -y.

After an exit and a reboot, plus removing the boot stick, the system should work now.

If there were still problems, bind-mounting the virtual filesystems into the chroot might help. Since Kali is debian-based, see this:

mount -o bind /dev /mnt/rescue/dev
mount -o bind /dev/pts /mnt/rescue/dev/pts
mount -o bind /proc /mnt/rescue/proc
mount -o bind /run /mnt/rescue/run 
mount -o bind /sys /mnt/rescue/sys
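The five bind mounts can also be written as a loop. With the leading echo left in, the sketch below only prints what it would do, so you can review the commands before running them for real (as root, drop the echo):

```shell
# print the bind-mount commands for a chroot at /mnt/rescue;
# remove `echo` to actually execute them
cmds=$(for fs in dev dev/pts proc run sys; do
    echo mount -o bind "/$fs" "/mnt/rescue/$fs"
done)
echo "$cmds"
```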

another test

Reboot actually worked; the only problem was, after the Grub welcome message on top and before the grub menu:

error: fd0 read error.

... several times. Grub still boots, so this is not really an issue, not yet at least.

Grub can natively boot off raids, but I strongly suspect that if /dev/sda dies, the system will not boot, as it seems the bootloader is only installed on one disk.

grub and booting off software raid devices

Verifying this is easy: Turn off the computer and remove the SATA cable to the first hdd. Sure enough more things broke:

GRUB loading...
Welcome to GRUB!

error: out of partition.
Entering rescue mode...
grub rescue>

Awwww. To double-check, let's power off the machine, plug the first disk back in and pull the plug on the second disk, and sure enough, this time it worked.

So, fixing this is probably not as easy as the other stuff up to now?
You can google a lot and still not get good hits; the search results for this problem are a mess.

A solution I found, here: (Beware, it's in German.)

grub-mkdevicemap -n
grub-install /dev/sda
grub-install /dev/sdb

Maybe just installing grub to the second disk would have been enough after all, but sadly now this doesn't cut it either.

Looks like I should not trust debian-based installers any more than I trust redhat-based ones. (Which I absolutely don't, since any slightly more complex setup will fail in anaconda...)

final result: all is in vain

Now it seems, the problem can only be fixed with a manual reinstall, as there are several caveats when running a bootable software raid.

The RAID superblock, which contains all information on how the RAID is constructed and is written onto each member disk, is created as metadata version 1.2, which is placed at the head of the disk. This creates problems with grub.

When doing a manual install, mdadm even asks if metadata version 0.90 should be used for a bootable device. Oh well, fuck installers.

There will be another post coming, where the partitioning will be done by hand.

DNS: subzone delegation for a subdomain for a dynamic ip

posted on 2015-05-01 01:16:37

As a side project I wanted dynamic DNS, since seemingly all the free products out there went out of business, started to charge money, or developed other bad habits like forcing you to periodically log into the service, or your domain was simply turned off.

Since I already had a server plus a domain, my own DNS server was a nice idea. But changing the authoritative nameserver for a domain means having to update the domain's settings at the registrar, which I did not want:

  • The primary DNS server for the main domain should stay with my hoster.

This is due to the server being a playground: if something broke while the nameserver daemon ran on it, DNS would be out of order. Also I was kind of lazy about getting my hoster to change the settings, and where would the fun be in the easy way anyway?

After looking around for some time, I found out about subzone delegation, which needs some additional RRs / resource records in the config of the main domain, but no changes to the DNS server which is authoritative for it. Ain't that an idea? Just exactly what I needed.

So here is a little howto on how to implement this, with an external CentOS server with a fixed IP, a Fritzbox router with a raspberry pi behind it, and a domain hosted at an ISP / Internet Service Provider. The raspberry is running a raspbian install as its OS / Operating System. Strictly speaking, the raspberry is not really necessary, but better 'for reasons'.

Example values in the following are:

  • for the domain, pointing to
  • is the subzone, which will serve the dynamically changing ip.
  • the authoritative nameserver for the main domain is at
  •, where the main domain is pointed at, will also be the secondary DNS server which serves the dynamic domain as subzone of the main domain.
  • the mail address which is usually used is email@domain.tld

The dynamic ip will be denoted as 999.999.999.999 in the following.

change RR's of main domain at your hoster

Add these two:

IN NS
IN A

Don't forget to increment the serial number (the first in the list of numbers after the line of the SOA definition), else your setting will not become public!
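Bumping a date-based serial can be scripted so you never forget it. A minimal sketch, with the old serial hardcoded here just for illustration: use today's date plus a two-digit counter, falling back to old+1 if that would not be bigger.

```shell
# compute the next zone serial in YYYYMMDDNN style
old=2015042800
new=$(date +%Y%m%d)00
# if the date-based serial is not bigger (several edits per day), just add one
[ "$new" -gt "$old" ] || new=$((old + 1))
echo "$new"
```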

If you are lucky, you can add these two lines in a web interface that your domain hoster provides, else you have to tell the guys over there to change the settings for you.

install dns server on remote machine

On CentOS the bind9 dns server is referred to as named, the 'name daemon'.

yum install bind -y

domain configuration of subdomain, on remote CentOS server


; public zone master file
$TTL 1800
; provides minimal public visibility of external services
    IN      SOA    email.domain.tld. (
                   2015042800    ; se = serial number
                   10800         ; ref = refresh
                   1800          ; ret = update retry
                   604800        ; ex = expiry
                   1800          ; min = minimum
                   )
    IN NS
;; the next line is the domain name of the name server for the subzone
;; it should also have an FQDN, so you don't just pass the IP, but add a second A RR for the NS RR
    IN NS
    IN  A
    IN  A
    IN  A    999.999.999.999

Also, when doing changes here, you have to increment the serial as well so that the changes become known. Usually this number is YYYYMMDDNN, if I remember correctly. It does not really matter, it just has to be bigger after each change you make.

On the values of the other numbers following the serial, I don't exactly know what they do. I just remember someone telling me that RIPE would not be amused if TTLs were lower than 1800s (0.5h), so all of these are bigger than that.

On the line defining the SOA RR / start of authority resource record, the second name is the mail address of the zone admin, written with a . instead of the @.

Never forget the dots at the end of domain names when specifying absolute names (as opposed to just the string of how the subdomain is called); they denote the end of the domain name. Otherwise the current zone's domain will be appended and you will have a fun time figuring out why things do not work. NOT.
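To make the pitfall concrete, here is a hypothetical record inside the zone of domain.tld (the names are made up for illustration):

```
; with trailing dot: absolute, points exactly where it says
www  IN CNAME  server.other.tld.

; without trailing dot: relative, the zone origin gets appended,
; so this actually resolves to server.other.tld.domain.tld
www  IN CNAME  server.other.tld
```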

bind configuration for the subzone name server

On the external server, make sure bind listens on the public interface for DNS requests, so bind the listen port to the NIC with the external ip, too:


options {
        listen-on port 53 {;; };


Also add the zone entry for your subzone:

zone "" IN {
        type master;
        file "";
        allow-query { any; };

On a sidenote, the dns root folder is /var/named/, so the path you pass to the file directive is relative to it, as shown above.

Enable logging:

logging {
        channel default_debug {
                file "data/";
                severity dynamic;
        };
};

And also create the file if it does not exist:

touch /var/named/data/
chown -R named.named /var/named/data

With this configuration in place and a restart of the server, you should already be able to query your subzone via dig / nslookup / host. dig, the 'domain information groper', is the nicest one since it provides the most output, as long as you understand what you are doing. If you do, you know all this anyway. If it does not work, check:


  • Do you have a firewall in place?
  • Port 53 is open?
  • What does tail -f /var/named/data/ tell you, right when you connect?
  • If you have fail2ban, is your ip currently banned?
  • Try a tcpdump on the external server on the IF / interface which holds the external ip.
  • Have a look at the log where the dropped packets are logged on your system, if there's anything like that.

Try a debugging script like this (chmod a+x on the file may help):


## check if domains are globally available
## you can also ask the google DNS via @
## should print both ips, else you have something broken in your configuration...
## ... or it takes the internet some time to get to know your DNS, at most 30 minutes due to TTL 1800
dig +short
echo dyn
dig +short

## check if domains are available via your domainhosters nameserver
## this should only serve the main domain
dig +short
echo dyn
dig +short

## check if domains are available via your own nameserver
## this should only serve your subdomain in our setup
echo OWN NS
dig +short
echo dyn
dig +short

If all works as expected, congratulations. Else good luck troubleshooting this.

What is still missing now is the configuration so that the DNS will be updated once your dynamic ip changes.

updating the DNS once the dynamic ip has changed, in theory

Every 24 hours I get a forced disconnect from my telecommunications company; that's when my home ip changes. On the router a scheduled reconnect can be set so this happens at a known time; I set it to 4am.

Now on the update of the DNS for the dynamic domain:
This has to be done from a machine which is behind your router, or from your router. Usually you do not have a router with a fully fledged operating system, or you do not want to open it up from the outside of your network due to security reasons, this is why this is done via the raspberry behind it.

The raspberry installation and network configuration are skipped here; it is assumed that you have a working ssh client and server installed on it, and that your network works so you can access the internet (and your external server) from it.

Via curl you can easily get the external ip your router currently has; another possibility is to get it directly from your router somehow. The former is just way easier and will be used in the following.
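Whatever the ip echo service returns via curl should be sanity-checked before it goes anywhere near the zone; a service outage or captive portal can hand you back HTML instead of an address. A crude plausibility check, sketched with hypothetical inputs:

```shell
# crude ipv4 plausibility check for the output of an ip echo service
valid_ipv4() {
    echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

valid_ipv4 "93.184.216.34"       && echo "looks like an ip"
valid_ipv4 "<html>error</html>"  || echo "rejected"
```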

BIND nowadays has the nsupdate facility (since v8? v9?), which lets you update the DNS remotely. Doing it via shell scripts and SSH will not work, as the zonefile will be locked. Running scripts as root via SUID will not work either, as the OS ignores the SUID bit on scripts for security reasons.

A workaround would be a compiled C binary wrapper for the bash script, but just because it works does not mean you have to use it. Stick with nsupdate.

dns update in actual practice

create keypair

On the machine from where you want to update the DNS, you have to create a keypair. Use a valid email, with a . instead of an @.

dnssec-keygen -a HMAC-MD5 -b 512 -n USER email.domain.tld

make the key known to BIND

Put the public part onto your dns server and integrate it into BIND. This is easiest and cleanest done as follows, after scp'ing the pubkey up onto your server and into /etc/named/:

Insert into /etc/named.conf:

include "/etc/named.keys.conf";

Create /etc/named.keys.conf, and insert:

key email.domain.tld {
    algorithm HMAC-MD5;
    secret "insert-the-base64-secret-from-your-generated-key-file-here";
};

You might try using another algorithm, as several others are available; just make sure the algorithm here matches the one used with dnssec-keygen. I am not sure the setup will work otherwise.

configure management rights for the new key on the nameserver

The key could be given full access, which I did not need, so it only got partial access:

/etc/named.conf, add to zone:

update-policy {
    grant email.domain.tld subdomain A;
};

The part after grant is the key name. At a high level the syntax is:

grant <key> <type> <zone> <RR> [<RR>];
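Filled in for this post's setup it could look like the following; the subzone name dyn.domain.tld is a stand-in here, since the real names are not shown in this post:

```
update-policy {
    // the key may only change A records at or below dyn.domain.tld
    grant email.domain.tld subdomain dyn.domain.tld A;
};
```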

Restart the nameserver; this might be unnecessary, but better safe than sorry.

configure the update script, the helper file plus the cronjob on the updating host

Three things are needed:

  1. the actual script, which is run through the cron job
  2. a second file with the dns statements which nsupdate will execute
  3. the cronjob, so stuff is actually run in the end

On the raspberry, for simplified reasons this is done as the root user:

mkdir /root/bin
touch /root/bin/
chmod a+x /root/bin/
touch /root/bin/dns-update.statements

Contents of dns-update.statements:

update delete A
update add 1800 A
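For reference, a complete hypothetical statements file could look like this; the hostname dyn.domain.tld is a placeholder for whatever your subzone records are really called. The show line prints the assembled update message for debugging, and nsupdate also sends on end of input, so send is optional but explicit:

```
server ns.domain.tld
zone dyn.domain.tld
update delete dyn.domain.tld A
update add dyn.domain.tld 1800 A 999.999.999.999
show
send
```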


DNS="$(dig +short)"
## next two lines used for testing
#echo $CURRENT
#echo $DNS
if [ "$CURRENT" == "$DNS" ]; then exit 0; 
    /bin/sed -i "s/\(update add 1800 A\).*/\1 $CURRENT/" /root/bin/dns-update.statements
    /usr/bin/nsupdate -k /etc/dns/Kemail.domain.tld.+157+26336.private -v /root/bin/dns-update.statements

When it's laid out like this, it should be obvious where you have to apply changes for your setup:

  • after the -k flag, where your private key's name has to be entered
  • generally where mydomain is in use

Take special care that the sed command actually works; remember to adjust it to match your 'statements' file, too.
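One way to verify the sed expression is to dry-run it against a scratch copy first. The hostname dyn below is a placeholder for whatever your statements file actually contains:

```shell
# dry-run the substitution on a scratch file before touching the real one
tmp=$(mktemp)
printf 'update delete dyn A\nupdate add dyn 1800 A 1.2.3.4\n' > "$tmp"

CURRENT=5.6.7.8
sed -i "s/\(update add dyn 1800 A\).*/\1 $CURRENT/" "$tmp"

result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```

If the second line now carries the new ip, the expression is safe to point at the real file.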

Actual testing took place by adding echo's in every branch of the if statement, and running it every 5 seconds via watch:

watch -n5 -d /root/bin/

That way I could identify errors easily. Once the update works, nsupdate will tell you when the ip gets changed. No need to restart or reload the BIND server.

If all is working as expected, remove the show from the statements file, we just needed it during testing.

Also add the cron in /etc/crontab:

*/15 * * * * root /root/bin/

Afterwards service cron restart, and you should have an updated DNS tomorrow and the day after tomorrow. And the following days, of course. :)

The cron job checks every 15 minutes whether the ip has changed. Usually it would suffice to run the check when the router reconnects.

But what about power outages? Router resets because somebody had to use the power outlet for the vacuum cleaner?
Just kidding, but it actually makes sense to update this periodically.

For questions I can be reached via twitter, see link on top of the site.

emacs: remote editing of files

posted on 2015-05-01 01:00:25

emacs, just like vim, comes with the possibility to open remote files within your editor. The usual syntax is this:


From the emacs manual:

  1. If the host name starts with ftp. (with dot), Emacs uses FTP.
  2. If the user name is ftp or anonymous, Emacs uses FTP.
  3. If the variable tramp-default-method is set to ftp, Emacs uses FTP.
  4. If ssh-agent is running, Emacs uses scp.
  5. Otherwise, Emacs uses ssh.
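For reference, the general Tramp file name syntax (from the Emacs manual) is /method:user@host:filename, so an explicit ssh-based open looks like this (user, host and path are placeholders):

```
C-x C-f /ssh:user@host:/path/to/file
```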

Usually this is what you want, since it just works:


Visual Studio Code

posted on 2015-05-01 00:35:00

Microsoft released a preview of the free Code edition of Visual Studio. I am not exactly a Microsoft fanboi, so why do I bother mentioning it?

  • Windows. Mac. Linux.
  • Free.
  • Clean design.
  • Debugger.
  • Extensible.
  • Nice shortcuts.
  • git included, no other version control.

and, not to forget:

  • LINUX, natively, and from Microsoft!!!

Things seem to change, so this just sparked my interest. We might live in really interesting times.

They put in quite some effort to create something easily usable, which looks good and can be used to get work done. (Unlike regular Visual Studio, unless you have known it for years; then you are a god even with crappy languages, because of this weapon of an IDE.)

So it really is worth a look.

Get it here.


Unless otherwise credited all material Creative Commons License by sjas