Posts tagged linux

apt cheatsheet
posted on 2017-01-23 21:31

As short as possible:

apt-cache search = search for package (old)
apt-cache show = show package information
dpkg -l = show installed packages
dpkg -L = show package contents
dpkg -S = search packages for file
apt-get install = install package (old)
apt-get remove = uninstall package, leave configuration on disk (old)
apt-get purge = uninstall package, delete configs (old)

apt search = search for package (new)
apt install = install package (new)
apt remove = uninstall package, leave configuration on disk (new)
apt purge = uninstall package, delete configs (new)

mysql describe all tables from database
posted on 2017-01-23 12:51

This can be used directly in bash:

DB=your_database_name_here; for i in $(mysql $DB -Ne 'show tables' | cat); do echo; echo $i; mysql $DB -te "describe $i"; done

Just adjust the database name.

stop proxmox nagware
posted on 2017-01-05 05:07

This is said to fix the proxmox 'no valid license' dialog box which appears when you log in to the web interface and do not have a valid subscription:

find /usr/share/pve-manager -name '*.js' -exec sed -i 's/PVE.Utils.checked_command(function\s*()\s*{\s*\(.*\)\s*}\s*)\s*;\s*/\1/g' {} \;

TODO: I haven't tested it yet; the post will be updated once I can tell more.

debian add another loopback address
posted on 2017-01-04 15:40

Add to /etc/network/interfaces:

auto lo:1
iface lo:1 inet static
address 127.0.0.2
netmask 255.0.0.0

then

ifup lo:1

and ip a should show you the new ip being live.
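
A quick check, assuming the alias came up (address taken from the config above):

ip a show lo         # lo should now list 127.0.0.2 as a second address
ping -c 1 127.0.0.2  # the new address should answer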

gitolite install
posted on 2017-01-02 22:37

A fast walkthrough for a proper gitolite server setup, since the current debian package is either borked, or I just need sleep. Keep in mind this was written on the fly and may have errors.

assumptions

  • this will use the user git (hope it's not used already)
  • put the files in /var/lib/gitolite
  • use the latest gitolite
  • GITSERVER: ip or domain name or /etc/hosts alias of your git server
  • debian was used, adapt accordingly if you use redhat derivatives or (god help) suse

setup and install

On the server: (as root)

apt install git -y
mkdir -p /var/lib/gitolite/bin
useradd -d /var/lib/gitolite/ -U -s /bin/bash git
passwd git
ssh-keygen -trsa -b4096
cp /root/.ssh/id_rsa.pub /var/lib/gitolite/admin.pub
chown -R git:git /var/lib/gitolite

su - git

cat << EOF > .bash_profile
alias l='ls -alh --color'
export PATH=/var/lib/gitolite/bin:\$PATH
EOF
echo $PATH  ## gitolite path missing
logout
su - git
echo $PATH  ## gitolite path not missing anymore, and 'l' works, too

git clone git://github.com/sitaramc/gitolite
gitolite/install -ln /var/lib/gitolite/bin
gitolite setup -pk admin.pub
logout
cd

git clone git@localhost:gitolite-admin
cd gitolite-admin

Now we're mostly set, but the 'testing.git' repo is not needed, so let's just delete it. This is also a showcase of how to use the admin repo on the server, in case you manage to fuck up your workstation or ssh key, which we will set up later:

vim conf/gitolite.conf  ## remove 'repo testing' line and the one following it
git add -A .
git commit -m '-testing repo'
git push

In case the 'git config' nagging is annoying:

git config --global user.name root
git config --global user.email root@GITSERVER
git config --global push.default simple  ## adopting the default behaviour is usually the way to go

So far, so good.

on deleting repositories

Repositories that were removed from the gitolite config will still exist on disk under /var/lib/gitolite/repositories:

git@git-1:/var/lib/gitolite/$ gitolite list-repos
gitolite-admin
git@git-1:/var/lib/gitolite/$ gitolite list-phy-repos
gitolite-admin
testing

If you want it to be gone, simply delete the repo folder on disk.
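
For the leftover 'testing' repo from above that would be something like:

rm -rf /var/lib/gitolite/repositories/testing.git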

adding your workstation key to gitolite, too?

Likely you want ssh access to root via key (you did disable password logins for root in ssh, didn't you?), so let's set this up and put the key into gitolite, too. I'll provide an example; my user is called 'sjas', of course.

On my workstation:

ssh-copy-id root@GITSERVER  ## in case you didn't do that already
scp ~/.ssh/id_rsa.pub root@GITSERVER:/root/gitolite-admin/keydir/sjas.pub
ssh root@GITSERVER
cd gitolite-admin

# ... now edit gitolite config... 
# ... see next section how I prefer doing things ...

git add -A .
git commit -m '+workstation key'
git push

splitting the gitolite.conf and groups

I prefer having two files, one for the group definitions, one for repositories. Here is how these files would look:

root@git-1:~/gitolite-admin/conf# tail -n +1 *
==> gitolite.conf <==
include "groups.conf"
include "repos.conf"

==> groups.conf <==
@sjas   = sjas
@admins = @sjas admin

==> repos.conf <==
repo    gitolite-admin
    RW+ = @admins admin
repo    ansible
    RW+ = @sjas

The @'s depict groups. You can actually group users into user groups and repositories into repository groups, in case you'd ever need that.

Comments also work, via #.

Just remember to define a group before you ever use it, and in group definitions to put group names first and user names after, on the right side of the equals sign.

For more about this, see the official documentation. There's way more you can do, but this should be the bare minimum for most work you'd ever need to do.

The official documentation looks rather sketchy at first, but is pretty good and all you need is covered there.

clustershell
posted on 2016-12-29 13:26

When needing to run commands on several servers over ssh, there's always that for-loop for you.

But you could also try running clustershell:

sjas@ws:~$ clush -w server-[01,02,05,11,12] -b hostname -f
---------------
server-01
---------------
server-01.some-domain.com
---------------
server-02
---------------
server-02.some-domain.com
---------------
server-05
---------------
server-05.some-domain.com
---------------
server-11
---------------
server-11.some-domain.com
---------------
server-12
---------------
server-12.some-domain.com

-b is used to run non-interactively and to get the shown aggregated results (the hostnames are colored), -w specifies the hosts. Use [ ] instead of { } like you would in bash.

-B also includes STDERR.

A problem you may run into is running commands that contain pipes: quote the whole command string, or the pipe is applied locally to clush's output instead of on the remote hosts.
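
A minimal sketch with the hostnames from above; the quotes make the pipe run remotely:

clush -w server-[01,02] -b 'dmesg | tail -n 3'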

Further you can also predefine hostgroups and copy files from/to remote hosts. This is a rather nice tool.

apache webdav configuration
posted on 2016-12-29 10:27

notes up front

  • don't use suexec. just don't.
  • you should be able to configure a vhost on your own, else the apache config will not be of use to you
  • we'll use ssl certificates, too

setup

Load the apache modules:

a2enmod dav
a2enmod dav_fs

Create certificate:

cd /etc/apache2/ssl
openssl genrsa -out mydomain.de.key 1024  ## create private key
openssl req -new -key mydomain.de.key -out mydomain.de.csr  ## create certificate signing request
openssl x509 -in mydomain.de.csr  -out mydomain.de.crt -req -signkey mydomain.de.key -days 3650  ## create certificate
rm mydomain.de.csr 

Create a vhost config for your apache, and enable it:

<VirtualHost *:80>

        ServerName mydomain.de
        ServerAlias www.mydomain.de

        DocumentRoot /var/www/mydomain.de/htdocs

        <Directory /var/www/mydomain.de/htdocs/>
                Options Indexes MultiViews
                AllowOverride None
                Order allow,deny
                allow from all

                DAV on

                AuthType Basic
                AuthName DAV
                AuthUserFile /var/www/mydomain.de/.htpasswd
                Require valid-user
        </Directory>

        ErrorLog /var/www/mydomain.de/logs/error.log
        LogLevel warn
        CustomLog /var/www/mydomain.de/logs/access.log combined

</VirtualHost>

<VirtualHost *:443>

        ServerName mydomain.de
        ServerAlias www.mydomain.de

        DocumentRoot /var/www/mydomain.de/htdocs

        <Directory /var/www/mydomain.de/htdocs/>
                Options Indexes MultiViews
                AllowOverride None
                Order allow,deny
                allow from all

                DAV on

                AuthType Basic
                AuthName DAV
                AuthUserFile /var/www/mydomain.de/.htpasswd
                Require valid-user
        </Directory>

        ErrorLog /var/www/mydomain.de/logs/error.log
        LogLevel warn
        CustomLog /var/www/mydomain.de/logs/access.log combined

        SSLEngine on
        SSLCertificateFile      /etc/apache2/ssl/mydomain.de.crt
        SSLCertificateKeyFile   /etc/apache2/ssl/mydomain.de.key

</VirtualHost>

Create a htpasswd file:

htpasswd -c /var/www/mydomain.de/.htpasswd USERNAME

USERNAME and the password you'll enter will be your access credentials.

testing (on linux)

apt install -y cadaver
cadaver https://mydomain.de

Then you are prompted for the user credentials. ls and help should get you onwards from there.
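
If you prefer curl over cadaver, a quick PROPFIND also works (-k because of the self-signed certificate, USERNAME from the htpasswd step above):

curl -k -u USERNAME -X PROPFIND -H "Depth: 1" https://mydomain.de/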

using it with windows

Since this is a setup with SSL (it doesn't make much sense to use plain http from my POV), you'll need to import the certificate (mydomain.de.crt) in windows.

Else you will get an error along the lines of Mutual Authentication Failed. The Server's Password Is Out Of Date At The Domain Controller.

It needs to go here (in windows):

  • win + r
  • certmgr.msc
  • enter
  • trusted root certificates
  • certificates

plesk onyx phpioncube install
posted on 2016-12-13 14:35

install php7 in plesk

plesk sbin autoinstaller --select-product-id plesk --select-release-current  --install-component php7.0

get files

http://www.ioncube.com/loaders.php

Copy it to the server, and tar xzvf it.

copy file

cd ioncube
cp ioncube_loader_lin_7.0.so /opt/plesk/php/7.0/lib/php/modules/

link it with php

Into /opt/plesk/php/7.0/etc/php.ini put this:

zend_extension=ioncube_loader_lin_7.0.so

(Somewhere to the other zend options.)

reload php

  plesk bin php_handler --reread

test

/opt/plesk/php/7.0/bin/php -v

This will show you ioncube php loader (enabled) ... so it actually works.

Now don't use the OS php version (i.e. if you already have php 7 available from ubuntu 16.04), but the plesk one from the dropdown menu in the php settings of your hosting.

bonus

If you cannot upload zip files, install php-zip from your OS's package management. (apt install php-zip -y)

postfixadmin update and php7
posted on 2016-12-09 08:52

After a dist upgrade from ubuntu 14.04 to 16.04 and updating to php7 the postfixadmin in version 2.3.7 stopped working due to php5 modules naturally being amiss. These were the steps I took so the upgrade to postfixadmin 3.0 worked. The major roadblocks were the changed php modules and updating the mysql tables during the update.

  • mkdir /root/postfixadmin-backup; cd /root/postfixadmin-backup
  • mysqldump <POSTFIX-DB-NAME> > <POSTFIX-DB-NAME>.sql so I'd have a database backup.
  • Copied the old htdocs to /root/htdocs, to have a webdata backup in case I'd fuck up renaming the htdocs later.
  • cd /path/to/postfixadmin/webroot
  • The apache setup was already fine, so I renamed the old htdocs (docroot), created a new one and extracted the newly downloaded postfixadmin there.
  • Copied the old config.inc.php over into the new docroot, named config.local.php.disabled so it would not get read upon opening the postfixadmin webinterface.
  • Copied the new config.inc.php to config.local.php.
  • diff config.local.php.disabled config.inc.php to show me the differences to the new installation.
  • Adjusted the database settings and the other settings I wanted to preserve.
  • Browser: https://domain.of.postfixadmin/setup.php

Then it showed:

...

DEBUG INFORMATION:
Invalid query: Invalid default value for 'created'

...

Upon using https://domain.of.postfixadmin/setup.php?debug=1 I found out that this was related to the vacation table in the database. To be exact, the line shown by describe vacation at the mysql client prompt:

| created       | datetime     | NO   |     | 0000-00-00 00:00:00 |       |

The ALTER TABLE statement could not run the change due to mysql's strict mode being active, and a date of 0000-00-00 00:00:00 being forbidden.

At first I tried to alter the table to something like 1970-01-01 01:01:01 but that wouldn't work.

mysql's strict mode is controlled by the variable sql_mode:

mysql> show variables like 'sql_mode'\G
*************************** 1. row ***************************
Variable_name: sql_mode
        Value: ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
1 row in set (0.00 sec)
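
Alternatively, strict mode can be switched off at runtime without a restart (I went the config file route below):

set global sql_mode = '';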

To set this variable upon mysql's starting process, I edited /etc/mysql/my.cnf and added under the [mysqld] section:

sql_mode = ''

Then service mysql stop, service mysql start and upon reopening https://domain.of.postfixadmin/setup.php the setup went through and all was well again in postfix-land.

I removed the sql_mode parameter from /etc/mysql/my.cnf again, service mysql stop, service mysql start and I was free to go off to other ventures again.

proxmox magic fix script
posted on 2016-12-05 14:52

The following script often gets handed out in ##proxmox on FreeNode:

#!/bin/bash

# on all nodes
magicfix() {
        service pve-cluster stop
        service pvedaemon stop
        service cman stop

        service pve-cluster start
        service cman start
        service pvedaemon start

        # this one could possibly restart VMs in 4.x (but doesn't in 3.x), so disable unless you think you need it
        #service pve-manager restart

        service pvestatd restart
        service pveproxy restart
        service pve-firewall restart
        service pvefw-logger restart
}
magicfix

# again after above was done on all nodes (makes /etc/pve rw)
service pve-cluster restart

strongswan ipsec vpn site to site
posted on 2016-12-02 09:42

This guide was written for debian 8.

network layout

local/left       lan: 192.168.0.0/16
local/left   gateway: 10.0.0.2
remote/right gateway: 10.0.0.3
remote/right     lan: 172.16.0.0/16

Our network, expressed differently:

192.168.0.0/16 --- unencrypted --- 10.0.0.2 === vpn === 10.0.0.3 --- unencrypted --- 172.16.0.0/16

In strongswan it doesn't matter which side is defined in either left or right, but this convention helps:

  • local = left
  • remote = right

ipsec settings for the tunnel

These are somewhat arbitrary, but we have to use something:

phase1:
ikev1 / aes256 / sha2 / dh5 / 86400s (24h instead of 8h)

phase2:
esp / aes256 / sha2 / dh5 / 3600s

( protocol / encryption / hashing / DH group or PFS if present / lifetime )

install

apt-get install strongswan libcharon-extra-plugins

define PSK

Add to /etc/ipsec.secrets:

10.0.0.2 10.0.0.3 : PSK "thatsmydamnsecretPSKwhichreallyshouldbearandomstring"

setup tunnel

/etc/ipsec.conf:

config setup

conn %default
    keyexchange=ikev1
    keyingtries=%forever
    leftauth=psk
    rightauth=psk
    auto=start

conn myconfig-main
    left=10.0.0.2
    ike=aes256-sha256-modp1536
    ikelifetime=86400s
    esp=aes256-sha256-modp1536
    lifetime=3600s

conn myconfig1
    right=10.0.0.3
    leftsubnet=192.168.0.0/16
    rightsubnet=172.16.0.0/16
    also=myconfig-main

include /var/lib/strongswan/ipsec.conf.inc

That way you can add additional phase2 entries analogous to conn myconfig1.
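
For example, a second phase2 entry for another subnet pair (the 172.17.0.0/16 remote subnet is purely made up) would just be one more block:

conn myconfig2
    right=10.0.0.3
    leftsubnet=192.168.0.0/16
    rightsubnet=172.17.0.0/16
    also=myconfig-main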

%default is valid for everything; myconfig-main is pulled into the other connection definitions via also=myconfig-main.

test

service ipsec restart

These might help:

tail -f /var/log/syslog
watch -n1 -d ipsec statusall

From within your lan, ping a host inside the remote lan.

For watching the pings, the ones you want to see will be colored:

tcpdump -D # discern the interface you need to have a look at, usually eth0 / 1
tcpdump -nli 1 icmp | grep --color -e $ -e 192.168.

Routing rules are automatically added by strongswan; do service ipsec restart while watching:

watch -n1 -d "ip ru; ip r l t 220"

terminator apprentice color scheme
posted on 2016-11-30 17:38

Using terminator, colorschemes were always somewhat an issue for me, until I found Apprentice.

The terminator version of the scheme I found online.

To have something I can copy-paste (with my settings), here's some kind of documentation:

$ cat .config/terminator/config
[global_config]
  enabled_plugins = InactivityWatch, ActivityWatch, TerminalShot, Logger
  title_transmit_fg_color = "#bcbcbc"
  title_inactive_fg_color = "#bcbcbc"
  suppress_multiple_term_dialog = True
  title_transmit_bg_color = "#1c1c1c"
  title_inactive_bg_color = "#444444"
[keybindings]
  go_up = <Primary><Shift>k
  broadcast_group = <Primary><Shift>F10
  next_tab = <Primary>Tab
  prev_tab = <Primary><Shift>Tab
  broadcast_all = <Primary><Shift>F8
  go_down = <Primary><Shift>j
  go_right = <Primary><Shift>l
  broadcast_off = <Primary><Shift>F9
  go_left = <Primary><Shift>h
  group_all = <Primary><Shift>F8
  edit_window_title = <Primary><Shift>F11
[profiles]
  [[default]]
    palette = "#1c1c1c:#af5f5f:#5f875f:#87875f:#5f87af:#5f5f87:#5f8787:#6c6c6c:#444444:#ff8700:#87af87:#ffffaf:#8fafd7:#8787af:#5fafaf:#ffffff"
    visible_bell = True
    background_darkness = 0.73
    urgent_bell = True
    cursor_shape = underline
    background_image = None
    cursor_color = "#39ff35"
    foreground_color = "#bcbcbc"
    scroll_on_output = False
    font = Monospace 6
    background_color = "#262626"
    audible_bell = True
    scrollback_infinite = True
[layouts]
  [[default]]
    [[[child1]]]
      type = Terminal
      parent = window0
    [[[window0]]]
      type = Window
      parent = ""
[plugins]

apache redirect not-existing urls to homepage
posted on 2016-11-22 18:50

RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ / [L,QSA]

proxmox vzdump to stdout
posted on 2016-11-21 13:30

Pipe a vzdump directly to STDOUT:

vzdump <VMID> --dumpdir /tmp --mode snapshot --stdout 

In /tmp the config will be dumped, but the dump itself will not be saved to disk, so it can easily be piped to nc.
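
A minimal sketch (VMID 100, the target host and port 2222 are placeholders; nc flags differ between netcat flavors):

# on the receiving host
nc -l -p 2222 > vzdump-100.dump
# on the proxmox node
vzdump 100 --dumpdir /tmp --mode snapshot --stdout | nc TARGETHOST 2222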

linux shell number converters
posted on 2016-11-19 15:26

These are interactive prompts for converting between the different number formats and decimal, and back.

# hex-dec
h2d() {
    echo
    echo TO DEC, ctrl+c to end
    echo
    while :
    do
        read -p "hex> " i
        echo "ibase=16; $i" | bc
        echo
    done
}
d2h() {
    echo
    echo TO HEX, ctrl+c to end
    echo
    while :
    do
        read -p "dec> " i
        echo "obase=16; $i" | bc 
        echo
    done
}

# oct-dec
o2d() {
    echo
    echo TO DEC, ctrl+c to end
    echo
    while :
    do
        read -p "hex> " i
        echo "ibase=8; $i" | bc
        echo
    done
}
d2o() {
    echo
    echo TO OCT, ctrl+c to end
    echo
    while :
    do
        read -p "dec> " i
        echo "obase=8; $i" | bc 
        echo
    done
}

# bin-dec
b2d() {
    echo
    echo TO DEC, ctrl+c to end
    echo
    while :
    do
        read -p "bin> " i
        echo "ibase=2; $i" | bc
        echo
    done
}
d2b() {
    echo
    echo TO BIN, ctrl+c to end
    echo
    while :
    do
        read -p "dec> " i
        echo "obase=2; $i" | bc 
        echo
    done
}

Put these into your ~/.bashrc.
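
Usage then looks like this:

$ d2h

TO HEX, ctrl+c to end

dec> 255
FF

dec> ^C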

Enjoy.

highlevel overview how to change partition sizes
posted on 2016-11-18 18:54

These are some rough notes for a colleague of mine, on how to make more swapspace available and how to resize partitions in general. The workflow highly depends on the previously existing layout. Here's a shot at a manual on how to approach this.

disclaimer

This is mostly written from memory, so bear with me if you stumble upon errors. No guarantees for nothing below this line.

do you use XFS? or do you REALLY need another swap partition, when you don't have unpartitioned space?

In case you need to resize a partition because you do not have unpartitioned space available, you cannot enlarge the swap partition or add a second one if you cannot shrink the filesystem on the partition you want to shrink (I know that's the case with XFS, which cannot be shrunk). Shrinking partitions is more like deleting the currently available partition and recreating it, only smaller. (Linux lets you do that even if you make the partition smaller than the filesystem that should fit in there, rendering the system unbootable. Don't worry, it can be fixed by recreating the old partition schema, so back up that information well.)

If the above is the case, you need to create a swapfile and use that. Of course you need enough free space on the filesystem for it. There should not be a speed difference from what I heard (and honestly I am too lazy to test that). So: create the file with dd, run mkswap on it and fix /etc/fstab.
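
A minimal sketch of that (the 4g size and the file location are just assumptions, adjust to taste):

dd if=/dev/zero of=/swapfile bs=1M count=4096    # 4g file full of zeroes
chmod 600 /swapfile                              # swap should not be world-readable
mkswap /swapfile                                 # format it as swap space
swapon /swapfile                                 # enable it right away
echo '/swapfile none swap sw 0 0' >> /etc/fstab  # and keep it across reboots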

Enough tutorials are on google, this approach is the easiest, hands down.

But let's go on.

how does the system boot: does it use BIOS or UEFI?

  • BIOS -> can work with either a MBR or a GPT
  • UEFI -> needs a GPT, using a MBR won't work

Also UEFI needs an EFI system partition. Basically:

  • first partition is like 300m in size
  • with a fat32 file system
  • has boot and esp flags (this is the EFI system partition, ESP)
  • is mounted likely to /boot/efi in your linux installation

The rest is as usual, like you can have a separate boot partition housing the /boot mountpoint, or just using another large partition for / and everything else directly.

how is the partitioning info saved

  • MBR -> 4 primary partitions are maximum, or use the 4th one as an extended partition, which points to further partitioning info somewhere else.

That's also the reason why you might have /dev/sda1, /dev/sda2, /dev/sda5 after a fresh install.

  • sda1 = primary partition
  • sda2 = primary used as extended partition
  • sda5 = first logical partition

The MBR is located in the first sector of a harddisk and is 512 bytes in size. During the boot process the boot code executed by the BIOS scans all disks in the hope of finding an MBR or GPT. Due to the MBR's structure it can only store the information for four partition entries. Information for partitions of type 'primary' is stored directly in the MBR. Partitions cannot be larger than 2t; if you need that you either have to use a GPT instead or build a logical volume via LVM out of several MBR partitions. (Ok, in that case go for GPT...)

An extended partition points to another partitioning table in a VBR. That's like an MBR, but without boot code, located in the first sector of the partition depicted in the MBR.

  • GPT -> all partitions are created equal (haha), but you need the EFI system partition (see above) or, for BIOS booting with grub, a small bios boot partition, so it can work.

You can delete the partitions as you please, and the table is automatically backed up to the end of the disk. It's 33 logical blocks in size (like 33 * 512b or 33 * 4k of disk space, depending on block size), and uses the first 33 and the last 33 blocks of the disk. (In comparison to the MBR, which uses only the first block of a disk.)

Maximum size is about 8 zebibytes or roughly 9 zettabytes, which should cover your storage needs with five nines of probability.

Keep that in mind when you want to use a sofware raid and the raid superblock shall be stored at the end of the disk, depending on the version of the software raid metadata.

backup your partitioning info!!!

Resizing partitions is more or less just deleting a partition and recreating it with a different size. This can fail, rendering the system unbootable, when the partition ends up smaller than the filesystem it shall contain. This can be fixed by deleting the partitioning info for the partition in question and recreating it bigger again.

Nothing is destroyed here, unless you start recreating filesystems on your newly created partitions, keep that in mind prior to panicking.

Partition infos are just pointers to the start and end of a partition, so the kernel knows where to look for filesystems, relative to the start of the disk.

Also note the absolute sizes. Sectors are best; bytes work, too.

Copy the output of the commands below into a text editor and save it somewhere (when working over ssh) or use your smartphone camera to make a picture. Of course pen and paper work, too, but don't do anything without this information backed up. SERIOUSLY!

These will give the partition boundaries in sectors or bytes. I prefer sectors.

parted /dev/sdX u s p

and

parted /dev/sdX u b p

Don't read on unless you did this. If you still do and fuck up, you can try testdisk, but this will not work with more complex setups. From my experience, testdisk only works with like a 60% chance.

highlevel overview for the general approach, the shrink and resize operations

You can only use continuous space for creating new partitions.

I.e. if you have like a 1g swap partition which you want to enlarge, followed by a 100g root partition, you can shrink the root partition, but the new unallocated space will be located at the end of the disk.

Since that free space is not adjacent to the swap partition, you cannot simply grow it; use LVM instead (see the sketch after this list):

  • Create new partition in the unallocated space.
  • Create physical volumes on the old swap partition and the newly created one.
  • Add both to a volume group.
  • Create a new logical volume using the fully available space in that volume group.
  • Use the new LV as swap.
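
A minimal sketch of those steps, with hypothetical device names (/dev/sda1 as the old swap partition, /dev/sda3 as the new one, vgswap as the new volume group):

swapoff -a                            # stop using the old swap partition first
wipefs -a /dev/sda1                   # clear the old swap signature
pvcreate /dev/sda1 /dev/sda3          # physical volumes on both partitions
vgcreate vgswap /dev/sda1 /dev/sda3   # join them in one volume group
lvcreate -l 100%FREE -n swap vgswap   # one logical volume spanning everything
mkswap /dev/mapper/vgswap-swap
swapon /dev/mapper/vgswap-swap        # and fix /etc/fstab afterwards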

shrink

This would be the work without LVM being used:

  • Reboot, and boot from a livedisk like grml.
  • To shrink, start with the innermost part, the filesystem.
  • Shrink filesystem via resize2fs. Either to a particular size, or with the -m flag to the minimum size. This may take time.
  • Delete the partition of the filesystem you resized.
  • Recreate the partition, but larger than the filesystem. To be on the safe side, create it like 1g bigger than the filesystem, since calculating that exactly is annoying due to 1000 vs. 1024 base discrepancies.
  • You may also delete partitions you don't need anymore. If you do that, fix /etc/fstab, else no boot for you.
  • Reboot and see if the system still works as you need it to.
  • If it doesn't, look up your backup information from above, and recreate the boot and/or root partitions properly.
  • If it does, create your other partition(s)/logical volume(s) and work on.

In case you have LVM in use:

  • From inner parts to outside, too.
  • First shrink filesystem.
  • Then shrink the logical volume the filesystem lies on, but not to smaller than the filesystem itself.
  • Resize the physical volume, too, in case you want to create a new volume group for whatever reason.
  • Adjust partition size if you need to for your desired layout.

Remember, if your system does not boot because you made partition(s) or logical volume(s) too small, that is fixable. But only as long as you did not kill any data on disk, i.e. by creating file systems.

enlarge

  • From outside to inside, basically the reverse from above.
  • Enlarge existing partition.
  • Enlarge physical volume if lvm was used, so also the volume group gets bigger. (pvresize /dev/sdXy will use all available space.)
  • Enlarge logical volume, if lvm was used. (lvextend -l +100%FREE /dev/mapper/<vg-lv-name> is what you want, to use all available space.)
  • Enlarge filesystem. (resize2fs /path/to/device, so either /dev/sdXy without lvm, or /dev/mapper/<vg-lv-name>, to use all available space.)

changing partitions via parted

For editing partitions parted works quite well, for both MBR and GPT partition tables. fdisk/gdisk also still exist; if you want something with a fancy curses gui go with cfdisk/cgdisk. There are also sfdisk/sgdisk for the hacker types, according to the manpage.

  • "f" -> edit MBR's
  • "g" -> edit GPT's

parted commands cause immediate changes, whereas the others let you view your changes, but won't change anything until you write the changes to disk.

I really prefer using parted non-interactively nowadays, though I cannot explain why.

All commands in as short as possible:

# show help
parted /dev/sdX h
# show help on particular command, may help greatly
parted /dev/sdX h <parted_command>

# drop partition info
parted /dev/sdX u s p       # "unit sector print"
parted /dev/sdX u s p free  # "unit sector print free"
parted /dev/sdX u b p       # "unit byte print"

# create new disklabel, read: MBR or GPT.
# if you do this you basically delete the complete partitioning table
# do only if you need to, and backupped the 'print' output above!
parted /dev/sdX mkl msdos  ## create MBR
parted /dev/sdX mkl gpt    ## create GPT

# delete partition
parted /dev/sdX rm <ID>  # 1 or 2 or 3, depending which partition you want to edit

# create partition
# -a opt can be used with all commands listed here, but only has impact here
# units can be mixedly used, like 2048s, 10GiB, 10GB, 100%
parted -a opt /dev/sdX mkp  # mkpart, -a opt is essential for optimal alignment!

# show options
parted /dev/sdX h set
# enable/disable options (like boot flag)
parted /dev/sdX set <ID> <OPTION> on   # enable
parted /dev/sdX set <ID> <OPTION> off  # disable

swap

If you can still boot, and have a shiny new partition (or logical volume) which you can use, finish:

  • mkswap /path/to/device
  • fix /etc/fstab, i.e. create an entry so the system knows about the swapspace (see the example below)
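
A hypothetical fstab line for that, assuming the vgswap/swap LV from the sketch further up (a plain partition works the same way, just with /dev/sdXy):

/dev/mapper/vgswap-swap none swap sw 0 0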

This should be everything one may encounter. Good luck.

yet another megacli cheatsheet
posted on 2016-11-17 12:15

## convenience alias
alias asdf=/path/to/MegaCLI/file

## quick overview
asdf showsummary aall                                                    # SHOW STATUS
asdf -AdpEventLog -GetLatest 4000 -f events.log -aALL                    # SHOW ERRORS


## FW version
asdf version cli aall

## controller config status
asdf adpallinfo aall | less

## logical disks status
asdf ldinfo lall aall | less

## physical disks status
asdf pdlist aall | less
asdf pdlist aall | grep -i -e 'enc.*dev' -e slot                         # GET ENCLOSURES/SLOT

## rebuildrate & autorebuild
asdf adpgetprop rebuildrate aall                                         # SPEED STATUS
asdf adpsetprop rebuildrate 40 aall                                      # SET SPEED TO 40%

asdf adpautorbld dsply aall                                              # STATUS AUTOREBUILD
asdf adpautorbld dsbl aall                                               # DISABLE
asdf adpautorbld enbl aall                                               # ENABLE

## rebuild in progress?
asdf pdlist aall | grep -i -e 'enc.*dev' -e slot                         # GET ENCLOSURES/SLOTS
for i in {4..7}; do asdf pdrbld showprog physdrv \[252:$i\]  aall; done  # SHOW REBUILDS, DEPENDS ON ENCLOSURES/SLOTS

## manual rebuild
asdf pdlist aall | grep -i -e 'enc.*id' -e slot -e state                 # UNCONFIGURED(BAD) OR OFFLINE DRIVES EXIST?
asdf pdmakegood physdrv "[252:4]" aall                                   # MAKE GOOD

asdf cfgforeign scan aall                                                # SCAN DRIVES FOR FOREIGN LSI RAID CONFIGS
asdf cfgforeign clear aall                                               # DELETE FOREIGN CONFIGS

asdf cfgdsply aall                                                       # FIND MISSING SLOT, i.e. [252:4], and adapter (see top)
asdf pdgetmissing aall                                                   # GET ARRAY/ROW NUMBERS, i.e. 1 and 0
asdf pdreplacemissing physdrv "[252:4]" array 1 row 0 a0                 # ADD DRIVE TO RAID
asdf pdlist aall | grep -i -e 'enc.*id' -e slot -e state                 # UNCONFIGURED(BAD) OR OFFLINE DRIVES EXIST?
for i in {4..7}; do asdf pdrbld showprog physdrv \[252:$i\]  aall; done  # SHOW REBUILDS, DEPENDS ON ENCLOSURES/SLOTS
asdf pdrbld start physdrv "[252:4]" a0                                   # START REBUILD

Some links that helped:

  • https://wiki.hetzner.de/index.php/LSI_RAID_Controller
  • https://wiki.hetzner.de/index.php/LSI_RAID_Controller/en
  • https://www.thomas-krenn.com/de/wiki/MegaRAID_Controller_mit_MegaCLI_verwalten#Controller_Status_und_Config
  • https://calomel.org/megacli_lsi_commands.html
  • https://supportforums.cisco.com/document/62901/megacli-common-commands-and-procedures

apache rewrite non www to www
posted on 2016-11-16 16:20

For https hosts:

RewriteEngine on
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

While trying out the settings above, you should use a 302 instead of a 301, so browsers don't cache the redirect permanently while you are still testing.
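
I.e. during testing:

RewriteRule ^(.*)$ https://www.%{HTTP_HOST}%{REQUEST_URI} [R=302,L]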

imap via linux shell
posted on 2016-11-09 23:52

Connect to server:

# IMAP
nc SERVERNAME-OR-IP 143
# IMAPS
openssl s_client -connect SERVERNAME-OR-IP:993

IMAP commands:

  • enumerate/prefix commands with arbitrary labels or simply a '.'
  • login USERNAME "PASSWORD" # login
  • list "" "*" # show all mailboxes
  • status "MAILBOX" (messages unseen) # show mailbox counters
  • select "MAILBOX" # switch to mailbox
  • fetch FIRST:LAST FLAGS
  • fetch MAILID BODY[HEADER]
  • fetch MAILID BODY[TEXT]
  • uid search all
  • uid store MAILID +flags (\Deleted) # mark as deleted
  • expunge # actual delete
  • logout # logout
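
A minimal session sketch (the leading dot is the command label, credentials and mail id are placeholders):

. login USERNAME "PASSWORD"
. list "" "*"
. select "INBOX"
. fetch 1:10 flags
. fetch 1 body[header]
. logout
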
openssl convert .pfx
posted on 2016-11-08 15:21

To extract privatekey, certificate and ca-certificate from a .pfx file, do these:

# extract key
openssl pkcs12 -in FILE.PFX -out FILE.key -nodes -nocerts

# extract cacert
openssl pkcs12 -in FILE.PFX -out FILE.ca-bundle -cacerts -nokeys

# extract cert
openssl pkcs12 -in FILE.PFX -out FILE.crt -clcerts -nokeys

To create a .pfx / .p12:

# create .pfx
openssl pkcs12 -export -out FILE.pfx -inkey PRIVATEKEY.key -in CERTIFICATE.crt -certfile CACERTIFICATE.ca-bundle

mysql slow query log
posted on 2016-11-07 10:30

To enable mysql's slow query log:

show variables like '%query%';
show variables like '%slow%';
set global slow_query_log = 'on';
show variables like '%slow%';
flush logs; 

set global long_query_time = 1;
set global long_query_time = 5;
flush logs;

Look up what is set for slow_query_log_file, and try doing a tail -f on it in another window. That way you have instant feedback whether your settings work.

If you don't immediately see output, try lowering long_query_time, which is measured in seconds. Try flush logs; in case you see nothing and the slow query threshold is already set to 1 second.

Also set global log_queries_not_using_indexes = on; might help a lot.

linux run last command as root
posted on 2016-11-06 15:02

In case you have entered a longer command (or even several commands) which you should have run as a different user (usually as root), you might try this. If you switched to root, you would not have the command in root's history, so usually you'd need to copy-paste.

Or do this:

sudo su -c "!!"

current mouse sensitivity
posted on 2016-10-14 16:02

For documentation purposes:

pointer acceleration:     1.0x
pointer threshold:        2 pixels
double click interval:    400msec
drag start time:          500msec
drag start distance:      4 pixels
mouse wheel scrolls by:   10 lines

plesk show mailaccounts and passwords
posted on 2016-10-13 18:08

To show all mailaccounts and the corresponding passwords, use

/opt/psa/admin/sbin/mail_auth_view

On older plesk installations, the file may be located differently.

Use locate mail_auth_view to find the path there.

ansible and ubuntu 16
posted on 2016-10-11 07:59

In case ansible is not working against a freshly installed ubuntu server, the server most likely lacks python 2:

fatal: [my_server]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "/bin/sh: 1: /usr/bin/python: not found\r\n", "msg": "MODULE FAILURE", "parsed": false}

On the ubuntu server, do:

apt update
apt install python-minimal
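
If you'd rather bootstrap that from the control machine, ansible's raw module works without python on the target (my_server is the host alias from the error above; add --become if you don't connect as root):

ansible my_server -m raw -a "apt update && apt install -y python-minimal"
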
linux find swapping processes
posted on 2016-10-05 11:31

oneliner:

for file in /proc/*/status ; do awk '/VmSwap|Name/{printf $2 " " $3}END{ print ""}' $file; done | sort -k 2 -n

exclude .git and .svn from apache
posted on 2016-09-27 11:32

To prohibit the source control folders from being served by the webserver, use this in your apache vhost config:

<DirectoryMatch \.svn>
    Order allow,deny
    Deny from all
</DirectoryMatch>

<DirectoryMatch \.git>
    Order allow,deny
    Deny from all
</DirectoryMatch>
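
On apache 2.4, which deprecates Order/Deny, the equivalent would be:

<DirectoryMatch "\.(svn|git)">
    Require all denied
</DirectoryMatch>
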
ultimate LSI megacli help
posted on 2016-09-23 00:01

I was fed up and did some shell scripting with serious sed'ing.

The result was this:

MC=/root/MegaCli; while read i; do echo; echo '============================================================================='; echo $'\e[31;1m'$i$'\e[0m'; $MC help $(echo $i | sed 's/XD //'); done < <($MC help | grep -e '^MegaCli' | sed 's/\(^MegaCli\s\(-\w\+\|\w\+\s-\w\+\)\)\s.*/\1/g' | awk '$1=" "' | sed 's/-//' | sed 's/\(.*\)/\U\1/' | sort | uniq | cut -c3-) | grep -v -e "MegaCLI SAS RAID Management Tool" -e Copyright | cat -s | sed 's/\(Syntax: \)\(.*\)/\1\L\2/' | sed -e '/Syntax/  s/-//g' -e '/Syntax/ s/\[e/"&/' -e '/Syntax/ s/\.\]/&"/' -e '/Syntax/ s/\(physdrv\)\(\S\)/\1 \2/' | sed 's/arraya,/array A/; s/rowb/rob B/' | grep -v -e '^Exit Code' | sed '/^\s*$/d' | sed '/^Syntax: / s//\n&\n\t/; /^Description:/ s//\n&/; /^Convention:/ s//\n&/' | less -R

This looks like shit, has some bugs and very likely can use a lot of clean up.

But it gave me this, which should be the best help about MegaCli out there. Ever. And no, you really don't need hyphens or CaseSensitive commands.

=============================================================================
ADPALILOG
            AdpAlILog   
            ---------

Syntax: 
    megacli adpalilog an|a0,1,2|aall

Description: 
        Command retrieve all RAID subsystem log for troubleshooting use. Combines all the INFO commands (adp, vd, pd, encl, bbu) and adds OS information, Memory size, driver version and so on..

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
ADPALLINFO
            AdpAllInfo
            ----------

Syntax: 
    megacli adpallinfo an|a0,1,2|aall

Description: 
        Display parameters on the given adapter(s).
        Displays information of adapter, including cluster state, BIOS, alarm, 
        firmware version, BIOS version, battery charge counter value, rebuild 
        rate, bus number/device number, present RAM, serial number of the board, 
        and SAS address.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You can 
                      select two or more adapters in this manner.
          -aALL      Specifies the command is for all adapters. 
=============================================================================
ADPAUTORBLD
            AdpAutoRbld
            -----------

Syntax: 
    megacli adpautorbld enbl|dsbl|dsply an|a0,1,2|aall

Description: 
        Enables or disables automatic rebuild on the selected adapter(s).
        The -Dsply option shows the status of the automatic rebuild state.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You can 
                      select two or more adapters in this manner.
          -aALL       Specifies the command is for all adapters.
=============================================================================
ADPBBUCMD
            AdpBbuCmd 
            ---------

Syntax: 
    megacli adpbbucmd an|a0,1,2|aall  

Syntax: 
    megacli adpbbucmd getbbustatus an|a0,1,2|aall  

Syntax: 
    megacli adpbbucmd getbbucapacityinfo an|a0,1,2|aall  

Syntax: 
    megacli adpbbucmd getbbudesigninfo an|a0,1,2|aall  

Syntax: 
    megacli adpbbucmd getbbuproperties an|a0,1,2|aall  

Syntax: 
    megacli adpbbucmd bbulearn an|a0,1,2|aall  

Syntax: 
    megacli adpbbucmd bbumfgsleep an|a0,1,2|aall  

Syntax: 
    megacli adpbbucmd bbumfgseal an|a0,1,2|aall  

Syntax: 
    megacli adpbbucmd getbbumodes  an|a0,1,2|aall  

Syntax: 
    megacli adpbbucmd schedulelearn dsbl|info|[starttime ddd hh] an|a0,1,2|aall 

Syntax: 
    megacli adpbbucmd getggeepdata offset [hexaddress] numbytes n an|a0,1,2|aall 

Syntax: 
    megacli adpbbucmd setbbuproperties f <filename> an|a0,1,2|aall

Description: 
       Command manages BBU on the selected adapter(s).
        The possible parameters are:
        AdpBbuCmd: Command displays complete information about the BBU
                   such as : status, capacity, design information and properties
        GetBbuStatus: Displays complete information about the BBU status.
                             such as the temperature and voltage.
        GetBbuCapacityInfo: Command displays BBU capacity information. 
        GetBbuDesignInfo: Displays information about the BBU design parameters.
        GetBbuProperties: Command displays current properties of the BBU. 
        BbuLearn: Command Starts the learning cycle on the BBU. 
        getBbumodes: Command display list of bbu mode .
                such as:ID, service time, retention time etc. 
        BbuMfgSleep: Command Places the battery in Low-Power Storage mode. 
        GetGGEEPData: Returns the data of EEPROM starting from "Offset" 
              with n= number of bytes retrieved 
        ScheduleLearn: Scheduling of Battery Learn Cycle on selected Adapter.
            Dsbl: Disable the Battery learn cycle.
            Info: Display Scheduling information.
        StartTime: Schedule and enable the Battery Learn Cycle 
                Accepting Format :- 'DDD hh'.
               'DDD' is day of the week(SUN,MON...SAT). And 'hh' is 0-23 hour.
        SetBbuProperties: Sets the BBU properties on the selected adapter(s) 
                after reading from the ini file. 
        The ini file contains the information in the following formats:
            learnDelayInterval = X
               # X: Time in hours Not greater than 7 days or 168 hours.
            autoLearnMode = Y
               # Y: 0 - Enabled, 1 - Disabled, 2 - WarnViaEvent.
            bbuMode = Z
 # Z: 1 to 255. For gets all supported bbu modes fire 'Adpbbucmd getBbumodes' command.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. 
                        More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
ADPBIOS
            AdpBIOS
            -------

Syntax: 
    megacli adpbios enbl |dsbl | soe | be | hcoe | hsm |  enblautoselectbootld | dsblautoselectbootld | dsply an|a0,1,2|aall 

Description: 
    Sets BIOS options, the following are the settings which can be selected on a single adapter, multiple adapters, or all adapters:-Enbl, -Dsbl, -Dsply:Enables, disables or displays BIOS status.
    The possible parameters are:
    SOE: Stops on BIOS errors during POST for selected adapter(s). When set to -SOE, BIOS stops in case of a problem with the configuration. This gives the option to enter the configuration utility to resolve the problem. This is available only when BIOS is enabled.
    BE: Bypasses BIOS errors during POST. This is available only when BIOS is enabled.
    HCOE: Headless Continue on Error. 
    HSM:  Headless Safe Mode. 
    EnblAutoSelectBootLd/DsblAutoSelectBootLd : Enable/Disable Auto Select Boot option.

Convention:   
    -aN         N specifies the adapter number for the command.
    -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You can 
                    select two or more adapters in this manner.
    -aALL       Specifies the command is for all adapters.
=============================================================================
ADPBOOTDRIVE
            AdpBootDrive
            ------------

Syntax: 
    megacli adpbootdrive {set {lx | physdrv "[e0:s0]}}|get an|a0,1,2|aall

Description: 
        Sets or displays the bootable virtual disk ID
        The possible parameters are:
        Set: Sets the virtual disk as bootable so that during the next reboot, the BIOS will look for a boot sector in the specified virtual disk.
        Get: Displays the bootable virtual disk ID.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You can 
                      select two or more adapters in this manner.
          -aALL       Specifies the command is for all adapters.
          -PhysDrv[E0:S0,E1:S1....]  List of physical drives, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
=============================================================================
ADPCACHEFLUSH
            AdpCacheFlush
            -------------

Syntax: 
    megacli adpcacheflush an|a0,1,2|aall

Description: 
        Flush the adapter cache on the selected adapter(s).

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You can 
                      select two or more adapters in this manner.
          -aALL       Specifies the command is for all adapters.
=============================================================================
ADPCCSCHED
            AdpCcSched
            ----------

Syntax: 
    megacli adpccsched dsbl|info|{modeconc | modeseq "[excludeld ln|l0,1,2]
           [-SetStartTime yyyymmdd hh ] [-SetDelay val ] } -aN|-a0,1,2|-aALL

Description: 
        Schedules check consistency on the logical drive of given adapter.
        The possible parameters are:
        Dsbl: Disables Scheduled CC for the given adapter(s).
        Info: Get Scheduled CC Information for the given adapter(s). 
        ModeConc: Scheduled CC on all LDs concurrently for the given adapter(s)..
        ModeSeq: Scheduled CC on LDs sequentially for the given adapter(s).
        ExcludeLd: Specify the LD numbers not included in scheduled CC. The new list will overwrite the existing list stored on the controller. This is optional.
        StartTime: Set the next start time. The date is in the format of yyyymmdd in decimal digits and followed by a decimal number for the hour between 0 ~ 23 inclusively. This is optional.
        SetDelay: Set the execution delay between executions for the given adapter(s). This is optional.
            Values: Value is the length of delay in hours. Value of 0 means continuous execution.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You can 
                      select two or more adapters in this manner.
          -aALL       Specifies the command is for all adapters.
=============================================================================
ADPCOUNT
            AdpCount
            --------

Syntax: 
    megacli adpcount 

Description: 
        Displays the number of controllers supported on the system and returns 
        the number to the operating system.
=============================================================================
ADPDIAG
            AdpDiag
            -------

Syntax: 
    megacli adpdiag [val] an|a0,1,2|aall

Description: 
        Sets the amount of time for the adapter's diagnostic to run.
        Val: Time in second.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You can 
                      select two or more adapters in this manner.
          -aALL       Specifies the command is for all adapters.
=============================================================================
ADPDOWNGRADE
            AdpDowngrade command
            ---------------------------

Syntax: 
    megacli adpdowngrade an|a0,1,2|aall

Description: 
          This command downgrades MR controller to iMR mode on next reboot if controller has iMR firmware in flash and no memory is found on next reboot.
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
ADPEVENTLOG
            AdpEventLog
            -----------

Syntax: 
    megacli adpeventlog geteventloginfo an|a0,1,2|aall

Syntax: 
    megacli adpeventlog getevents {info warning critical fatal} {f <filename>} an|a0,1,2|aall

Syntax: 
    megacli adpeventlog getsinceshutdown {info warning critical fatal} {f <filename>} an|a0,1,2|aall

Syntax: 
    megacli adpeventlog getsincereboot {info warning critical fatal} {f <filename>} an|a0,1,2|aall

Syntax: 
    megacli adpeventlog includedeleted {info warning critical fatal} {f <filename>} an|a0,1,2|aall

Syntax: 
    megacli adpeventlog getlatest n {info warning critical fatal} {f <filename>} an|a0,1,2|aall

Syntax: 
    megacli adpeventlog getccincon f <filename> lx|l0,2,5...|lall an|a0,1,2|aall

Syntax: 
    megacli adpeventlog clear an|a0,1,2|aall

Description: 
        Command manages event log entries. 
        The possible parameters are:
        GetEventlogInfo: Displays overall event information such as total number of events, newest sequence number, oldest sequence number, shutdown sequence number, reboot sequence number, and clear sequence number. 
        GetEvents: Gets event log entry details. The information shown consists of total number of entries available at firmware side since the last clear and details of each entries of the error log. Start_entry specifies the initial event log entry when displaying the log.
        GetSinceShutdown: Displays all the events since last controller shutdown.
        GetSinceReboot: Displays all the events since last adapter reboot.
        IncludeDeleted: Displays all events, including deleted events.
        GetLatest: Displays the latest number of events, if any exist. The event data will be writtent to the file in reverse order.
        Clear: Clears the event log for the selected adapter(s).

Convention:   
          -aN          :N specifies the adapter number for the command.
          -a0,1,2     :Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL      :Specifies the command is for all adapters.
          -info          :Informational message. No user action is necessary.
          -warning   :Some component may be close to a failure point.
          -critical     :A component has failed, but the system has not lost data.
          -fatal        :A component has failed, and data loss has occurred or will occur.
=============================================================================
ADPFACDEFSET
            AdpFacDefSet
            ------------

Syntax: 
    megacli adpfacdefset an

Description: 
        Command sets the factory defaults on the selected adapter(s).

Convention:   
          -aN         N specifies the adapter number for the command.
=============================================================================
ADPFWFLASH
            AdpFwFlash
            ----------

Syntax: 
    megacli adpfwflash f filename [resetnow] [nosigchk] [noverchk] [fwtype n]an| a0,1,2|aall

Description: 
        Flashes the firmware with the ROM file specified at the command line.
        The possible parameters are:
        ResetNow: Firmware will not initiate Online Firmware flash        NoSigChk: option forces the application to flash the firmware even if the check word on the file does not match the required check word for the adapter. This option flashes the firmware only if the existing firmware version on the adapter is lower than the version on the ROM image.
        NoVerChk: also, the application flashes the adapter firmware without checking the version of the firmware image. The version check applies only to the firmware (APP.ROM) version.
        FwType: adapter firmware type. Give the value of Fw-type in number.  
        n: 0:- App or defaut, 1:- TMMC
        This command also supports the Mode 0 flash functionality. For Mode 0 flash, the adapter number is not valid. There are two possible methods:
        - Select which adapter to flash after the adapters are detected.
        - Flash the firmware on all present adapters.
        XML output data is generated by this option.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
ADPGETCONNECTORMODE
            AdpGetConnectorMode
            -------------------

Syntax: 
    megacli adpgetconnectormode connectorn|connector0,1|connectorall an|a0,1,2|aall

Description: 
        Command display connector mode(Internal/External) on selected controllers.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
ADPGETPCIINFO
            AdpGetPciInfo   
            ---------

Syntax: 
    megacli adpgetpciinfo an|a0,1,2|aall

Description: 
        Command retrieve bus number, Device number and Functional number of the adapter 

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
ADPGETPROP
            AdpGetProp
            ----------

Syntax: 
    megacli adpgetprop  cacheflushinterval | forcesgpio | rebuildrate 
    | PatrolReadRate | BgiRate | CCRate | ReconRate | SpinupDriveCount 
    | SpinupDelay | CoercionMode | ClusterEnable | PredFailPollInterval 
    | BatWarnDsbl | EccBucketSize | EccBucketLeakRate | EccBucketCount 
    | AbortCCOnError | AlarmDsply | SMARTCpyBkEnbl | SSDSMARTCpyBkEnbl 
    | NCQDsply | MaintainPdFailHistoryEnbl | RstrHotSpareOnInsert 
    | DisableOCR | EnableJBOD | DsblCacheBypass
    | BootWithPinnedCache | enblPI | PreventPIImport | AutoEnhancedImportDsply | AutoDetectBackPlaneDsbl 
    | ExposeEnclDevicesEnbl | EnblSpinDownUnConfigDrvs | SpinDownTime 
    | DefaultSnapshotSpace | DefaultViewSpace | AutoSnapshotSpace 
    | CopyBackDsbl | LoadBalanceMode | UseFDEOnlyEncrypt | UseDiskActivityForLocate 
    | DefaultLdPSPolicy | DisableLdPsInterval | DisableLdPsTime | SpinUpEncDrvCnt | SpinUpEncDelay   
    | PrCorrectUncfgdAreas | ENABLEEGHSP | ENABLEEUG | ENABLEESMARTER | Perfmode | PerfmodeValues 
    | DPMenable | SupportSSDPatrolRead -aN|-a0,1,2|-aALL 

Description: 
        Displays selected adapter properties. 
        The possible settings are: 
        CacheFlushInterval: Returns cache flush interval in seconds. 
            Values: 0 to 255 
        RebuildRate: Rebuild rate. 
            Values: 0 to 100 
        PatrolReadRate: Patrol read rate. 
            Values: 0 to 100 
        BgiRate: Background initilization rate. 
            Values: 0 to 100 
        CCRate: Consistency check rate. 
            Values: 0 to 100 
        ReconRate: Reconstruction rate. 
            Values: 0 to 100 
        SpinupDriveCount: Max number of drives to spin up at one time. 
            Values: 0 to 255 
        SpinupDelay: Number of seconds to delay among spinup groups. 
            Values: 0 to 255 
        CoercionMode: Drive capacity coercion mode. 
            Values: 0 - None 
                    1 - 128 Mbytes 
                    2 - 1 Gbyte 
        ClusterEnable: Clustering is enabled or disabled. 
            Values: 0 - Disabled 
                    1 - Enabled 
        PredFailPollInterval: Number of seconds between predicted fail polls. 
            Values: 0 to 65535 
        BatWarnDsbl: Disable warnings for missing battery or missing hardware. 
            Values: 0 - Disabled 
                    1 - Enabled 
        EccBucketSize: Size of ECC single-bit-error bucket. 
            Values: 0 to 255 
        EccBucketLeakRate: ECC single-bit-error bucket leak rate. 
            Values: 0 to 65535 minutes 
        EccBucketCount: Count of single-bit ECC errors currently in bucket. 
            Values: 0 to 65535 
        AbortCCOnError: Enable aborting check consistency on error. 
            Values: 0 - Disabled 
                    1 - Enabled 
        AlarmDsply: Returns alarm setting
            Values: 0 - Disabled 
                    1 - Enabled 
                    2 - Silenced 
                    3 - Missing 
        SMARTCpyBkEnbl: Copyback on SMART error setting. 
            Values: 0 - Disabled 
                    1 - Enabled 
        SSDSMARTCpyBkEnbl: Copyback to SSD on SMART error setting. 
            Values: 0 - Disabled
                    1 - Enabled.
        JBOD: 
            Values: 0 - Disabled 
                    1 - Enabled 
        CacheBypass: 
            Values: 0 - Enabled 
                    1 - Disabled 
        NCQDsply: Returns NCQ setting. 
            Values: 0 - Enabled 
                    1 - Disabled 
        MaintainPdFailHistoryEnbl: Enables tracking of bad PDs across reboot; 
                    will also show failed LED status for missing bad drives. 
            Values: 0 - Disabled 
                    1 - Enabled 
        RstrHotSpareOnInsert: 
            Values: 0 - Do not restore hot spare on insertion 
                    1 - Restore hot spare on insertion 
        EnblSpinDownUnConfigDrvs: Spin down unconfigured drives option. 
            Values: 0 - Disabled 
                    1 - Enabled 
        DisableOCR:
            Values: 0 - Online controller reset enabled 
                    1 - Online controller reset disabled 
        BootWithPinnedCache: 
            Values: 0 - Do not allow controller to boot with pinned cache 
                    1 - Allow controller to boot with pinned cache 
        enblPI: Active protection information.
            Values: 0 - Disable SCSI PI for controller
                    1 - Enable SCSI PI for controller 
        PreventPIImport: Prevent import of SCSI DIF protected logical disks.
            Values: 0 or 1 
        AutoEnhancedImportDsply: Foreign configuration import auto mode option.
            Values: 0 - Disabled 
                    1 - Enabled 
        AutoDetectBackPlaneDsbl: Get auto-detect options for the back-plane. 
            Values: 0 - Enabled Auto Detect of SGPIO and i2c SEP 
                    1 - Disabled Auto Detect of SGPIO 
                    2 - Disabled Auto Detect of i2c SEP 
                    3 - Disabled Auto Detect of SGPIO and i2c SEP 
        ExposeEnclDevicesEnbl: Enable device drivers to expose enclosure devices.
            Values: 0 - Expose enclosure devices 
                    1 - Hide enclosure devices 
        CopyBackDsbl: 
            Values: 0 - Enabled 
                    1 - Disabled 
        LoadBalanceMode: 
            Values: 0 - Auto Load balance mode 
                    1 - Disable Load balance mode 
        UseFDEOnlyEncrypt: Applies if disk or controller HW support encryption 
            Values: 0 - FDE and controller encryption both allowed 
                    1 - Only support FDE encryption, prohibit controller 
        DsblSpinDownHsp: Disable spin-down of hot spares option. 
            Values: 0 - Disabled, i.e. hot spares are spun down
                    1 - Enabled, i.e. hot spares are not spun down
        SpinDownTime: Spin-down time in minutes. After SpinDownTime, the firmware starts spinning down unconfigured good drives and hot spares, depending on the DsblSpinDownHsp option.
            Values: 30 to 65535
        DefaultSnapshotSpace: Default Snapshot Space in percentage.
        DefaultViewSpace: Default View Space in percentage.
        AutoSnapshotSpace: Default Auto Snapshot Space in percentage.
        UseDiskActivityForLocate: Use disk activity to locate PD in Chenbro backplane
        DefaultLdPSPolicy: Default LD power savings policy 
        DisableLdPsInterval: LD power savings are disabled for yy hours beginning at disableLdPSTime 
        DisableLdPsTime: LD power savings shall be disabled at xx minutes from 12:00am 
        SpinUpEncDrvCnt: Maximum number of drives within an enclosure to spin up at one time 
        SpinUpEncDelay: Number of seconds to delay among spinup groups within an enclosure 
        PrCorrectUncfgdAreas: Correct media errors during PR 
            Values: 0- Disabled. 
                    1 - Enabled. 
        DPMenable: 
            Values: 0 - Disabled 
                    1 - Enabled 
        SupportSSDPatrolRead: 
            Values: 0 - Disabled 
                    1 - Enabled 

        ENABLEEGHSP: Enable Emergency Global Hot Spares option. 
            Values: 0 - Disabled, i.e. Emergency Global hot spares disabled
                    1 - Enabled, i.e. Emergency Global hot spares enabled
        ENABLEEUG: Enable Emergency UG as a Spare option. 
            Values: 0 - Disabled, i.e. Emergency UG as hot spare disabled
                    1 - Enabled, i.e. Emergency UG as hot spare enabled
        ENABLEESMARTER: Enable Emergency Spares as SMARTER option. 
            Values: 0 - Disabled, i.e. Emergency spares as SMARTER disabled
                    1 - Enabled, i.e. Emergency spares as SMARTER enabled
        Perfmode: Performance tuning.
            Values: 0 - Best IOPS 
                    1 - Least latency 
        MaxFlushLines: Maximum number of flushes.
            Values: 0 to 255 
        NumIOsToOrder: Frequency at which ordered I/Os should be issued per disk drive.
            Values: 0 to 25 
        ForceSGPIO: 
            Values: 0 - Disabled 
                    1 - Enabled 

Convention:   
          -aN         N specifies the adapter number for the command. 
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You 
                      can select two or more adapters in this manner. 
          -aALL       Specifies the command is for all adapters. 
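
Example (untested sketch; uses the dashed MegaCli form shown in the Convention lines, adapter index is a placeholder):
    MegaCli -AdpGetProp RebuildRate -a0            # show the rebuild rate of adapter 0
    MegaCli -AdpGetProp CacheFlushInterval -aALL   # cache flush interval on every adapter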
=============================================================================
ADPGETTIME
            AdpGetTime
            ----------

Syntax: 
    megacli adpgettime an|a0,1,2|aall

Description: 
        Displays selected adapter time and date. This command uses a 24-hour format.
        For example, 7 p.m. would display as 19:00:00.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You can 
                      select two or more adapters in this manner.
          -aALL       Specifies the command is for all adapters.
=============================================================================
ADPPR
            AdpPR
            -----

Syntax: 
    megacli adppr dsbl|enblauto|enblman|start|stop|suspend|resume|info|ssdpatrolreadenbl|ssdpatrolreaddsbl|{setdelay val} an|a0,1,2|aall

Description: 
        Sets the Patrol Read option on a single adapter, multiple adapters, or all adapters. Patrol Read will not start on a VD that is degraded or undergoing initialization/consistency check.
        The possible parameters are:
        Dsbl: Disables Patrol Read for the selected adapter(s).
        EnblAuto: Enables Patrol Read automatically for the selected adapter(s). This means Patrol Read will start automatically on the scheduled intervals.
        EnblMan: Enables Patrol Read manually for the selected adapter(s). This means that Patrol Read does not start automatically; it has to be started manually by selecting the Start command. 
        Start: Starts Patrol Read for the selected adapter(s). 
        Suspend: Suspend Patrol Read for the selected adapter(s). 
        Resume: Resume Patrol Read for the selected adapter(s). 
        Stop: Stops Patrol Read for the selected adapter(s). 
        Info: Displays the following Patrol Read information for the selected adapter(s): 
            - Patrol Read operation mode 
            - Patrol Read execution delay value
            - Patrol Read status
        SSDPatrolReadEnbl: Enables Patrol Read for VDs consisting only of SSD drives. 
        SSDPatrolReadDsbl: Disables Patrol Read for VDs consisting only of SSD drives. 

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You can 
                      select two or more adapters in this manner.
          -aALL       Specifies the command is for all adapters.
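
Example (untested sketch of a typical patrol read setup; adapter selection is a placeholder):
    MegaCli -AdpPR -EnblAuto -aALL   # let patrol read start on its scheduled intervals
    MegaCli -AdpPR -Info -aALL       # check mode, delay value and current status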
=============================================================================
ADPSETCONNECTORMODE
            AdpSetConnectorMode
            -------------------

Syntax: 
    megacli adpsetconnectormode internal|external|auto connectorn|connector0,1|connectorall an|a0,1,2|aall

Description: 
        Command sets connector mode on selected controllers.
        The possible parameters are:
        External: Sets the multiplexer to select the external port, e.g. to scan the external bus.
        Internal: Sets the multiplexer to select the internal port, e.g. to scan the SAS bus for connected devices.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
ADPSETPROP
            AdpSetProp
            ----------

Syntax: 
    megacli adpsetprop {cacheflushinterval val} | { rebuildrate val} 
    | {PatrolReadRate -val} | {BgiRate -val} | {CCRate -val} 
    | {ReconRate -val} | {SpinupDriveCount -val} | {SpinupDelay -val} 
    | {CoercionMode -val} | {ClusterEnable -val} | {PredFailPollInterval -val} 
    | {BatWarnDsbl -val} | {EccBucketSize -val} | {EccBucketLeakRate -val} 
    | {AbortCCOnError -val} | AlarmEnbl | AlarmDsbl | AlarmSilence 
    | {SMARTCpyBkEnbl -val} | {SSDSMARTCpyBkEnbl -val} | NCQEnbl | NCQDsbl 
    | {MaintainPdFailHistoryEnbl -val} | {RstrHotSpareOnInsert -val} 
    | {EnblSpinDownUnConfigDrvs -val} |{DefaultSnapshotSpace -Val%}|{AutoSnapshotSpace -Val%} 
    | {DisableOCR -val} | {BootWithPinnedCache -val} | {enblPI -val} | {PreventPIImport -val} 
    | AutoEnhancedImportEnbl | AutoEnhancedImportDsbl | {ExposeEnclDevicesEnbl -val} | {CopyBackDsbl -val} 
    | {AutoDetectBackPlaneDsbl -val} | {LoadBalanceMode -val}| {DefaultViewSpace -Val%} 
    | {UseFDEOnlyEncrypt -val} | {DsblSpinDownHsp -val} | {SpinDownTime -val}| {Perfmode -val}
    | {PerfMode -val MaxFlushLines -val NumIOsToOrder -val} -aN|-a0,1,2|-aALL 
    | {EnableJBOD -val} | {DsblCacheBypass -val} 
    | {useDiskActivityForLocate -val} | {SpinUpEncDrvCnt -val} 
    | {SpinUpEncDelay -val}| {PrCorrectUncfgdAreas -val} | {ENABLEEGHSP -val} | {ENABLEEUG -val} 
    | {ENABLEESMARTER -val} | {DPMenable -val} | {SupportSSDPatrolRead -val} |  {ForceSGPIO -val}
     -aN|-a0,1,2|-aALL 

Description: 
        Command sets the properties on the selected adapter(s). 
        The possible settings are: 
        CacheFlushInterval: Cache flush interval in seconds. 
            Values: 0 to 255 
        RebuildRate: Rebuild rate. 
            Values: 0 to 100 
        PatrolReadRate: Patrol read rate. 
            Values: 0 to 100 
        BgiRate: Background initialization rate. 
            Values: 0 to 100 
        CCRate: Consistency check rate. 
            Values: 0 to 100 
        ReconRate: Reconstruction rate. 
            Values: 0 to 100 
        SpinupDriveCount: Max number of drives to spin up at one time. 
            Values: 0 to 255 
        SpinupDelay: Number of seconds to delay among spinup groups. 
            Values: 0 to 255 
        CoercionMode: Drive capacity Coercion mode. 
            Values: 0 - None 
                    1 - 128 Mbytes 
                    2 - 1 Gbyte 
        ClusterEnable: Clustering is enabled or disabled. 
            Values: 0 - Disabled 
                    1 - Enabled 
        PredFailPollInterval: Number of seconds between predicted fail polls. 
            Values: 0 to 65535 
        BatWarnDsbl: Disable warnings for missing battery or missing hardware. 
            Values: 0 - Disabled 
                    1 - Enabled 
        EccBucketSize: Set size of ECC single-bit-error bucket. 
            Values: 0 to 255 
        EccBucketLeakRate: ECC single-bit-error bucket leak rate. 
            Values: 0 to 65535 minutes 
        AbortCCOnError: Enable aborting check consistency on error. 
            Values: 0 - Disabled 
                    1 - Enabled 
        AlarmEnbl: Set alarm to Enabled. 
        AlarmDsbl: Set alarm to Disabled. 
        AlarmSilence: Silence an active alarm. 
        SMARTCpyBkEnbl: Copyback on SMART error Enabled. 
            Values: 0 - Disabled 
                    1 - Enabled 
        SSDSMARTCpyBkEnbl: Copyback to SSD on SMART error Enabled. 
            Values: 0 - Disabled 
                    1 - Enabled 
        NCQEnbl: Enables NCQ option on controller. 
        NCQDsbl: Disables NCQ option on controller. 
        MaintainPdFailHistoryEnbl: Enable tracking of bad PDs across reboot; 
                    will also show failed LED status for missing bad drives. 
            Values: 0 - Disabled 
                    1 - Enabled 
        RstrHotSpareOnInsert: 
            Values: 0 - Do not restore hot spare on insertion 
                    1 - Restore hot spare on insertion 
        EnblSpinDownUnConfigDrvs: Spin down un-configured drives option. 
            Values: 0 - Disabled 
                    1 - Enabled 
        DisableOCR: 
            Values: 0 - Online controller reset enabled 
                    1 - Online controller reset disabled 
        BootWithPinnedCache: 
            Values: 0 - Do not allow controller to boot with pinned cache 
                    1 - Allow controller to boot with pinned cache 
        enblPI: Active protection information.
            Values: 0 - Disable SCSI PI for controller
                    1 - Enable SCSI PI for controller 
        PreventPIImport: Prevent import of SCSI DIF protected logical disks.
            Values: 0 or 1 
        AutoEnhancedImportEnbl: Enable automatic foreign configuration import. 
        AutoEnhancedImportDsbl: Disable automatic foreign configuration import.
        ExposeEnclDevicesEnbl: Enable device drivers to expose enclosure devices.
            Values: 0 - Expose enclosure devices 
                    1 - Hide enclosure devices 
        CopyBackDsbl: 
            Values: 0 - Enable Copyback 
                    1 - Disable Copyback 
        EnableJBOD  : 
            Values: 0 - Disable JBOD mode 
                    1 - Enable JBOD mode 
        DsblCacheBypass  : 
            Values: 0 - Enable Cache Bypass 
                    1 - Disable Cache Bypass 
        AutoDetectBackPlaneDsbl: Set auto-detect options for the back-plane. 
            Values: 0 - Enable Auto Detect of SGPIO and i2c SEP 
                    1 - Disable Auto Detect of SGPIO 
                    2 - Disable Auto Detect of i2c SEP 
                    3 - Disable Auto Detect of SGPIO and i2c SEP 
        LoadBalanceMode: 
            Values: 0 - Auto Load balance mode 
                    1 - Disable Load balance mode 
        UseFDEOnlyEncrypt: Applies if disk or controller HW support encryption 
            Values: 0 - FDE and controller encryption both allowed 
                    1 - Only support FDE encryption, prohibit controller 
        DsblSpinDownHsp: Disable spin-down of hot spares option. 
            Values: 0 - Disabled, i.e. hot spares are spun down
                    1 - Enabled, i.e. hot spares are not spun down
        SpinDownTime: Spin-down time in minutes. After SpinDownTime, the firmware starts spinning down unconfigured good drives and hot spares, depending on the DsblSpinDownHsp option.
            Values: 30 to 65535
       Perfmode: Performance Tuning.
            Values: 0 - BestIOPS 
                    1 - Least Latency 
       MaxFlushLines: Maximum Number of Flushes.
            Values: 0 - 255 
       NumIOsToOrder: Frequency at which Ordered I/Os should be issued per disk drive.
            Values: 0 - 25 
        DefaultSnapshotSpace: Default Snapshot Space in percentage.
        DefaultViewSpace: Default View Space in percentage.
        AutoSnapshotSpace: Default Auto Snapshot Space in percentage.
        useDiskActivityForLocate: 
            Values: 0 - Disable use of disk activity to locate a physical disk in Chenbro backplane 
                    1 - Enable use of disk activity to locate a physical disk in Chenbro backplane 
        SpinUpEncDrvCnt: Max number of drives within an enclosure to spin up at one time. 
            Values: 0 to 255 
        SpinUpEncDelay: Number of seconds to delay among spinup groups within an enclosure 
            Values: 0 to 255 
        PrCorrectUncfgdAreas: Correct media errors during PR 
            Values: 0 - Correcting Media error during PR is disabled. 
                    1 - Correcting Media error during PR is allowed. 
        DPMenable  : 
            Values: 0 - Disable Drive Performance Monitoring 
                    1 - Enable Drive Performance Monitoring 
        SupportSSDPatrolRead  : 
            Values: 0 - Disable Patrol read for SSD drives 
                    1 - Enable Patrol read for SSD drives 

        ENABLEEGHSP: Enable Emergency Global Hot Spares option. 
            Values: 0 - Disabled, i.e. Emergency Global hot spares disabled
                    1 - Enabled, i.e. Emergency Global hot spares enabled
        ENABLEEUG: Enable Emergency UG as a Spare option. 
            Values: 0 - Disabled, i.e. Emergency UG as hot spare disabled
                    1 - Enabled, i.e. Emergency UG as hot spare enabled
        ENABLEESMARTER: Enable Emergency Spares as SMARTER option. 
            Values: 0 - Disabled, i.e. Emergency spares as SMARTER disabled
                    1 - Enabled, i.e. Emergency spares as SMARTER enabled
        ForceSGPIO:  
            Values: 0 - Disabled 
                    1 - Enabled 

Convention:   
          -aN         N specifies the adapter number for the command. 
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You 
                      can select two or more adapters in this manner. 
          -aALL       Specifies the command is for all adapters. 
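
Example (untested sketch; the value follows the {RebuildRate -val} form above, adapter index and rate are placeholders):
    MegaCli -AdpSetProp RebuildRate -60 -a0   # raise the rebuild rate to 60 on adapter 0
    MegaCli -AdpSetProp AlarmSilence -a0      # silence a currently sounding alarm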
=============================================================================
ADPSETTIME
            AdpSetTime
            ----------

Syntax: 
    megacli adpsettime yyyymmdd hh:mm:ss an

Description: 
        Sets the time and date on the selected adapter. This command uses a 24-hour format. For 
        example, 7 p.m. displays as 19:00:00. The order of date and time is reversible.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You can 
                      select two or more adapters in this manner.
          -aALL       Specifies the command is for all adapters.
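
Example (untested sketch for syncing the controller clock to the host clock; adapter index is a placeholder):
    MegaCli -AdpSetTime $(date +%Y%m%d) $(date +%H:%M:%S) -a0   # push the current host date and time to adapter 0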
=============================================================================
ADPSETVERIFY
            AdpSetVerify
            ------------

Syntax: 
    megacli adpsetverify f filename an|a0,1,2|aall

Description: 
        Validates the adapter configuration against the given input (ini) file. The input (ini) file contains all the adapter settings information. 

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You can 
                      select two or more adapters in this manner.
          -aALL       Specifies the command is for all adapters.
=============================================================================
ADPSHUTDOWN
            AdpShutDown     
            ------------

Syntax: 
    megacli adpshutdown an|a0,1,2|aall
Description: 
    Shuts down the selected adapter(s). All background operations are put on 
    hold for later resumption. The controller cache is flushed, all disk drive 
    caches are flushed, and the on-disk configuration is closed to 
    indicate that redundancy data is consistent. Further writes will 
    cause the system to reopen the configuration, thus undoing the effects
    of the shutdown command.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You can 
                      select two or more adapters in this manner.
          -aALL      Specifies the command is for all adapters. 
=============================================================================
CACHECADE
            Cachecade
            ---------

Syntax: 
    megacli cachecade assign|remove lx|l0,1,2|lall an|a0,1,2|aall

Description: 
        This command assigns or removes the association of VDs with the Cachecade pool.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
CFGALLFREEDRV
            CfgAllFreeDrv
            -------------

Syntax: 
    megacli cfgallfreedrv rx [sataonly] [spancount xxx] [wt|wb] [nora|ra|adra] [direct|cached] [cachedbadbbu|nocachedbadbbu] [strpszm] [hspcount xx [hsptype dedicated|enclaffinity|nonrevertible]] | [fde|ctrlbased] [default| automatic| none| maximum| maximumwithoutcaching] [enblpi] an

Description: 
        Adds all the unconfigured physical drives to RAID level 0, 1, 5, or 6 configuration to a specified adapter. Even if no configuration is present, you have the option to write the configuration to the adapter.
        The possible parameters are:
        Rx[E0:S0,...]: Specifies the RAID level and the physical drive enclosure/slot numbers to construct a disk array.
        WT (Write through), WB (Write back): Selects write policy.
        NORA (No read ahead), RA (Read ahead), ADRA (Adaptive read ahead): Selects read policy.
        Cached, -Direct: Selects cache policy.
        [{CachedBadBBU|NoCachedBadBBU }]: Specifies whether to use write cache when the BBU is bad.
        szXXXXXXXX: Specifies the size for the virtual disk, where XXXX is a decimal number of Mbytes. However, the actual size of the virtual disk may be smaller, because the driver requires the number of blocks from the physical drives in each virtual disk to be aligned to the strip size. If multiple size options are specified, CT will configure the virtual disks in the order of the options entered in the command line. The configuration of a particular virtual disk will fail if the remaining size of the array is too small to configure the virtual disk with the specified size. This option can also be used to create a configuration on the free space available in the array.
        strpszM: Specifies the strip size, where the strip size values are 8, 16, 32, 64, 128, 256, 512, or 1024 Mega Bytes.
        HspType: If HspType is not mentioned it will be a global Hot Spare.
            Dedicated: The new hot spares will be dedicated to the virtual disk used in creating the configuration.
            EnclAffinity: Associates the hot spare to a selected enclosure.
        AfterLdX: This command is optional. By default, the application uses the first free slot available in the virtual disk. This option is valid only if the virtual disk is already used for configuration.
        FDE|CtrlBased: If the controller supports the security feature, this option enables FDE/controller-based encryption on the virtual disk.  
        [-Default| -Automatic| -None| -Maximum| -MaximumWithoutCaching]: If the controller supports power savings on virtual disks, these options specify the possible levels of power savings that can be applied on a virtual disk. 
        [-enblPI]: Allows creation of a PI-enabled configuration.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
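
Example (untested sketch; RAID level, policies and adapter index are placeholders, double-check against the syntax line above):
    MegaCli -CfgAllFreeDrv -r5 WB RA Direct -a0   # sweep all unconfigured-good drives into one RAID 5 VD on adapter 0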
=============================================================================
CFGCACHECADEADD
            CfgCacheCadeAdd
            -----------

Syntax: 
    megacli cfgcachecadeadd [rx] physdrv "[e0:s0,...]" {name ldnamestring} [wt|wb|forcedwb] [assign lx|l0,2,5..|lall] an|a0,1,2|aall 

Description: 
        This command is used to create CacheCade, which can be used as a secondary cache. 
        The possible parameters are:
        Rx: Specifies the RAID level.
        Physdrv[E0:S0,...]: Specifies the physical drive enclosure/slot numbers to construct a disk array.
        WT (Write through), WB (Write back), ForcedWB (Forced Write back): Selects write policy.
        [-assign -LX|L0,2,5..|LALL]: Specifies the Virtual disk that is to be cached.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
CFGCACHECADEDEL
            CfgCacheCadeDel
            --------

Syntax: 
    megacli cfgcachecadedel lx|l0,2,5...|lall an|a0,1,2|aall

Description: 
        Command deletes the specified CacheCade on the selected adapter(s).
        Multiple or all CacheCades can be deleted.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
CFGCACHECADEDSPLY
            CfgCacheCadeDsply
            --------

Syntax: 
    megacli cfgcachecadedsply an|a0,1,2|aall

Description: 
        Command displays the existing CacheCade configuration on the selected adapter.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
CFGCLR
            CfgClr
            ------

Syntax: 
    megacli cfgclr [force] an|a0,1,2|aall

Description: 
        Command clears the existing configuration on selected adapter.
        [-Force]: If specified, the configuration will be cleared even if there are dirty cache lines.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
CFGDSPLY
            CfgDsply
            --------

Syntax: 
    megacli cfgdsply an|a0,1,2|aall

Description: 
        Command displays the existing configuration on the selected adapter.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
CFGEACHDSKRAID0
            CfgEachDskRaid0
            ---------------

Syntax: 
    megacli cfgeachdskraid0 [wt|wb] [nora|ra|adra] [direct|cached] [cachedbadbbu|nocachedbadbbu] [strpszm] [fde|ctrlbased] [default| automatic| none| maximum| maximumwithoutcaching] [cache] [enblpi] an|a0,1,2|aall

Description: 
        Command configures each physical disk in unconfigured-good state as RAID 0 on the selected adapter.
        [-Cache]: If specified, the virtual disk will be cached using the Cachepool.
        [-enblPI]: Allows creation of a PI-enabled configuration.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
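
Example (untested sketch; write/read/cache policies and adapter index are placeholders):
    MegaCli -CfgEachDskRaid0 WB RA Direct -a0   # one single-drive RAID 0 VD per unconfigured-good disk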
=============================================================================
CFGFOREIGN
            CfgForeign
            ----------

Syntax: 
    megacli cfgforeign scan | [securitykey sssssssssss] an|a0,1,2|aall

Syntax: 
    megacli cfgforeign dsply [x] | [securitykey sssssssssss] an|a0,1,2|aall    

Syntax: 
    megacli cfgforeign preview [x] | [securitykey sssssssssss] an|a0,1,2|aall    

Syntax: 
    megacli cfgforeign import [x] | [securitykey sssssssssss] an|a0,1,2|aall    

Syntax: 
    megacli cfgforeign clear [x]|[securitykey sssssssssss] an|a0,1,2|aall    

Description: 
        Command manages foreign configurations. 
        The possible parameters are:
        Scan: Scans and displays available foreign configurations.
        Preview: Provides a preview of the imported foreign configuration. The foreign configuration ID (FID) is optional.
        Dsply: Displays the foreign configuration.
        Import: Imports the foreign configuration. The FID is optional.
        Clear [FID]: Clears the foreign configuration. The FID is optional.
        X: Index of the foreign configuration. Optional; all by default.
        SecurityKey: Security Key needs to be given if the foreign drive is locked. 
        If multiple keys are required to unlock all the PDs then this command needs to be 
        executed multiple times passing different security keys.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
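
Example (untested sketch of a typical import workflow; adapter index is a placeholder, add -SecurityKey if the drives are locked):
    MegaCli -CfgForeign -Scan -a0      # list foreign configurations found on the drives
    MegaCli -CfgForeign -Preview -a0   # preview what an import would create
    MegaCli -CfgForeign -Import -a0    # import all foreign configurations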
=============================================================================
CFGFREESPACEINFO
            CfgFreeSpaceinfo
            ----------------

Syntax: 
    megacli cfgfreespaceinfo an|a0,1,2|aall

Description: 
        Command displays all the free space available for configuration on the selected adapter(s). The information displayed includes the number of disk groups, the number of spans in each disk group, the number of free space slots in each disk group, the start block, and the size (in both blocks and megabytes) of each free space slot.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
CFGLDADD
            CfgLdAdd
            --------

Syntax: 
    megacli cfgldadd rx"[e0:s0,e1:s1,...]" [wt|wb] [nora|ra|adra] [direct|cached] [cachedbadbbu|nocachedbadbbu] [szxxx [szyyy ...]] [strpszm] [hsp[e0:s0,...]] [afterldx] | [fde|ctrlbased] [default| automatic| none| maximum| maximumwithoutcaching] [cache] [enblpi] [force] an

Description: 
        Command adds a RAID level 0, 1, 5, or 6 to a specified adapter. Even if no configuration is present, you have the option to write the configuration to the adapter.
        The possible parameters are:
        Rx[E0:S0,...]: Specifies the RAID level and the physical drive enclosure/slot numbers to construct a disk array.
        WT (Write through), WB (Write back): Selects write policy.
        NORA (No read ahead), RA (Read ahead), ADRA (Adaptive read ahead): Selects read
        policy.
        Cached, -Direct: Selects cache policy.
        [{CachedBadBBU|NoCachedBadBBU }]: Specifies whether to use write cache when the BBU is bad.
        szXXXXXXXX: Specifies the size for the virtual disk, where XXXX is a decimal number of Mbytes. However, the actual size of the virtual disk may be smaller, because the driver requires the number of blocks from the physical drives in each virtual disk to be aligned to the stripe size. If multiple size options are specified, CT will configure the virtual disks in the order of the options entered in the command line. The configuration of a particular virtual disk will fail if the remaining size of the array is too small to configure the virtual disk with the specified size. This option can also be used to create a configuration on the free space available in the array.
        strpszM: Specifies the strip size, where the strip size values are 8, 16, 32, 64, 128, 256, 512, or 1024 Mega Bytes.
        Hsp[E5:S5,...]: Creates hot spares when you create the configuration. The new hot spares will be dedicated to the virtual disk used in creating the configuration. This option does not allow you to create global hot spares. To create global hot spares, you must use the -PdHsp command with proper subcommands. User can also use this option to create a configuration on the free space available in the virtual disk. 
        AfterLdX: This command is optional. By default, the application uses the first free slot available in the virtual disk. This option is valid only if the virtual disk is already used for configuration.
        Force: This option forces the creation of the virtual disk in situations where the application would otherwise create it only with the user's consent.
        [-Cache]: If specified, the virtual disk will be cached using the Cachepool.
        [-enblPI]: Allows creation of a PI-enabled configuration.
        FDE|CtrlBased: If the controller supports the security feature, this option enables FDE/controller-based encryption on the virtual disk.  
        [-Default| -Automatic| -None| -Maximum| -MaximumWithoutCaching]: If the controller supports power savings on virtual disks, these options specify the possible levels of power savings that can be applied on a virtual disk. 

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
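
Example (untested sketch; enclosure:slot pairs, policies and adapter index are placeholders, look them up with PDList/EncInfo first):
    MegaCli -CfgLdAdd -r5 [32:0,32:1,32:2] WB RA Direct -a0   # RAID 5 VD from enclosure 32, slots 0-2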
=============================================================================
CFGLDDEL
            CfgLdDel
            --------

Syntax: 
    megacli cfglddel lx|l0,2,5...|lall [force] an|a0,1,2|aall

Description: 
        Command deletes the specified virtual disk(s) on the selected adapter(s).
        Multiple or all virtual disks can be deleted.
        [-Force]: If specified, the virtual disk(s) will be deleted even if there are dirty cache lines.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
CFGRESTORE
            CfgRestore
            ----------

Syntax: 
    megacli cfgrestore f filename an

Description: 
        Reads the configuration from the file and loads it on the adapter. MegaCLI can store or restore all read and write adapter properties, all read and write properties for virtual disks, and the RAID configuration including hot spares. 

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
        Note:
        - MegaCLI does not validate the setup when restoring the RAID configuration.
        - CfgSave option stores the configuration data and adapter properties in the file. Configuration data has only the device ID and sequence number information of the physical drives used in the configuration. The CfgRestore option will fail if the same device IDs of the physical drives are not present.
=============================================================================
CFGSAVE
            CfgSave
            -------

Syntax: 
    megacli cfgsave f filename an  

Description: 
        Command saves the configuration for the selected adapter(s) to the given filename.

Convention:   
          -aN         N specifies the adapter number for the command.
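
Example (untested sketch; file path and adapter index are placeholders):
    MegaCli -CfgSave -f /root/raid-a0.cfg -a0      # dump adapter 0's configuration to a file
    MegaCli -CfgRestore -f /root/raid-a0.cfg -a0   # load it back later, see the CfgRestore notes above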
=============================================================================
CFGSPANADD
            CfgSpanAdd
            ----------

Syntax: 
    megacli cfgspanadd r10 array0"[e0:s0,e1:s1] array1[e0:s0,e1:s1] [arrayx[e0:s0,e1:s1] ...]" [wt|wb] [nora|ra|adra] [direct|cached] [cachedbadbbu|nocachedbadbbu][strpszm][szxxx[szyyy ...]][afterldx]| [fde|ctrlbased] [default| automatic| none| maximum| maximumwithoutcaching] [enblpi] [force]  an

Syntax: 
    megacli cfgspanadd r50 array0"[e0:s0,e1:s1,e2:s2,...]" array1[e0:s0,e1:s1,e2:s2,...] [arrayx[e0:s0,e1:s1,e2:s2,...] ...] [wt|wb] [nora|ra|adra] [direct|cached] [cachedbadbbu|nocachedbadbbu][strpszm][szxxx[szyyy ...]][afterldx] [fde|ctrlbased] [default| automatic| none| maximum| maximumwithoutcaching] [enblpi] [force]  an

Syntax: 
    megacli cfgspanadd r60 array0"[e0:s0,e1:s1,e2:s2,e3,s3...]" array1[e0:s0,e1:s1,e2:s2,e3,s3...] [arrayx[e0:s0,e1:s1,e2:s2,...] ...] [wt|wb] [nora|ra|adra] [direct|cached] [cachedbadbbu|nocachedbadbbu][strpszm][szxxx[szyyy ...]][afterldx] [fde|ctrlbased] [default| automatic| none| maximum| maximumwithoutcaching] [enblpi] [force]  an

Description: 
        Command creates a RAID level 10, 50, or 60 (spanned) configuration from the specified arrays. Even if no configuration is present, you must use this option to write the configuration to the adapter.
        The possible parameters are:
        Rx: Specifies the RAID level.
        ArrayX[E0:S0,...]: Specifies the Array and the physical drive enclosure/slot numbers to construct a disk array.
        WT (Write through), WB (Write back): Selects write policy.
        NORA (No read ahead), RA (Read ahead), ADRA (Adaptive read ahead): Selects read
        policy.
        Cached, -Direct: Selects cache policy.
        [{CachedBadBBU|NoCachedBadBBU }]: Specifies whether to use write cache when the BBU is bad.
        szXXXXXXXX: Specifies the size for the virtual disk, where XXXX is a decimal number of Mbytes. However, the actual size of the virtual disk may be smaller, because the driver requires the number of blocks from the physical drives in each virtual disk to be aligned to the stripe size. If multiple size options are specified, CT will configure the virtual disks in the order of the options entered in the command line. The configuration of a particular virtual disk will fail if the remaining size of the array is too small to configure the virtual disk with the specified size. This option can also be used to create a configuration on the free space available in the array.
        strpszM: Specifies the strip size, where the strip size values are 8, 16, 32, 64, 128, 256, 512, or 1024 Mega Bytes.
        AfterLdX: This command is optional. By default, the application uses the first free slot available in the virtual disk. This option is valid only if the virtual disk is already used for configuration.
        Force: This option forces the creation of the virtual disk in situations where the application would otherwise create it only with the user's consent.
        FDE|CtrlBased: If the controller supports the security feature, this option enables FDE/controller-based encryption on the virtual disk.  
        [-Default| -Automatic| -None| -Maximum| -MaximumWithoutCaching]: If the controller supports power savings on virtual disks, these options specify the possible levels of power savings that can be applied on a virtual disk. 
        [-enblPI]: Allows creation of a PI-enabled configuration.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
        Note: 
        -   Multiple arrays are specified using the -ArrayX[E0:S0,...] option, Where X starts from 0.
        -   All of the arrays must have the same number of physical drives.
        -   At least two arrays must be provided. The order of options {WT |WB} {NORA | RA | ADRA} {Direct | Cached} is flexible.
        -   Command exits and does not create a configuration if the size(-szXXXXXXXX) or the AfterLd option is specified but the controller does not support slicing in the spanned arrays.
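
Example (untested sketch; enclosure:slot pairs, policies and adapter index are placeholders):
    MegaCli -CfgSpanAdd -r10 -Array0[32:0,32:1] -Array1[32:2,32:3] WB RA Direct -a0   # RAID 10 spanning two mirrored arrays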
=============================================================================
CHANGESECURITYKEY
            ChangeSecurityKey
            -----------------

Syntax: 
    megacli changesecuritykey oldsecuritykey sssssssssss | securitykey sssssssssss| [passphrase sssssssssss] | [keyid kkkkkkkkkkk] an

Description: 
        Command changes security key on specified controller.
        The possible parameters are:
        OldSecurityKey: It is the old security key used to create security feature on specified controller. 
        SecurityKey: This security key will replace the old security.
        Passphrase: This pass phrase will replace the old pass phrase.
        KeyID: Security key Id of given controller.

Convention:   
          -aN         N specifies the adapter number for the command.
        Note: 
        -   Security key is mandatory and pass phrase is optional.
        -   Security key and pass phrase have special requirements.
        Security key & pass phrase should have 8 - 32 chars, case-sensitive; 1 number, 1 lowercase letter, 1 uppercase letter, 1 non-alphanumeric character (no spaces).
       - On Unix-based systems, if the character '!' is used as one of the input characters in the security key or pass phrase, it must be preceded by a backslash character ('\'). 
=============================================================================
CREATESECURITYKEY
            CreateSecurityKey
            -----------------

Syntax: 
    megacli createsecuritykey securitykey sssssssssss | [passphrase sssssssssss] |[keyid kkkkkkkkkkk] an 

Description: 
        Command enables security feature on specified controller.
        The possible parameters are:
        SecurityKey: Security key will be used to generate lock key when drive security is enabled.
        Passphrase: Pass phrase to provide additional security.
        KeyID: Security key Id.

Convention:   
          -aN         N specifies the adapter number for the command.
        Note: 
        -   Security key is mandatory and pass phrase is optional.
        -   Security key and pass phrase have special requirements.
        Security key & pass phrase should have 8 - 32 chars, case-sensitive; 1 number, 1 lowercase letter, 1 uppercase letter, 1 non-alphanumeric character (no spaces).
       - On Unix-based systems, if the character '!' is used as one of the input characters in the security key or pass phrase, it must be preceded by a backslash character ('\'). 
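
Example (untested sketch; the key is a made-up placeholder that just satisfies the complexity rules above, adapter index is a placeholder too):
    MegaCli -CreateSecurityKey -SecurityKey 'Xyz9secure@Key' -a0   # enable drive security on adapter 0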
=============================================================================
DESTROYSECURITYKEY
            DestroySecurityKey
            ------------------

Syntax: 
    megacli destroysecuritykey | [force] an

Description: 
        Command destroys the key completely on specified controller.

Convention:   
          -aN         N specifies the adapter number for the command.
        Force: This option forces destruction of the security key; without it, the CLI warns that destroying the key will result in data corruption and quits.
=============================================================================
DIRECTPDMAPPING
            DirectPdMapping   
            ---------------

Syntax: 
    megacli directpdmapping enbl|dsbl|dsply an|a0,1,2|aall 

Description: 
        Command sets the mapping mode of physical drive.
        The possible parameters are:
        Enbl: Enables Direct physical drive mapping mode. 
        Dsbl: Disables Direct physical drive mapping mode. 
        Dsply: Displays current state of direct physical drive mapping. 

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
DISCARDPRESERVEDCACHE
            DiscardPreservedCache    
            ---------------------

Syntax: 
    megacli discardpreservedcache lx|l0,1,2|lall force an|a0,1,2|aall 

Description: 
        Command discards the pinned (preserved) cache of the selected virtual drive(s).

Convention:   
          -force      The force option must be specified when preserved cache associated with offline virtual drives is to be discarded.
                      Offline virtual drives will be deleted when the preserved cache associated with them is discarded.
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
DPMSTAT
            DPMStat
            -------

Syntax: 
    MegaCli -DpmStat -Dsply {lct | hist | ra | ext} [-physdrv[E0:S0]] -aN|-a0,1,2|-aALL
    MegaCli -DpmStat -Clear {lct | hist | ra | ext} -aN|-a0,1,2|-aALL

Description: 
        These commands display or clear the drive performance statistics on the controller.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
ELF
            GetSafeId   
          ----------------

Syntax: 
    megacli elf getsafeid an|a0,1,2|aall

Description: 
        Displays the Safe ID of the controller

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
            ControllerFeatures   
          ----------------

Syntax: 
    megacli elf controllerfeatures an|a0,1,2|aall

Description: 
        Displays the Advanced Software Options that are enabled on the controller including the ones in trial mode

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
            ApplyKey   
          ----------------

Syntax: 
    megacli elf applykey key <val> [preview] an|a0,1,2|aall

Description: 
        Applies the Activation Key either in preview mode or in real mode

Convention:   
          -Preview - optional parameter, provides the preview of the Advanced Software Option(s) that gets activated or deactivated after applying the Activation key.
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
            TransferToVault   
          ----------------

Syntax: 
    megacli elf transfertovault an|a0,1,2|aall

Description: 
        Transfers the Advanced Software Options from NVRAM to keyvault

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
            DeactivateTrialKey   
          ----------------

Syntax: 
    megacli elf deactivatetrialkey an|a0,1,2|aall

Description: 
        Deactivates the trial key

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
            ReHostInfo   
          ----------------

Syntax: 
    megacli elf rehostinfo an|a0,1,2|aall

Description: 
        Displays the re-host information; if re-hosting is necessary, it also displays the controller and KeyVault serial numbers.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
            ReHostComplete   
          ----------------

Syntax: 
    megacli elf rehostcomplete an|a0,1,2|aall

Description: 
        This notifies the Controller that Re-Host is being done

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
ENCINFO
            EncInfo
            -------

Syntax: 
    megacli encinfo an|a0,1,2|aall

Description: 
        Command displays information about the enclosure(s) connected to the selected adapter(s).

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
ENCSTATUS
            EncStatus
            ----------

Syntax: 
    megacli encstatus an|a0,1,2|aall

Description: 
        Command displays status of the enclosure connected to the selected adapter(s).

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
FWTERMLOG
            FwTermLog
            ---------

Syntax: 
    megacli fwtermlog bbuoff|bbuofftemp|bbuon|bbuget|dsply|clear an|a0,1,2|aall

Description: 
        Sets BBU terminal logging options. The following settings can be selected on a single adapter, multiple adapters, or all adapters: 
        The possible parameters are:
        Bbuoff: Turns off the BBU for firmware terminal logging. To turn off the BBU for logging, shut down system or turn off power to the system after running the command. 
        BbuoffTemp: Temporarily turns off the BBU for TTY (firmware terminal) logging. The battery will be turned on at the next reboot. 
        Bbuon: Turns on the BBU for TTY (firmware terminal) logging.
        BbuGet: Displays the current BBU settings for TTY logging.
        Dsply: Displays the TTY log (firmware terminal log) entries with details on the given adapters. The information shown consists of the total number of entries available at a firmware side. 
        Clear: Clears the TTY log.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You can 
                      select two or more adapters in this manner.
          -aALL       Specifies the command is for all adapters.
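
Example (untested sketch; output path and adapter index are placeholders):
    MegaCli -FwTermLog -Dsply -a0 > /tmp/fwterm-a0.log   # dump the firmware terminal log to a file
    MegaCli -FwTermLog -Clear -a0                        # clear the log afterwards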
=============================================================================
GETBBTENTRIES
            GetBbtEntries
            ------

Syntax: 
    megacli getbbtentries lx|l0,1,2|lall an|a0,1,2|aall

Description: 
        Command displays information about the Bad Block Entries of virtual disk(s) on the selected adapter(s).

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
GETKEYID
            GetKeyID
            --------

Syntax: 
    megacli getkeyid [physdrv [e0:s0]] an

Description: 
        Gets the security key Id of specified physical disk drive on given adapter.

Convention:   
          -aN         N specifies the adapter number for the command.
          -PhysDrv[E0:S0,E1:S1....]  List of physical drives, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
=============================================================================
GETLDEXPANSIONINFO
            getLdExpansionInfo
            ------

Syntax: 
    megacli getldexpansioninfo lx|l0,1,2|lall an|a0,1,2|aall 

Description: 
        Command displays information on how much this particular VD can grow in size. The output displays Size available to grow within Array and Size available to grow within Disks that belong to the Array.

Convention:   
          -lN         N specifies the virtual/logical drive number for the command.
          -l0,1,2     Specifies the command is for virtual/logical drive number 0, 1, and 2. More than one virtual/logical drive number can be selected.
          -lALL       Specifies the command is for all virtual/logical drives.
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
GETPRESERVEDCACHELIST
            GetPreservedCacheList    
            ---------------------

Syntax: 
    megacli getpreservedcachelist an|a0,1,2|aall 

Description: 
        Command displays the list of virtual drives that have pinned cache.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
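
Example (untested sketch; VD and adapter indices are placeholders):
    MegaCli -GetPreservedCacheList -a0       # list VDs with pinned cache
    MegaCli -DiscardPreservedCache -L2 -a0   # then drop the pinned cache of e.g. VD 2; add -force for offline VDs (see DISCARDPRESERVEDCACHE above)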
=============================================================================
LDBBMCLR
            LDBBMClr
            --------

Syntax: 
    megacli ldbbmclr lx|l0,1,2,...|lall an|a0,1,2|aall

Description: 
        Command clears the LDBBM table entries for the logical drive(s) on the given adapter(s).

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
LDBI
            LDBI
            ----

Syntax: 
    megacli ldbi enbl|dsbl|getsetting|abort|suspend|resume|showprog|progdsply lx|l0,1,2|lall an|a0,1,2|aall

Description: 
        Command manages background initialization options. Single, multiple or all adapters can be selected.
        The possible parameters are:
        Enbl, Dsbl: Enables or disables the background initialization on the selected adapter(s).
        Suspend: Suspends an ongoing background initialization. 
        Resume: Resumes a suspended background initialization.
        ProgDsply: Displays an ongoing background initialization in a loop. This function completes only when all background initialization processes complete or you press a key to exit. 
        ShowProg: Displays the current progress value.
        GetSetting: Displays current background initialization setting (Enabled or Disabled).

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
LDCC
            LDCC
            ----

Syntax: 
    megacli ldcc {start [force]}|abort|suspend|resume|showprog|progdsply lx|l0,1,2|lall an|a0,1,2|aall

Description: 
        Command performs Check consistence operation on given virtual disk.
        The possible parameters are:
        Start: Starts a CC on the virtual disk(s), then displays the progress (optional) and time remaining.
        Suspend: Suspends an ongoing CC on the virtual disk(s). 
        Resume: Resumes a suspended CC on the virtual disk(s). 
        Abort: Aborts an ongoing CC on the virtual disk(s). 
        ShowProg: Displays a snapshot of an ongoing CC. 
        ProgDsply: Displays ongoing CC progress. The progress displays until at least one CC is completed or a key is pressed.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
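
Example: 
        Not part of the original help output; a typical consistency-check invocation with the usual dash-prefixed option spelling (LD/adapter selectors are placeholders) might look like:

    megacli -LDCC -Start -LAll -aAll
    megacli -LDCC -ShowProg -LAll -aAll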
=============================================================================
LDEXPANSION
            LDExpansion
            ------

Syntax: 
    megacli ldexpansion pn [dontexpandarray] lx|l0,1,2|lall an|a0,1,2|aall 

Description: 
        This command expands the VD by N percent if space is available. The space available for expansion of the VD is given by the command -getLdExpansionInfo.
        The option -dontExpandArray needs to be given if an increase in Array size is not required (i.e. the VD will not grow using the Size available to grow within Disks that belong to the Array).

Convention:   
          -lN         N specifies the virtual/logical drive number for the command.
          -l0,1,2     Specifies the command is for virtual/logical drive numbers 0, 1, and 2. More than one virtual/logical drive number can be selected.
          -lALL       Specifies the command is for all virtual/logical drives.
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
LDGETNUM
            LDGetNum
            --------

Syntax: 
    megacli ldgetnum an|a0,1,2|aall

Description: 
        Displays the number of virtual disks attached to the adapter. The return value is the number of virtual disks.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
LDGETPROP
            LDGetProp 
            ---------

Syntax: 
    megacli ldgetprop  cache | access | name | dskcache | pspolicy | consistency  lx|l0,1,2|lall  
                -aN|-a0,1,2|-aALL

Description: 
        Displays the cache and access policies of the virtual disk(s)
        The possible parameters are:
        Cache: Cached, Direct: Displays cache policy.
        WT (Write through), WB (Write back): Selects write policy.
        NORA (No read ahead), RA (Read ahead), ADRA (Adaptive read ahead): Selects read policy.
        Access: -RW, -RO, Blocked: Displays access policy.
        DskCache: Displays physical disk cache policy.
        PSPolicy: Displays the default & current power savings policy of the virtual disk.
        Consistency: Displays the LD consistency state.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
LDINFO
            LDInfo
            ------

Syntax: 
    megacli ldinfo lx|l0,1,2|lall an|a0,1,2|aall

Description: 
        Command displays information about the virtual disk(s) on the selected adapter(s). This information includes the name, RAID level, RAID level qualifier, size in megabytes, state, strip size, number of drives, span depth, cache policy, access policy, and ongoing activity progress, if any, including initialization, background initialization, consistency check, and reconstruction.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
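
Example: 
        Added for convenience, not from the help dump; listing all virtual drives on all adapters is usually done like this:

    megacli -LDInfo -LAll -aAll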
=============================================================================
LDINIT
            LDInit
            ------

Syntax: 
    megacli ldinit {start [full]}|abort|showprog|progdsply lx|l0,1,2|lall an|a0,1,2|aall

Description: 
        Command performs initialization operation on given virtual disk.
        The possible parameters are:
        Start: Starts the initialization (writing 0s) on the virtual disk(s) and displays the progress (this is optional). The fast initialization option initializes the first and last 8 Mbyte areas on the virtual disk. The full option allows you to initialize the entire virtual disk. 
        Abort: Aborts the ongoing initialization on the LD(s).
        ShowProg: Displays the snapshot of the ongoing initialization, if any.
        ProgDsply: Displays the progress of the ongoing initialization. The routine continues to display the progress until at least one initialization is completed or a key is pressed.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
LDJOINMIRROR
            LDJoinMirror
            ------

Syntax: 
    megacli ldjoinmirror datasrc <val>[force] lx|l0,1,2,...|lall an|a0,1,2|aall

Description: 
        Command joins the VD with its mirror.

Convention:   
          -DataSrc <val>        If val is 0, data is copied from the existing VD to the drives; if val is 1, data is copied from the drives to the VD.
          -force        Forces the copying of data from the drives to the VD; otherwise the CLI warns that copying data from the drives to the VD will result in data corruption, and quits.
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
LDMAKESECURE
            LDMakeSecure
            ------

Syntax: 
    megacli ldmakesecure lx|l0,1,2|lall an|a0,1,2|aall

Description: 
        This operation will secure all the virtual drives that are a part of drive group.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
LDPDINFO
            LdPdInfo
            --------

Syntax: 
    megacli ldpdinfo an|a0,1,2|aall

Description: 
        Command displays information about the present virtual disk(s) and physical disk drive(s) on the selected adapter(s). This includes the number of virtual disks, the RAID level of the virtual disks, and physical drive size information, which includes raw size, coerced size, uncoerced size, and the SAS address.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
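
Example: 
        A hedged example, not from the help output; dumping the VD-to-PD mapping for every adapter:

    megacli -LdPdInfo -aAll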
=============================================================================
LDRECON
            LDRecon
            -------

Syntax: 
    megacli ldrecon {start rx [{add | rmv} physdrv "[e0:s0,...]"]}|showprog|progdsply lx an

Description: 
        Command controls and manages virtual disk reconstruction. The following are the virtual disk reconstruction settings you can select on a single adapter:
        The possible parameters are:
        Start: Starts a reconstruction of the selected virtual disk to a new RAID level.
        -   Add: Adds listed physical disks to the virtual disk and starts reconstruction on the selected virtual disk.
        -   Rmv: Removes one physical disk from the existing virtual disks and starts a reconstruction.
        ShowProg: Displays a snapshot of the ongoing reconstruction process. 
        -R0|-R1|-R5: Changes the RAID level of the virtual disk when you start reconstruction. You may need to add or remove a physical drive to make this possible. 
        ProgDsply: Allows you to view the ongoing reconstruction. The routine continues to display progress until at least one reconstruction is completed or a key is pressed.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
          -PhysDrv[E0:S0,E1:S1....]  List of physical drives, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
=============================================================================
LDSETPOWERPOLICY
            LdSetPowerPolicy   
          ----------------

Syntax: 
    megacli ldsetpowerpolicy default| automatic| none| maximum| maximumwithoutcaching
        -Lx|-L0,1,2|-Lall -aN|-a0,1,2|-aALL 

Description: 
        Sets the power saving level on the virtual disk.

Convention:   
          -Lx         x specifies the LD number for the command and the LD has to be a repository.
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
LDSETPROP
            LDSetProp 
            ---------

Syntax: 
    megacli ldsetprop  {name ldnamestring} | rw|ro|blocked|removeblocked | wt|wb|ra|nora|adra | dsblpi | cached|direct | endskcache|disdskcache | cachedbadbbu|nocachedbadbbu lx|l0,1,2|lall an|a0,1,2|aall

Description: 
        Command to change virtual disk properties on specified controller.
        The possible parameters are:
        WT (Write through), WB (Write back): Selects write policy.
        NORA (No read ahead), RA (Read ahead), ADRA (Adaptive read ahead): Selects read policy.
        Cached/Direct: Selects cache policy. 
        CachedBadBBU|NoCachedBadBBU: Specifies whether to use write cache when the BBU is bad.
        RW, -RO, Blocked: Selects access policy. 
        EnDskCache: Enables disk cache. 
        DisDskCache: Disables disk cache.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
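
Example: 
        Hedged examples (not from the help output, LD/adapter numbers are placeholders); switching the write policy and the physical disk cache would typically look like:

    megacli -LDSetProp WB -L0 -a0
    megacli -LDSetProp -EnDskCache -LAll -aAll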
=============================================================================
LDVIEWMIRROR
            LDViewMirror
            ------

Syntax: 
    megacli ldviewmirror lx|l0,1,2|lall an|a0,1,2|aall

Description: 
        Command displays information about the mirror associated with the VD.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
PDCLEAR
            PDClear
            -------

Syntax: 
    megacli pdclear start|stop|showprog |progdsply physdrv "[e0:s0,e1:s1,...]" an|a0,1,2|aall

Description: 
        Manages physical disk initialization or displays initialization progress on a single adapter, multiple adapters, or all adapters:
        The possible parameters are:
        Start: Starts initialization on the selected physical disk drive(s).
        Stop: Stops an ongoing initialization on the selected physical disk drive(s). 
        ShowProg: Displays the current progress percentage and time remaining for the initialization. This option is useful for running the application through scripts. 
        ProgDsply: Displays the ongoing clear progress. The routine continues to display the initialization progress until at least one initialization is completed or a key is pressed.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
          -PhysDrv[E0:S0,E1:S1....]  List of physical drives, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
=============================================================================
PDCPYBK
            PDCpyBk
            --------

Syntax: 
    megacli pdcpybk start physdrv "[e0:s0,e1:s1]" an|a0,1,2|aall

Syntax: 
    megacli pdcpybk stop|suspend|resume|showprog|progdsply physdrv "[e0:s0]" an|a0,1,2|aall

Description: 
        Command performs the copy back operation on given physical drive.
        The possible parameters are:
        Start:  Initializes the copy back operation on physical drive.
        Suspend: Suspend the copy back operation on physical drive.
        Resume: Resume the copy back operation on physical drive.
        Stop: Stops the copy back operation on physical drive.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
          -PhysDrv[E0:S0,E1:S1]  List of physical drives, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
           E0:S0 - Specifies the source physical drive 
           E1:S1 - Specifies the destination physical drive 
=============================================================================
PDFWDOWNLOAD
            PdFwDownload
            ------------

Syntax: 
    megacli pdfwdownload [offline] [forceactivate] {[satabridge] physdrv [0:1] }|{encdevid[devid1]} f <filename> an|a0,1,2|aall 

Description: 
        Flashes the firmware from the file specified at the command line. Firmware files used to flash the physical drive or enclosure can be of any format. The command assumes that the user is providing a valid firmware image and flashes it as-is; it is up to the physical drive or enclosure to do error checking. 
        -forceactivate option should be used only if the target device is an enclosure.
        -offline option should be used only if the target is enclosure firmware and the enclosure type is Shea or MileHigh. The firmware file extension should be .esm if this option is used. This option forces the application to flash the enclosure firmware using the offline method and is supported only in the DOS version of the command tool. 
        -SataBridge option must be used if the target device is Alta. 

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
          -PhysDrv[E0:S0]  Physical drive, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
            EncdevId[devId1] deviceId of the enclosure.  
=============================================================================
PDGETMISSING
            PdGetMissing
            ------------

Syntax: 
    megacli pdgetmissing an|a0,1,2|aall

Description: 
        Command displays all the physical disk drive(s) in missing status. 

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
PDGETNUM
            PDGetNum
            --------

Syntax: 
    megacli pdgetnum an|a0,1,2|aall

Description: 
        Displays the total number of physical disk drives attached to an adapter. Drives can be attached directly or through enclosures. The return value is the number of physical disk drives.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You can 
                      select two or more adapters in this manner.
          -aALL       Specifies the command is for all adapters.
=============================================================================
PDHSP
            PDHSP
            ----------

Syntax: 
    megacli pdhsp {set [dedicated [arrayn|array0,1,2...]] [enclaffinity] 
        [-nonRevertible]} |-Rmv -PhysDrv[E0:S0,E1:S1,...] -aN|-a0,1,2|-aALL

Description: 
        Changes the physical disk drive state (as it relates to hot spares) and associates the drive to an enclosure and virtual disk on a single adapter, multiple adapters, or all adapters.
        The possible parameters are:
        Set: Changes the physical disk drive state to dedicated hot spare for the enclosure. 
        Rmv: Changes the physical drive state to ready (removes the hot spare).
        EnclAffinity: Associates the hot spare to a selected enclosure.
        Array0: Dedicates the hot spare to a specific virtual disk.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
          -PhysDrv[E0:S0,E1:S1....]  List of physical drives, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
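
Example: 
        Not from the help output; setting a drive as a global hot spare (the enclosure:slot value is a placeholder taken from a PDList run) would commonly be:

    megacli -PDHSP -Set -PhysDrv [32:3] -a0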
=============================================================================
PDINFO
            pdInfo
            ------

Syntax: 
    megacli pdinfo physdrv "[e0:s0,e1:s1,...]" an|a0,1,2|aall

Description: 
        Provides information of physical disk drives connected to the enclosure and adapter slot. This includes information such as the enclosure number, slot number, device ID, sequence number, drive type, size (if a physical drive), foreign state, firmware state, and inquiry data.  For SAS devices, this includes additional information such as the SAS address of the drive. For SAS expanders, this includes additional information such as the number of devices connected to the expander.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You can 
                      select two or more adapters in this manner.
          -aALL       Specifies the command is for all adapters.
          -PhysDrv[E0:S0,E1:S1....]  List of physical drives, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
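
Example: 
        Added example (enclosure:slot and adapter number are placeholders); showing the details of a single drive:

    megacli -PDInfo -PhysDrv [32:2] -a0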
=============================================================================
PDINSTANTSECUREERASE
            PDInstantSecureErase
            -------------

Syntax: 
    megacli  pdinstantsecureerase physdrv "[e0:s0,e1:s1,...]" | [force] an|a0,1,2|aall

Description: 
        Command erases the specified drive's security configuration so that the drive can be used on the given controller. This operation removes the data currently on the drive.   

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
          -PhysDrv[E0:S0,E1:S1....]  List of physical drives, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
=============================================================================
PDLIST
            PDList
            ------

Syntax: 
    megacli pdlist an|a0,1,2|aall

Description: 
        Displays information about all physical disk drives and other devices connected to the selected adapter(s). This includes information such as the drive type, size (if a physical disk drive), serial number, and firmware version of the device. For SAS devices, this includes additional information such as the SAS address of the device. For SAS expanders, this includes additional information such as the number of drives connected to the expander.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You can 
                      select two or more adapters in this manner.
          -aALL       Specifies the command is for all adapters.
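
Example: 
        Not part of the help dump; the usual way to list every drive on every adapter, assuming the dash-prefixed option spelling:

    megacli -PDList -aAll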
=============================================================================
PDLOCATE
            PdLocate
            --------

Syntax: 
    megacli pdlocate {[start] | stop } physdrv "[e0:s0,e1:s1,...]" an|a0,1,2|aall

Description: 
        Locates the physical disk drive(s) for the selected adapter(s) and activates the physical disk activity LED.
        The possible parameters are:
        Start:  Activates LED on the selected physical disk drive(s).
        Stop: Stops active LED on the selected physical disk drive(s). 

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
          -PhysDrv[E0:S0,E1:S1....]  List of physical drives, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
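
Example: 
        A hedged example (placeholders for enclosure:slot and adapter); blinking and un-blinking a drive LED:

    megacli -PDLocate -Start -PhysDrv [32:2] -a0
    megacli -PDLocate -Stop -PhysDrv [32:2] -a0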
=============================================================================
PDMAKEGOOD
            PDMakeGood
            ----------

Syntax: 
    megacli pdmakegood physdrv "[e0:s0,e1:s1,...]" | [force] an|a0,1,2|aall

Description: 
        Command changes the physical disk drive state to Ready.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
          -PhysDrv[E0:S0,E1:S1....]  List of physical drives, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
        Force:  This option will force PD state to be Unconfigured Good and is applicable only if the previous state is SYSTEM.
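
Example: 
        Not from the help output; putting a drive back into the Unconfigured Good / Ready state (enclosure:slot and adapter are placeholders):

    megacli -PDMakeGood -PhysDrv [32:2] -a0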
=============================================================================
PDMAKEJBOD
            PDMakeJBOD
            ----------

Syntax: 
    megacli pdmakejbod physdrv "[e0:s0,e1:s1,...]" an|a0,1,2|aall

Description: 
        Command changes the physical disk drive state to JBOD.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
          -PhysDrv[E0:S0,E1:S1....]  List of physical drives, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
=============================================================================
PDMARKMISSING
            PdMarkMissing
            -------------

Syntax: 
    megacli pdmarkmissing physdrv "[e0:s0,e1:s1,...]" an|a0,1,2|aall

Description: 
        Command marks the configured physical disk drive as missing for the selected adapter.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
          -PhysDrv[E0:S0,E1:S1....]  List of physical drives, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
=============================================================================
PDOFFLINE
            PDOffline
            ----------

Syntax: 
    megacli pdoffline physdrv "[e0:s0,e1:s1,...]" an|a0,1,2|aall

Description: 
        Command changes the physical disk drive state to Offline.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You can 
                      select two or more adapters in this manner.
          -aALL       Specifies the command is for all adapters.
          -PhysDrv[E0:S0,E1:S1....]  List of physical drives, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
=============================================================================
PDONLINE
            PDOnline  
            --------

Syntax: 
    megacli pdonline  physdrv "[e0:s0,e1:s1,...]" an|a0,1,2|aall

Description: 
        Command changes the physical drive state to Online.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. You can 
                      select two or more adapters in this manner.
          -aALL       Specifies the command is for all adapters.
          -PhysDrv[E0:S0,E1:S1....]  List of physical drives, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
=============================================================================
PDPRPRMV
            PdPrpRmv
            --------

Syntax: 
    megacli pdprprmv [undo] physdrv "[e0:s0]" an|a0,1,2|aall

Description: 
        Command prepares unconfigured physical drive(s) for removal. The firmware will spin down this drive. The drive state is set to unaffiliated, which marks it as offline even though it is not a part of configuration. 
        The possible parameter is:
        Undo: undoes this operation. If you select undo, the firmware marks this physical disk as unconfigured good.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
          -PhysDrv[E0:S0,E1:S1....]  List of physical drives, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
=============================================================================
PDRBLD
            PDRbld
            ------

Syntax: 
    megacli pdrbld start|stop|suspend|resume|showprog |progdsply physdrv "[e0:s0,e1:s1,...]" an|a0,1,2|aall

Description: 
        Manages a physical disk rebuild or displays the rebuild progress on a single adapter, multiple adapters, or all adapters. 
        The possible parameters are:
        Start: Starts a rebuild on the selected physical drive(s) and displays the rebuild progress (optional).
        Suspend: Suspend a rebuild on the selected physical drive(s).
        Resume: Resume a rebuild on the selected physical drive(s).
        Stop: Stops an ongoing rebuild on the selected physical drive(s). 
        ShowProg: Displays the current progress percentage and time remaining for the rebuild. This option is useful for running the application through scripts. 
        ProgDsply: Displays the ongoing rebuild progress. This routine displays the rebuild progress until at least one rebuild is completed or a key is pressed.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
          -PhysDrv[E0:S0,E1:S1....]  List of physical drives, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
        Note: The physical disk must meet the size requirements before it can be rebuilt, and it must be part of an array.
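
Example: 
        Added example (enclosure:slot and adapter numbers are placeholders); starting and then watching a rebuild:

    megacli -PDRbld -Start -PhysDrv [32:2] -a0
    megacli -PDRbld -ShowProg -PhysDrv [32:2] -a0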
=============================================================================
PDREPLACEMISSING
            PdReplaceMissing
            ----------------

Syntax: 
    megacli pdreplacemissing physdrv "[e0:s0]" array A row B an

Description: 
        Replaces the configured physical disk drives that are identified as missing and then starts an automatic rebuild.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
        Note: Specified array Index/row must be a missing drive. Automatic rebuild will start.
          -PhysDrv[E0:S0,E1:S1....]  List of physical drives, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
=============================================================================
PERFMON
            Perfmon
               ----------

Syntax: 
    megacli perfmon {start interval <val>} | {stop} | {getresults f <filename>} an 

Description: 
        This command shows the performance data.
        Interval: Performance data capture time in minutes. 
=============================================================================
PHYERRORCOUNTERS
            PhyErrorCounters    
            ----------------

Syntax: 
    megacli phyerrorcounters an|a0,1,2|aall   

Description: 
        Command gets the PHY error counter information for the PHYs on the adapter(s).

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
PHYINFO
            PhyInfo
            -------

Syntax: 
    megacli phyinfo phym an|a0,1,2|aall

Description: 
        Command displays PHY connection information for physical PHY M on the adapter(s).

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
PHYSETLINKSPEED
            PhySetLinkSpeed
            -------

Syntax: 
    megacli physetlinkspeed phym speed an|a0,1,2|aall

Description: 
        Command sets PHY link speed for physical PHY M on the adapter(s).
        Where speed can be 0 (No Limit), 1 (1.5 Gb/s), 2 (3 Gb/s), 4 (6 Gb/s).

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
=============================================================================
SECUREERASE
            SecureErase
            -----------

Syntax: 
    megacli secureerase 
    Start[
        Simple|
        [Normal   [ |ErasePattern ErasePatternA|ErasePattern ErasePatternA ErasePattern ErasePatternB]]|
        [Thorough [ |ErasePattern ErasePatternA|ErasePattern ErasePatternA ErasePattern ErasePatternB]]]
    | Stop
    | ShowProg
    | ProgDsply 
    [-PhysDrv [E0:S0,E1:S1,...] | -Lx|-L0,1,2|-LALL] -aN|-a0,1,2|-aALL

Description: 
        Securely erases data on non-SEDs and unsecured VDs 
        The possible parameters are:
        Start: Starts Secure Erase on the selected physical drive(s) or virtual drive(s).
        Simple|Normal|Thorough: These are the erase types.
        ErasePattern: The pattern for erasing.
        ErasePatternA|ErasePatternB: This is an 8-bit binary pattern for erasing (example: 01001101).
        Stop: Stops the ongoing Secure Erase on the selected physical drive(s) or virtual drive(s). 
        ShowProg: Displays the snapshot of ongoing SecureErase.
        ProgDsply: Displays the ongoing SecureErase progress. This routine displays the SecureErase progress until at least one SecureErase is completed or a key is pressed.

Convention:   
          -aN         N specifies the adapter number for the command.
          -a0,1,2     Specifies the command is for adapters 0, 1, and 2. More than one adapter can be selected.
          -aALL       Specifies the command is for all adapters.
          -PhysDrv[E0:S0,E1:S1....]  List of physical drives, E specifies enclosure 
                     id and S specifies Slot number of physical drive.  
          -Lx         x specifies the LD number.
=============================================================================
SETKEYID
            SetKeyID
            --------

Syntax: 
    megacli setkeyid keyid kkkkkkkkkkk an

Description: 
        Command sets the security key Id on given adapter.
        The parameters are:
        KeyID: Security key Id of given controller.

Convention:   
          -aN         N specifies the adapter number for the command.
=============================================================================
SHOWSUMMARY
            ShowSummary   
          ----------------

Syntax: 
    megacli showsummary [f filename] an

Description: 
        Displays the summary of all the important information about the controller

Convention:   
          -aN         N specifies the adapter number for the command.
=============================================================================
V
            Version 
            -------

Syntax: 
    megacli v

Description: 
        Command displays the version number of the MegaCLI utility.
=============================================================================
VERIFYSECURITYKEY
            VerifySecurityKey
            -----------------

Syntax: 
    megacli verifysecuritykey securitykey sssssssssss an

Description: 
        Command validates the given security key with the security key of given controller. 
        The parameter is:
        SecurityKey: Security key that needs to be verified. 

Convention:   
          -aN         N specifies the adapter number for the command.
=============================================================================
VERSION
            Version   
          ------------------

Syntax: 
    megacli version cli|ctrl|driver|pd an|a0,1,2|aall

Description: 
         Displays the version corresponding to the option.
Note: -Driver option is not supported for MegaCliKL application.
=============================================================================
XD ADDCDEV
            AddCdev command
            ---------------

Syntax: 
    megacli xd addcdev <devlist> | force

Description: 
          This command adds the given cache devices to the cache group.
              <devList>  List of devices separated by ":" (without leading/trailing space/tab).
              -force     Force adding device with filesystem / MBR / swap / LVM2.
=============================================================================
XD ADDVD
            AddVd command
            -------------

Syntax: 
    megacli xd addvd <devlist>

Description: 
          This command adds the given virtual drives to the cache group.
              <devList>  List of devices separated by ":" (without leading/trailing space/tab).
=============================================================================
XD APPLYACTIVATIONKEY
            ApplyActivationKey command
            ---------------------------

Syntax: 
    megacli xd applyactivationkey <key> in 

Description: 
          This command applies an Activation Key to a WarpDrive. 
              <key>     Activation key
              -iN       Applies the Activation key to the Nth WarpDrive. N is an index of a WD from 
                        the WD list(listed by "MegaCli64 XD -WarpDriveInfo -iALL" command).
=============================================================================
XD CDEVLIST
            CdevList command
            ----------------

Syntax: 
    megacli xd cdevlist | configured | unconfigured

Description: 
          This command lists configured and unconfigured cache devices.
          Without any option this will list both configured and unconfigured devices.
          The information displayed are: Device Node, WWN, Capacity.
          The capacity of the device is displayed in terms of blocks.
              -configured    Lists only configured cache devices.
              -unconfigured  Lists only unconfigured cache devices.
=============================================================================
XD CONFIGINFO
            ConfigInfo command
            ------------------

Syntax: 
    megacli xd configinfo

Description: 
          This command displays information about XD driver.
=============================================================================
XD FETCHSAFEID
            FetchSafeId command
            --------------------

Syntax: 
    megacli xd fetchsafeid in|iall 

Description: 
          This command displays the Safe ID of a WarpDrive.
              -iN       Displays the SafeID of the Nth WarpDrive. N is an index of a WD from 
                        the WD list(listed by "MegaCli64 XD -WarpDriveInfo -iALL" command).
=============================================================================
XD ONLINEVD
            OnlineVd command
            -----------------

Syntax: 
    megacli xd onlinevd <devlist> 

Description: 
          This command reconfigures a VD which is in the Ready-for-Online state.
=============================================================================
XD PERFSTATS
            PerfStats command
            -----------------

Syntax: 
    megacli xd perfstats

Description: 
          This command displays information about XD performance statistics.
=============================================================================
XD REMCDEV
            RemCdev command
            ---------------

Syntax: 
    megacli xd remcdev <devlist>

Description: 
          This command removes the given cache devices from the cache group.
              <devList>  List of devices separated by ":" (without leading/trailing space/tab).
=============================================================================
XD REMVD
            RemVd command
            -------------

Syntax: 
    megacli xd remvd <devlist>

Description: 
          This command removes the given virtual drives from the cache group.
              <devList>  List of devices separated by ":" (without leading/trailing space/tab).
=============================================================================
XD VDLIST
            VdList command
            --------------

Syntax: 
    megacli xd vdlist | configured | unconfigured

Description: 
          This command lists configured and unconfigured virtual drives.
          Without any option this will list both configured and unconfigured devices.
          The information displayed are: Device Node, WWN, Capacity.
          The capacity of the device is displayed in terms of blocks.
              -configured    Lists only configured virtual drives.
              -unconfigured  Lists only unconfigured virtual drives.
=============================================================================
XD WARPDRIVEINFO
            WarpDriveInfo command
            ----------------------

Syntax: 
    megacli xd warpdriveinfo in|iall 

Description: 
          This command displays the list of WarpDrives connected to the system. 
      The information displayed includes controller ID and other information 
      about the WarpDrive controller. The index of a particular WarpDrive in the list 
      is needed for the PFK-related XD commands, i.e., 
      FetchSafeId and ApplyActivationKey. 
              -iN   Lists info about only Nth Warpdrive. N is an index of a WD from 
                        the WD list(listed by iALL).
              -iALL Lists info about all WarpDrives in the system.
proper bash history logging
posted on 2016-09-14 23:32

By appending these to your .bashrc:

HISTTIMEFORMAT="%s "
PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND ; }"'echo $$ $USER "$(history 1)" >> ~/.bash_history2'

you get a proper history, looking like the following, from all shells connected to a server, for each individual user:

root@fahi:~# cat .bash_history2
2786 root    91  1473887187 echo test
2786 root    92  1473887262 l
2786 root    93  1473887267 rm .bash_eternal_history 
2806 root    98  1473887148 tail -f .bash_eternal_history 
2806 root    99  1473888769 cat .bash_history2
2806 root   100  1473888788 lsblk
2821 root    98  1473887148 tail -f .bash_eternal_history 
2821 root    99  1473888794 history 
2821 root   100  1473888809 cat .bash_history2
2835 root   102  1473888809 cat .bash_history2
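
The columns are the shell PID ($$), the user, the history entry number, the epoch timestamp (that is what HISTTIMEFORMAT="%s " produces), and the command itself. To turn such a timestamp back into a readable date, plain date(1) does the job:

date -d @1473887187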

At first look this seems good, but some testing is still due to make sure there are no bad edge cases. One I've found so far: the last command seems to get repeated on login, and also when ctrl-c'ing commands. But this could have been caused by different shells being active with and without the PROMPT_COMMAND.

More alternatives can be found here.

xxd vs hexdump vs od for examining disk dumps from a VMware image
posted on 2016-09-14 21:49

the problem

The problem at hand: a VEEAM backup could not be restored, nor could it be opened from the GUI. So how to verify whether something could be rescued from it?

getting the disk image out of the VEEAM backup

VEEAM lets you extract single images from the complete backup with its Extract.exe utility. Simply locate the executable on disk and start it without parameters. You are then prompted for the full path to the complete .vbk backup file; afterwards select the image you want to extract.

first look at the disk dump

After copying the folder with all the extracted contents onto a Linux box, the fun could start.

  • The VMware image is in the `diskname-###.vmdk' file.
  • .vmdk is the disk configuration file.
  • .nvram is the virtual machine's BIOS.
  • .vmx is the primary configuration file.
  • .vmxf is supplemental configuration.

examining the disk image

This is done easiest through parted, showing the size once in sectors (which helps when using dd later and skipping over the first x sectors), and once in bytes (for the offset in losetup, which is easier than dd-skipping around).

Sectors:

root@workstation:/home/sjas/ftp# parted my_server-flat.vmdk u s p
Error: Can't have a partition outside the disk!
Ignore/Cancel? i                                                          
Error: Can't have a partition outside the disk!
Ignore/Cancel? i                                                          
Model:  (file)
Disk /home/sjas/ftp/my_server-flat.vmdk: 83869185s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start     End        Size       Type     File system     Flags
 1      2048s     8390655s   8388608s   primary  linux-swap(v1)
 2      8390656s  83886079s  75495424s  primary                  boot

Bytes:

root@workstation:/home/sjas/ftp# parted my_server-flat.vmdk u b p
Error: Can't have a partition outside the disk!
Ignore/Cancel? i                                                          
Error: Can't have a partition outside the disk!
Ignore/Cancel? i                                                          
Model:  (file)
Disk /home/sjas/ftp/my_server-flat.vmdk: 42941022720B
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start        End           Size          Type     File system     Flags
 1      1048576B     4296015871B   4294967296B   primary  linux-swap(v1)
 2      4296015872B  42949672959B  38653657088B  primary                  boot

setting up the loop device, so the filesystem from within the file could be read

losetup  # should show nothing, so the first loop device we will use will be loop0
losetup -f # can alternatively be used to find the first free loop device
losetup /dev/loop0 my_server-flat.vmdk

To have easier access to the second partition (so we can use dd without having to use the skip flag all the time), we will loop the second partition, too. The offset is passed to -o in bytes, see the parted byte output above:

losetup -o 4296015872 /dev/loop1 /dev/loop0
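
If you only have the sector numbers at hand, the byte offset is simply the start sector times the 512-byte sector size, which the shell can compute inline (same command as above, just written differently):

losetup -o $((8390656 * 512)) /dev/loop1 /dev/loop0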

Then losetup should look like this:

root@workstation:/home/sjas/ftp# losetup 
NAME       SIZELIMIT     OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop0         0          0         0  0 /home/sjas/ftp/my_server-flat.vmdk
/dev/loop1         0 4296015872         0  0 /dev/loop0

Alternatively you can use losetup -a to show the currently used loop devices.

Once you are done with everything, the loop devices could be deleted via losetup -d /dev/loopX for each one in use.

Alternatively, kpartx can be used, too. It would create device mappings automatically when run like kpartx -av my_server-flat.vmdk. The next free loop device under /dev/loopX would be chosen, and its partition could then be found under loopXp1, loopXp2, etc. Afterwards it could be deleted via kpartx -d my_server-flat.vmdk. However I prefer doing it manually, as with broken partitions kpartx cannot work properly, of course.

examination

Via dd the blocks can be read directly from the looped disk; hexdump, xxd or od will make visible what is actually on there.

Initially this post grew out of the wish to document the differences between these tools, but it ended up covering the loop device handling, too.

First, let's have a look at the MBR, which is the first block / first 512 bytes on the device:

root@workstation:/home/sjas/ftp# dd if=/dev/loop0 bs=512 count=1 2>/dev/null | file -
/dev/stdin: DOS/MBR boot sector; GRand Unified Bootloader, stage1 version 0x3, boot drive 0x80, 1st sector stage2 0x2443e60, GRUB version 0.94

Now let's check whether a VBR is present on the second partition or not, which is not the case:

root@workstation:/home/sjas/ftp# dd if=/dev/loop1 bs=512 count=1 2>/dev/null | file -
/dev/stdin: data

For illustration, here are the three tools in action, showing the MBR of loop0. Let's have a look at the actual disk contents:

xxd:

root@workstation:/home/sjas/ftp# dd if=/dev/loop0 bs=512 count=1 2>/dev/null | xxd
0000000: eb48 9010 8ed0 bc00 b0b8 0000 8ed8 8ec0  .H..............
0000010: fbbe 007c bf00 06b9 0002 f3a4 ea21 0600  ...|.........!..
0000020: 00be be07 3804 750b 83c6 1081 fefe 0775  ....8.u........u
0000030: f3eb 16b4 02b0 01bb 007c b280 8a74 0302  .........|...t..
0000040: 8000 0080 603e 4402 0008 fa90 90f6 c280  ....`>D.........
0000050: 7502 b280 ea59 7c00 0031 c08e d88e d0bc  u....Y|..1......
0000060: 0020 fba0 407c 3cff 7402 88c2 52f6 c280  . ..@|<.t...R...
0000070: 7454 b441 bbaa 55cd 135a 5272 4981 fb55  tT.A..U..ZRrI..U
0000080: aa75 43a0 417c 84c0 7505 83e1 0174 3766  .uC.A|..u....t7f
0000090: 8b4c 10be 057c c644 ff01 668b 1e44 7cc7  .L...|.D..f..D|.
00000a0: 0410 00c7 4402 0100 6689 5c08 c744 0600  ....D...f.\..D..
00000b0: 7066 31c0 8944 0466 8944 0cb4 42cd 1372  pf1..D.f.D..B..r
00000c0: 05bb 0070 eb7d b408 cd13 730a f6c2 800f  ...p.}....s.....
00000d0: 84f0 00e9 8d00 be05 7cc6 44ff 0066 31c0  ........|.D..f1.
00000e0: 88f0 4066 8944 0431 d288 cac1 e202 88e8  ..@f.D.1........
00000f0: 88f4 4089 4408 31c0 88d0 c0e8 0266 8904  ..@.D.1......f..
0000100: 66a1 447c 6631 d266 f734 8854 0a66 31d2  f.D|f1.f.4.T.f1.
0000110: 66f7 7404 8854 0b89 440c 3b44 087d 3c8a  f.t..T..D.;D.}<.
0000120: 540d c0e2 068a 4c0a fec1 08d1 8a6c 0c5a  T.....L......l.Z
0000130: 8a74 0bbb 0070 8ec3 31db b801 02cd 1372  .t...p..1......r
0000140: 2a8c c38e 0648 7c60 1eb9 0001 8edb 31f6  *....H|`......1.
0000150: 31ff fcf3 a51f 61ff 2642 7cbe 7f7d e840  1.....a.&B|..}.@
0000160: 00eb 0ebe 847d e838 00eb 06be 8e7d e830  .....}.8.....}.0
0000170: 00be 937d e82a 00eb fe47 5255 4220 0047  ...}.*...GRUB .G
0000180: 656f 6d00 4861 7264 2044 6973 6b00 5265  eom.Hard Disk.Re
0000190: 6164 0020 4572 726f 7200 bb01 00b4 0ecd  ad. Error.......
00001a0: 10ac 3c00 75f4 c300 0000 0000 0000 0000  ..<.u...........
00001b0: 0000 0000 0000 0000 9b09 0b00 0000 0020  ............... 
00001c0: 2100 824b 810a 0008 0000 0000 8000 804b  !..K...........K
00001d0: 820a 83fe ffff 0008 8000 00f8 7f04 0000  ................
00001e0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00001f0: 0000 0000 0000 0000 0000 0000 0000 55aa  ..............U.

hexdump:

root@workstation:/home/sjas/ftp# dd if=/dev/loop0 bs=512 count=1 2>/dev/null | hexdump -vC
00000000  eb 48 90 10 8e d0 bc 00  b0 b8 00 00 8e d8 8e c0  |.H..............|
00000010  fb be 00 7c bf 00 06 b9  00 02 f3 a4 ea 21 06 00  |...|.........!..|
00000020  00 be be 07 38 04 75 0b  83 c6 10 81 fe fe 07 75  |....8.u........u|
00000030  f3 eb 16 b4 02 b0 01 bb  00 7c b2 80 8a 74 03 02  |.........|...t..|
00000040  80 00 00 80 60 3e 44 02  00 08 fa 90 90 f6 c2 80  |....`>D.........|
00000050  75 02 b2 80 ea 59 7c 00  00 31 c0 8e d8 8e d0 bc  |u....Y|..1......|
00000060  00 20 fb a0 40 7c 3c ff  74 02 88 c2 52 f6 c2 80  |. ..@|<.t...R...|
00000070  74 54 b4 41 bb aa 55 cd  13 5a 52 72 49 81 fb 55  |tT.A..U..ZRrI..U|
00000080  aa 75 43 a0 41 7c 84 c0  75 05 83 e1 01 74 37 66  |.uC.A|..u....t7f|
00000090  8b 4c 10 be 05 7c c6 44  ff 01 66 8b 1e 44 7c c7  |.L...|.D..f..D|.|
000000a0  04 10 00 c7 44 02 01 00  66 89 5c 08 c7 44 06 00  |....D...f.\..D..|
000000b0  70 66 31 c0 89 44 04 66  89 44 0c b4 42 cd 13 72  |pf1..D.f.D..B..r|
000000c0  05 bb 00 70 eb 7d b4 08  cd 13 73 0a f6 c2 80 0f  |...p.}....s.....|
000000d0  84 f0 00 e9 8d 00 be 05  7c c6 44 ff 00 66 31 c0  |........|.D..f1.|
000000e0  88 f0 40 66 89 44 04 31  d2 88 ca c1 e2 02 88 e8  |..@f.D.1........|
000000f0  88 f4 40 89 44 08 31 c0  88 d0 c0 e8 02 66 89 04  |..@.D.1......f..|
00000100  66 a1 44 7c 66 31 d2 66  f7 34 88 54 0a 66 31 d2  |f.D|f1.f.4.T.f1.|
00000110  66 f7 74 04 88 54 0b 89  44 0c 3b 44 08 7d 3c 8a  |f.t..T..D.;D.}<.|
00000120  54 0d c0 e2 06 8a 4c 0a  fe c1 08 d1 8a 6c 0c 5a  |T.....L......l.Z|
00000130  8a 74 0b bb 00 70 8e c3  31 db b8 01 02 cd 13 72  |.t...p..1......r|
00000140  2a 8c c3 8e 06 48 7c 60  1e b9 00 01 8e db 31 f6  |*....H|`......1.|
00000150  31 ff fc f3 a5 1f 61 ff  26 42 7c be 7f 7d e8 40  |1.....a.&B|..}.@|
00000160  00 eb 0e be 84 7d e8 38  00 eb 06 be 8e 7d e8 30  |.....}.8.....}.0|
00000170  00 be 93 7d e8 2a 00 eb  fe 47 52 55 42 20 00 47  |...}.*...GRUB .G|
00000180  65 6f 6d 00 48 61 72 64  20 44 69 73 6b 00 52 65  |eom.Hard Disk.Re|
00000190  61 64 00 20 45 72 72 6f  72 00 bb 01 00 b4 0e cd  |ad. Error.......|
000001a0  10 ac 3c 00 75 f4 c3 00  00 00 00 00 00 00 00 00  |..<.u...........|
000001b0  00 00 00 00 00 00 00 00  9b 09 0b 00 00 00 00 20  |............... |
000001c0  21 00 82 4b 81 0a 00 08  00 00 00 00 80 00 80 4b  |!..K...........K|
000001d0  82 0a 83 fe ff ff 00 08  80 00 00 f8 7f 04 00 00  |................|
000001e0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000001f0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 55 aa  |..............U.|
00000200

od:

root@workstation:/home/sjas/ftp# dd if=/dev/loop0 bs=512 count=1 2>/dev/null | od -v -A d -t x2z
0000000 48eb 1090 d08e 00bc b8b0 0000 d88e c08e  >.H..............<
0000016 befb 7c00 00bf b906 0200 a4f3 21ea 0006  >...|.........!..<
0000032 be00 07be 0438 0b75 c683 8110 fefe 7507  >....8.u........u<
0000048 ebf3 b416 b002 bb01 7c00 80b2 748a 0203  >.........|...t..<
0000064 0080 8000 3e60 0244 0800 90fa f690 80c2  >....`>D.........<
0000080 0275 80b2 59ea 007c 3100 8ec0 8ed8 bcd0  >u....Y|..1......<
0000096 2000 a0fb 7c40 ff3c 0274 c288 f652 80c2  >. ..@|<.t...R...<
0000112 5474 41b4 aabb cd55 5a13 7252 8149 55fb  >tT.A..U..ZRrI..U<
0000128 75aa a043 7c41 c084 0575 e183 7401 6637  >.uC.A|..u....t7f<
0000144 4c8b be10 7c05 44c6 01ff 8b66 441e c77c  >.L...|.D..f..D|.<
0000160 1004 c700 0244 0001 8966 085c 44c7 0006  >....D...f.\..D..<
0000176 6670 c031 4489 6604 4489 b40c cd42 7213  >pf1..D.f.D..B..r<
0000192 bb05 7000 7deb 08b4 13cd 0a73 c2f6 0f80  >...p.}....s.....<
0000208 f084 e900 008d 05be c67c ff44 6600 c031  >........|.D..f1.<
0000224 f088 6640 4489 3104 88d2 c1ca 02e2 e888  >..@f.D.1........<
0000240 f488 8940 0844 c031 d088 e8c0 6602 0489  >..@.D.1......f..<
0000256 a166 7c44 3166 66d2 34f7 5488 660a d231  >f.D|f1.f.4.T.f1.<
0000272 f766 0474 5488 890b 0c44 443b 7d08 8a3c  >f.t..T..D.;D.}<.<
0000288 0d54 e2c0 8a06 0a4c c1fe d108 6c8a 5a0c  >T.....L......l.Z<
0000304 748a bb0b 7000 c38e db31 01b8 cd02 7213  >.t...p..1......r<
0000320 8c2a 8ec3 4806 607c b91e 0100 db8e f631  >*....H|`......1.<
0000336 ff31 f3fc 1fa5 ff61 4226 be7c 7d7f 40e8  >1.....a.&B|..}.@<
0000352 eb00 be0e 7d84 38e8 eb00 be06 7d8e 30e8  >.....}.8.....}.0<
0000368 be00 7d93 2ae8 eb00 47fe 5552 2042 4700  >...}.*...GRUB .G<
0000384 6f65 006d 6148 6472 4420 7369 006b 6552  >eom.Hard Disk.Re<
0000400 6461 2000 7245 6f72 0072 01bb b400 cd0e  >ad. Error.......<
0000416 ac10 003c f475 00c3 0000 0000 0000 0000  >..<.u...........<
0000432 0000 0000 0000 0000 099b 000b 0000 2000  >............... <
0000448 0021 4b82 0a81 0800 0000 0000 0080 4b80  >!..K...........K<
0000464 0a82 fe83 ffff 0800 0080 f800 047f 0000  >................<
0000480 0000 0000 0000 0000 0000 0000 0000 0000  >................<
0000496 0000 0000 0000 0000 0000 0000 0000 aa55  >..............U.<
0000512

When I know where to look, I prefer od, as it lets you see the position in decimal bytes (first column, compare to the previous output). This helps a lot when using dd input where you skipped the first N blocks / sectors, since you can tell whether you are looking at the part you wanted to examine.

Some notes on its parameters:

  • -A d = show position in decimals. Use x for hexadecimal.
  • -t x2z = show hex output two bytes at a time (use x1 for single-byte-wise output), z shows the printable characters in the rightmost column.
  • --endian=little = choose endianness. Since this is an x86_64 Intel CPU, little endian is the native byte order; the command above does without this option, it is just mentioned for illustration.
  • While still searching the disk for data (using the dd-to-od pipe from above, but piped to less), omitting the -v flag is practical, as od will then condense repeated lines (e.g. those consisting only of zeroes) into a single asterisk.

Now that the basics are covered here, the rest should be easy, so only some more notes along the way:

  • No need to specify the blocksize with dd, since it's 512 bytes by default.
  • Change it in case you know how many bytes you want to jump around and want easier arithmetic (use bs=1024 and count=20 to read 20 KiB from disk, instead of having to work out that count=40 is what you need).
  • Using dd with the skip option jumps so-and-so many blocks forward. For the sake of brevity, assume that both blocks and sectors are 512 bytes long. Remember the output of parted in sectors from above? See the example after this list.
  • Do use losetup when kpartx fails.
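
For instance, instead of going through /dev/loop1, the start of the second partition can be read directly off /dev/loop0 by skipping its start sector (a small sketch, same data as the VBR check above):

dd if=/dev/loop0 bs=512 skip=8390656 count=1 2>/dev/null | file -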

result

I was able to discern that the backup was indeed broken, as there was no ext4 magic number present anywhere.

0xEF53 was nowhere to be found at offset 0x38 after the initial padding of 1024 bytes in front of the start of the filesystem. Such info can be found here, for example.
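
On a healthy ext4 filesystem the check would look roughly like this (the superblock starts 1024 bytes into the partition and the little-endian magic sits at offset 0x38 within it, so the bytes 53 ef should show up):

dd if=/dev/loop1 bs=1 skip=$((1024 + 0x38)) count=2 2>/dev/null | od -A d -t x1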

At least I got some training with that stuff; it's been a while since I got around to doing so.

firewall with systemd file
posted on 2016-09-14 00:38

A while ago I created a firewall script here, but that was prior to systemd. Now here's an update on how to fix this: first the unit file, then the firewall script quoted in full again.

prerequisites

apt install -y libnetfilter-conntrack3 libnfnetlink0
echo "net.netfilter.nf_conntrack_acct=1" >> /etc/sysctl.d/iptables.conntrack.accounting.conf

systemd unit file

/lib/systemd/system/firewall.service:

[Unit]
Description=Do some Firewalling.
Requires=local-fs.target
After=local-fs.target
Before=network.target

[Install]
WantedBy=multi-user.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/firewall start
ExecStop=/usr/sbin/firewall stop

firewallscript

/usr/sbin/firewall:

#!/bin/bash

# aliasing
IPTABLES=$(which iptables)
# set IF to work on
O=eth0
I=eth0


# load kernel modules
modprobe ip_conntrack
modprobe ip_conntrack_ftp

case "$1" in

    start)
        echo 60 > /proc/sys/net/ipv4/tcp_fin_timeout
        echo 0 > /proc/sys/net/ipv4/tcp_ecn

        echo -n "Starting stateful paket inspection firewall... "

        # delete/flush old/existing chains
        $IPTABLES -F
        # delete undefined chains
        $IPTABLES -X

        # create log-drop chain
        # (INPUT/OUTPUT/FORWARD are built-in chains and need no -N)
        $IPTABLES -N LOGDROP

        # set default chain policies, accept all outgoing traffic per default
        # (a policy must be a built-in target, so DROP is used here; the
        #  catch-all LOGDROP rule at the end of the INPUT chain does the logging)
        $IPTABLES -P INPUT DROP
        $IPTABLES -P OUTPUT ACCEPT
        $IPTABLES -P FORWARD ACCEPT

        # make NAT Pinning impossible
        $IPTABLES -A INPUT -p udp --dport 6667 -j LOGDROP
        $IPTABLES -A INPUT -p tcp --dport 6667 -j LOGDROP
        $IPTABLES -A INPUT -p tcp --sport 6667 -j LOGDROP
        $IPTABLES -A INPUT -p udp --sport 6667 -j LOGDROP
        $IPTABLES -A OUTPUT -p tcp --dport 6667 -j LOGDROP
        $IPTABLES -A OUTPUT -p udp --dport 6667 -j LOGDROP
        $IPTABLES -A OUTPUT -p tcp --sport 6667 -j LOGDROP
        $IPTABLES -A OUTPUT -p udp --sport 6667 -j LOGDROP

        # drop invalids
        $IPTABLES -A INPUT -m conntrack --ctstate INVALID -j LOGDROP

        # allow NTP and established connections
        $IPTABLES -A INPUT -p udp --dport 123 -j ACCEPT
        $IPTABLES -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        $IPTABLES -A INPUT -i lo -j ACCEPT

        # pings are allowed
        $IPTABLES -A INPUT -p icmp --icmp-type 8 -m conntrack --ctstate NEW -j ACCEPT

        # drop not routable networks
        $IPTABLES -A INPUT -i $I -s 169.254.0.0/16 -j LOGDROP
        $IPTABLES -A INPUT -i $I -s 172.16.0.0/12 -j LOGDROP
        $IPTABLES -A INPUT -i $I -s 192.0.2.0/24 -j LOGDROP
        #$IPTABLES -A INPUT -i $I -s 192.168.0.0/16 -j LOGDROP
        #$IPTABLES -A INPUT -i $I -s 10.0.0.0/8 -j LOGDROP
        $IPTABLES -A INPUT -s 127.0.0.0/8  ! -i lo -j LOGDROP




        # OPEN PORTS FOR USED SERVICES

        ## SSH
        $IPTABLES -A INPUT -i $I -p tcp -m conntrack --ctstate NEW --dport 22 -j ACCEPT

        ## HTTPD
        #$IPTABLES -A INPUT -i $I -p tcp -m conntrack --ctstate NEW --dport 80 -j ACCEPT
        #$IPTABLES -A INPUT -i $I -p tcp -m conntrack --ctstate NEW --dport 443 -j ACCEPT

        ## OVPN
        #$IPTABLES -A INPUT -i $I -p udp -m conntrack --ctstate NEW --dport 1194 -j ACCEPT

        ## MySQL
        #$IPTABLES -A INPUT -i $I -p tcp -m conntrack --ctstate NEW --dport 3306 -j ACCEPT






        # Portscanner will be blocked for 15 minutes
        $IPTABLES -A INPUT  -m recent --name psc --update --seconds 900 -j LOGDROP

        # only use when ports not available from the internet
        $IPTABLES -A INPUT ! -i lo -m tcp -p tcp --dport 1433  -m recent --name psc --set -j LOGDROP
        $IPTABLES -A INPUT ! -i lo -m tcp -p tcp --dport 3306  -m recent --name psc --set -j LOGDROP
        $IPTABLES -A INPUT ! -i lo -m tcp -p tcp --dport 8086  -m recent --name psc --set -j LOGDROP
        $IPTABLES -A INPUT ! -i lo -m tcp -p tcp --dport 10000 -m recent --name psc --set -j LOGDROP

        ### drop ms specific WITHOUT LOGGING - because: else too much logging
        $IPTABLES -A INPUT -p UDP -m conntrack --ctstate NEW --dport 137:139 -j DROP
        $IPTABLES -A INPUT -p UDP -m conntrack --ctstate NEW --dport 67:68 -j DROP

        # log packets to be dropped and drop them afterwards
        $IPTABLES -A INPUT -j LOGDROP
        $IPTABLES -A LOGDROP -j LOG --log-level 4 --log-prefix "dropped:"
        $IPTABLES -A LOGDROP -j DROP

        echo "Done."
    ;;

    stop)
        echo -n "Stopping stateful paket inspection firewall... "
        /etc/init.d/fail2ban stop
        # flush
        $IPTABLES -F
        # delete
        $IPTABLES -X
        # set default to accept all incoming and outgoing traffic
        $IPTABLES -P INPUT ACCEPT
        $IPTABLES -P OUTPUT ACCEPT
        echo "Done."
    ;;

    restart)
        echo -n "Restarting stateful paket inspection firewall... "
        echo -n
        /etc/init.d/firewall stop
        /etc/init.d/firewall start
        /etc/init.d/fail2ban start
    ;;

    status)
        $IPTABLES -L -vnx --line-numbers | \
        sed ''/Chain[[:space:]][[:graph:]]*/s//$(printf "\033[31;1m&\033[0m")/'' | \
        sed ''/^num.*/s//$(printf "\033[31m&\033[0m")/'' | \
        sed ''/[[:space:]]DROP/s//$(printf "\033[31m&\033[0m")/'' | \
        sed ''/REJECT/s//$(printf "\033[31m&\033[0m")/'' | \
        sed ''/ACCEPT/s//$(printf "\033[32m&\033[0m")/'' | \
        sed -r ''/\([ds]pt[s]\?:\)\([[:digit:]]\+\(:[[:digit:]]\+\)\?\)/s//$(printf "\\\1\033[35;1m\\\2\033[0m")/''| \
        sed -r ''/\([0-9]\{1,3\}\\.\)\{3\}[0-9]\{1,3\}\(\\/\([0-9]\)\{1,3\}\)\{0,1\}/s//$(printf "\033[37;1m&\033[0m")/g'' | \
        sed -r ''/\([^n][[:space:]]\)\(LOGDROP\)/s//$(printf "\\\1\033[1;31m\\\2\033[0m")/'' | \
        sed -r ''/[[:space:]]LOG[[:space:]]/s//$(printf "\033[36;1m&\033[0m")/''
    ;;

    monitor)
        if [ -n "$2" ]
            then $(which watch) -n1 -d $IPTABLES -vnxL "$2" --line-numbers
            else $(which watch) -n1 -d $IPTABLES -vnxL --line-numbers; fi
    ;;

    *)
        echo "Usage: $0 {start|stop|status|monitor [<chain>]|restart}"
        exit 1
    ;;

esac

exit 0

The coloring in the status part when using firewall status is borked. It works, but it's complete shit from what I know now. The '' pairs were originally single double quotes, but I was not good enough with bash when I copy-pasted it and tried to color the shell output. Some day I may fix it. Hopefully.

finishing

chmod u+x /usr/sbin/firewall
systemctl enable firewall
firewall start
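
If systemd does not pick up the new unit file right away, reloading its configuration first should help:

systemctl daemon-reload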

usage

This should suffice, just try it:

firewall
firewall start
firewall stop
firewall restart
firewall status
firewall monitor
linux bonding without ifenslave
posted on 2016-09-13 15:46

Sometimes you configured bonding on the switch and on the host itself. After a reboot, you figure out your server won't come up.

iKVM tells you, networking's not working.

Now the configuration can be fixed easily, but what if you simply forgot about the ifenslave package? Since your networking config is out of order, how do you get the files there?

  • boot livedisk?
  • manually plug the cable into another switchport and reconfigure unbonded networking?
  • use a USB stick plugged directly into the server and copy the missing package onto it so you can install it?

Here's another way:

#modprobe bonding
# (the bonding module has to be present in the kernel)

echo "+bond0" >  /sys/class/net/bonding_masters

echo "+eth0" > /sys/class/net/bond0/bonding/slaves
echo "+eth1" > /sys/class/net/bond0/bonding/slaves

# Remove a slave interface from bond0

echo "-eth0" > /sys/class/net/bond0/bonding/slaves

# Delete a bond interface

echo "-bond0" >  /sys/class/net/bonding_masters

Official documentation can be found here.

linux clear buffers and cache
posted on 2016-09-13 14:09

Use this to clear all buffers and caches at once:

free && sync && echo 3 > /proc/sys/vm/drop_caches && free

More in-depth info here.

linux ssd and TRIM
posted on 2016-09-12 13:44

One of my setups is an encrypted LVM on top of a SSD.

The important part here is, should you want to TRIM the SSD, then TRIM has to be available on all layers:

  • SSD
  • dm-crypt
  • LVM

Now before you do anything, you might want to jump to the back of this post and do a test with fio first, to get a baseline, so you have some numbers to compare against after you changed everything.

check storage media

hdparm, like always, is your friend:

root@ctr-014:~# hdparm -I /dev/sda | grep TRIM
    *   Data Set Management TRIM supported (limit 8 blocks)

If there is nothing to be found, TRIM just won't work for you.

dm-crypt

grep discard /etc/crypttab:

sjas@host:~/blog$ grep discard /etc/crypttab 
sda5_crypt UUID=985f3826-e502-4402-ad01-5c13b84e9141 none luks,discard

If discard is not present, add it there.

LVM

grep issue_discards /etc/lvm/lvm.conf:

root@ctr-014:~# grep issue_discards /etc/lvm/lvm.conf
    #issue_discards = 0
    issue_discards = 1

Set it to "1" if it is not already.

update initramfs

Debian:

update-initramfs -u

RedHat:

dracut -f

reboot

In case you changed anything above, do a restart so your settings become active.

TRIM !

for fs in $(lsblk -o MOUNTPOINT,DISC-MAX,FSTYPE | grep -E '^/.* [1-9]+.* ' | awk '{print $1}'); do fstrim "$fs"; done

Voila!

cron

You might want to put this into a file in /etc/cron.weekly with a #!/bin/sh shebang, so it will be run automatically.
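
A minimal sketch of such a script, simply wrapping the one-liner from above (the filename is just a suggestion, and don't forget to chmod +x it):

#!/bin/sh
# /etc/cron.weekly/fstrim
for fs in $(lsblk -o MOUNTPOINT,DISC-MAX,FSTYPE | grep -E '^/.* [1-9]+.* ' | awk '{print $1}'); do
    fstrim "$fs"
done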

test method

Test were done with fio:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=512M --readwrite=randrw --rwmixread=75

Keep in mind these are single-threaded and thus somewhat anaemic, but the result should become visible.

after

read iops = 19947
write iops =  6671

before

read iops = 3374 
write iops =  1128

actual measurement sample

root@ctr-014:/home/sjas# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=512M --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.1.11
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 512MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [78716KB/26344KB/0KB /s] [19.7K/6586/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3245: Mon Sep 12 13:40:44 2016
  read : io=392888KB, bw=79790KB/s, iops=19947, runt=  4924msec
  write: io=131400KB, bw=26686KB/s, iops=6671, runt=  4924msec
  cpu          : usr=5.77%, sys=33.72%, ctx=71337, majf=0, minf=6
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=98222/w=32850/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=392888KB, aggrb=79790KB/s, minb=79790KB/s, maxb=79790KB/s, mint=4924msec, maxt=4924msec
  WRITE: io=131400KB, aggrb=26685KB/s, minb=26685KB/s, maxb=26685KB/s, mint=4924msec, maxt=4924msec

Disk stats (read/write):
    dm-2: ios=95567/32093, merge=0/0, ticks=197872/140256, in_queue=355552, util=97.95%, aggrios=98222/33021, aggrmerge=0/0, aggrticks=203680/161852, aggrin_queue=366328, aggrutil=97.02%
    dm-0: ios=98222/33021, merge=0/0, ticks=203680/161852, in_queue=366328, util=97.02%, aggrios=96973/32744, aggrmerge=1249/277, aggrticks=197660/149212, aggrin_queue=347000, aggrutil=96.94%
  sda: ios=96973/32744, merge=1249/277, ticks=197660/149212, in_queue=347000, util=96.94%
throughput measurement with iperf
posted on 2016-09-12 13:32

In short:

  • iperf -s starts the server on node 1
  • iperf -c <node2_ip_or_dns> connects from node 2 to node 1 and starts the test

Example:

root@server1:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 188.64.57.149 port 5001 connected with 158.181.55.4 port 24169
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   761 MBytes   636 Mbits/sec

and

sjas@server2~$ iperf -c server1
------------------------------------------------------------
Client connecting to server1, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.20.1.14 port 44928 connected with 188.64.57.149 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   761 MBytes   638 Mbits/sec
use strace to snoop on ssh sessions
posted on 2016-09-12 00:17

To snoop on another running SSH session, these oneliners come in handy.

  1. use w to find out which session you want to have a look at
  2. ps aux|grep pts to find out the PID
  3. replace the PID in the scripts below

CentOS

Tested on 6.8:

strace -p PID -e trace=write 2>&1 | grep --line-buffered -o '".*[^"]"' | sed -e 's/^"//' -e 's/"$//'

Debian

Tested on jessie / 8:

strace -p PID -e write 2>&1 | grep --line-buffered -e '^write(7' | grep --line-buffered -o '".*[^"]"' | sed -e 's/^"//' -e 's/"$//'
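
Note that the Debian one-liner filters on file descriptor 7 ('^write(7'); which descriptor actually carries the terminal data can differ per system and session. Listing the file descriptors of the sshd process helps to check which one points to the pty:

ls -l /proc/PID/fd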
get web page via netcat
posted on 2016-09-07 15:21

Open a connection to the webserver via netcat and then type the request, terminated by an empty line:

nc mydomain.com 80

GET / HTTP/1.0

For HTTP/1.1 a Host header is additionally required:

GET / HTTP/1.1
Host: mydomain.com
apache mod_pagespeed installation on debian 8
posted on 2016-08-29 09:36

In short for x86_64:

wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_amd64.deb
dpkg -i mod-pagespeed-stable_current_amd64.deb
service apache2 restart
proxmox nat howto
posted on 2016-08-29 08:29

Network Address Translation in combination with port forwarding lets you access a VM on a proxmox instance via the IP of the hypervisor itself. A second bridge is added for creating the internal network, and the hypervisor is configured to forward packets destined to a certain port to the VM on the internal network. The added bridge is called vmbr1 here, and this was added to our networking config.

This is just an excerpt of the relevant part of the /etc/network/interfaces file there:

auto vmbr1
iface vmbr1 inet static
    address  10.0.2.1
    netmask  255.255.255.252
    bridge_ports none
    bridge_stp off
    bridge_fd 0

    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s 10.0.2.0/30 -o eth0 -j MASQUERADE
    post-up iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to 10.0.2.2:22

    post-down iptables -t nat -D POSTROUTING -s 10.0.2.0/30 -o eth0 -j MASQUERADE
    post-down iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to 10.0.2.2:22 

This is the network configuration for the VM in question:

auto eth0
iface eth0 inet static
    address 10.0.2.2
    netmask 255.255.255.252
    gateway 10.0.2.1

As soon as the bridge on the HV is started, the masquerading and port forwarding rules are added; they are removed again when the interface gets disabled.
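
To verify the rules are actually in place after bringing the bridge up, listing the nat table should show both of them:

iptables -t nat -vnL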

reinstalling gentoo
posted on 2016-08-24 21:47

The last post already told the reason for this one. Also the prerequisites were described there.

This installation takes place from within another running system on the same computer. After the partitions/luks/lvs/filesystem stuff was done, now the files will be copied and set up.

Now mount your root partition and cd into it.

getting the files

get download link:

Go to https://www.gentoo.org/downloads/ and choose your download. Likely you want the stage3 tarball for AMD64.

Use links or lynx if you don't have access to a graphical browser.

Otherwise:

wget http://distfiles.gentoo.org/releases/amd64/autobuilds/20160818/stage3-amd64-20160818.tar.bz2
tar xjvpf stage3-amd64-20160818.tar.bz2 --xattrs

fix make.conf

As an editor you can use what you want, after chrooting into your new environment, you likely only have nano.

(Remember, you are still in / of your new gentoo installation where you just unpacked your files.)

vim etc/portage/make.conf

Here's a diff:

root@zen:/mnt/gentoo/etc/portage# diff -u make.conf{,.original}
--- make.conf   2016-08-24 22:55:05.809203312 +0200
+++ make.conf.original  2016-08-24 22:53:38.197198629 +0200
@@ -2,9 +2,8 @@
 # built this stage.
 # Please consult /usr/share/portage/config/make.conf.example for a more
 # detailed example.
-CFLAGS="-march=native -O2 -pipe"
+CFLAGS="-O2 -pipe"
 CXXFLAGS="${CFLAGS}"
-MAKEOPTS="-j4"
 # WARNING: Changing your CHOST is not something that should be done lightly.
 # Please consult http://www.gentoo.org/doc/en/change-chost.xml before changing.
 CHOST="x86_64-pc-linux-gnu"

prepare to chroot!

cd /mnt/gentoo
cp -L /etc/resolv.conf etc/resolv.conf
mount -t proc proc ./proc
mount -t sysfs /sys ./sys
for i in dev dev/pts run; do mount --rbind /$i ./$i; done

If you use a non-gentoo livedisk, or want systemd (god forbid), have a look at the gentoo installation wiki, you may need additional steps then.

Since I have an UEFI based setup and need, due to the crypted setup, an extra boot, both the EFI partition and /boot need to be mounted, too:

mount /dev/sdXM boot
mount /dev/sdXL boot/efi

chroot . /bin/bash
. /etc/profile
export PS1="(chroot) $PS1"

configure the package manager

Now lets deal with portage:

emerge-webrsync
eselect news read

If you actually do this, sometimes there is useful information. In my case, it hints on how to set cpu USE flags.

emerge -alv gentoolkit
equery uses ffmpeg | grep cpu_flags
cat /proc/cpuinfo | grep flags | uniq

Now both outputs can be compared. Easier is this:

emerge -alv app-portage/cpuid2cpuflags
cpuinfo2cpuflags-x86 >> /etc/portage/make.conf

Now lets set the profile and update:

eselect profile list
eselect profile set 6 # kde without systemd
emerge -alv --update --deep --newuse @world

Use all available USE flags:

sed -i 's/^USE/#&/' /etc/portage/make.conf
emerge --info | grep ^USE >> /etc/portage/make.conf

To lower compile time and startup times, remove the USE flags which you won't need by prepending a minus sign. I don't bother with that for now.

timezone

echo 'Europe/Berlin' > /etc/timezone
emerge --config timezone-data

configure locale

vim /etc/locale.gen

There I uncomment my en_US and de_DE for both ISO 8859-1 and utf8.

locale-gen
eselect locale list
eselect locale set 8
env-update
. /etc/profile
export PS1="(chroot) $PS1"

fix fstab

blkid
vim /etc/fstab

configure the kernel

I will use genkernel and not configure the kernel by myself here.

emerge -alv gentoo-sources
emerge -alv genkernel
luks and lvm and partitioning and filesystem from the shell
posted on 2016-08-24 20:58

Don't overwrite your devices via cp. But we've all been there, done that.

If you don't want to reinstall 'just because', an idea might be to use testdisk depending on what you did.

For getting a nice partition layout overview I tend to use parted (see below); for creating partitions, cgdisk (for GPT stuff) or cfdisk (for MBR creation only, IIRC) are decent choices.

Back on topic.

preparation

Partitions were still present in my case, so no need to create them anew.

If you have to, do parted /dev/sdX p and parted /dev/sdX u b p and use your phone to take photos, in case you have to redo something.

luks

Create and open the cryptocontainer to hold the complete partition, wherein the LVM and your filesystems will lie.

cryptsetup --cipher=aes-xts-plain64 luksFormat /dev/sdXN --force-password
cryptsetup open /dev/sdXN sdXN_crypt

Did you really type an uppercased YES when you were prompted? The password you were prompted for is the one you will have to enter in the future.

In case you did something wrong:

cryptsetup close sdXN_crypt
cryptsetup erase /dev/sdXN

Then start by recreating the container. Did you really type an uppercased YES when you were prompted?

lvm2

After the crypto device was opened, you can reference it through the device mapper. Now create the physical volume (PV), volume group (VG) and logical volumes (LV's) where your system will be installed later on:

pvcreate /dev/mapper/sdXN_crypt
vgcreate `hostname` /dev/mapper/sdXN_crypt
lvcreate -L 2G -n swap `hostname`
lvcreate -l 100%FREE -n root `hostname`

Here is a catch: I did not have to recreate a separate /boot partition, as I already had one. If you don't have one, create one first. It has to be located outside the crypto container, else you won't be able to boot after your installation.

If something went wrong, here's how to delete things, too. Choose what you need in particular:

pvremove /dev/mapper/sdXN_crypt
vgremove `hostname`
lvremove /dev/`hostname`/<LVname>

filesystems and swap

Create swap:

mkswap /dev/mapper/`hostname`-swap

Create root filesystem:

mkfs -t ext4 /dev/mapper/`hostname`-root

This is pretty much it. From here on you can chroot or do whatever else you want.
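
If you want to use the new volumes right away, a short sketch (the /mnt mountpoint is just an assumption):

swapon /dev/mapper/`hostname`-swap
mount /dev/mapper/`hostname`-root /mnt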

Maybe you only want the container for data but not for installing a system on it. In that case, not calling the LV 'root' and omitting the swap volume up there would have been a wise choice.

postgres introduction
posted on 2016-08-24 10:50

Client is run through the postgres system user named 'postgres'. Homedir is /var/lib/postgresql usually.

Connection info

.my.cnf equivalent is the .pgpass in postgres user homedir, containing the following syntax:

hostname:port:database:username:password

Command history

For .mysql_history equivalent, see .psql_history in postgres user homedir (/var/lib/postgres/.psql_history).

Apostrophe usage

  • use single ones for strings/values
  • use double ones for objects (user-/table-/dbnames... )

Most important shell commands

  • createdb DATABASE
  • dropdb DATABASE
  • createuser ROLE
  • dropuser ROLE
  • psql ## as user 'postgres'
  • su -c psql postgres ## invoking the CLI as any user
  • postgres=# grant all privileges on database DATABASE to USER;

As postgres user:

  • psql -c 'SQL_STATEMENT' = mysql -e 'SQL_STATEMENT' with .my.cnf
  • psql DATABASE # open client and connects to database

psql cli commands

help = shows help howto
\h = show help for sql commands
\h create role; = show help on CREATE ROLE command

\c DATABASE = "use DATABASE"
\? = show pg shortcuts
\l = "show databases;"
\d = show tables/views/sequences
\dt = "show tables;"
\du = show roles (users)
\dp = show privileges

mytop

There exists an equivalent called pgtop. Package on debian is called pgtop, too.

Usage:

su - postgres
pg_top

User management

Postgres user management differs in that there are 'roles'. These can be tweaked to work like users or like groups.

$ su - postgres
$ createuser my-user
$

Main use cases

Create db with corresponding user:

createdb DATABASENAME
createuser DATABASENAME
su -c 'psql DATABASENAME' postgres
grant all privileges on database DATABASENAME to DATABASENAME;
\q

Change password:

su -c psql postgres
alter role ROLE with password 'PASSWORD';
\q
roccat kova buttons in linux
posted on 2016-08-23 11:00

This was tested on debian 8 and seems to work so far.

echo "deb http://ppa.launchpad.net/berfenger/roccat/ubuntu trusty main" > /etc/apt/sources.list.d/roccat.list
echo "deb-src http://ppa.launchpad.net/berfenger/roccat/ubuntu trusty main" > /etc/apt/sources.list.d/roccat.list
sudo roccatkovaplusconfig

My main problem was to fix the mousebuttons on the left side so the forward/backward-browser-history functions worked.

There is easyshift, which basically is a meta-button, usually on the left-backward button. It seems these can only be set onto the backwards buttons on the sides.

Setting the left side buttons to these did the trick:

  • Button 8 (Browser backward)
  • Button 9 (Browser forward)

Apply and exit.

linux create patches via diff and apply via patch
posted on 2016-08-22 19:14

Since I tend to forget this way too often...

create a patch

diff -u <file1> <file2> > <filename>.patch

test a patch

patch --dry-run <file> < <filename>.patch

apply a patch

patch -p0 <file> < <filename>.patch

-p0 strips no prefixes, -p1 strips the leftmost path folder, etc.

apply a patch and create backup of the original

patch -b -p0 <file> < <filename>.patch

Creates <file>.orig in the process.

reverse an applied patch

patch -R <file> < <filename>.patch
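
A quick end-to-end run with hypothetical file names, tying the above together:

cp config.ini config.ini.new
vim config.ini.new                            # make your changes
diff -u config.ini config.ini.new > config.patch
patch --dry-run config.ini < config.patch     # test it
patch -b config.ini < config.patch            # apply it, keeping config.ini.orig as backup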
debugging iptables with traced packets
posted on 2016-08-10 19:14

For debugging iptables (make all interactions of a packet in the netfilter chains visible via syslog!), tracing helps quite a bit.

prerequisite

modprobe ipt_LOG # this is for ipv4
modprobe ip6t_LOG # this is for ipv6

ICMP tracing

For tracing ICMP packets:

# IPv4
iptables -t raw -A OUTPUT -p icmp -j TRACE
iptables -t raw -A PREROUTING -p icmp -j TRACE
# IPv6
ip6tables -t raw -A OUTPUT -p icmpv6 --icmpv6-type echo-request -j TRACE
ip6tables -t raw -A OUTPUT -p icmpv6 --icmpv6-type echo-reply -j TRACE
ip6tables -t raw -A PREROUTING -p icmpv6 --icmpv6-type echo-request -j TRACE
ip6tables -t raw -A PREROUTING -p icmpv6 --icmpv6-type echo-reply -j TRACE

ping the destination server with its firewall from the source server and let run tail -f /var/log/syslog | grep TRACE in parallel.

UDP tracing with netcat

iptables -t raw -A PREROUTING -p udp -s 10.0.0.0/24 -j TRACE
iptables -t raw -A OUTPUT     -p udp -s 10.0.0.0/24 -j TRACE

Change 10.0.0.0/24 to the IP where your source server comes from.

On the destination server do:

nc -ulp 12345

On the source server do:

nc -u <dst_server_ip> 12345

and type a bit and hit enter.

Now you should see in /var/log/syslog on the destination server what happens to your packets.
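
When you are done, remove the trace rules again, using -D with the otherwise identical rule specification, e.g. for the ICMP case:

iptables -t raw -D OUTPUT -p icmp -j TRACE
iptables -t raw -D PREROUTING -p icmp -j TRACE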

iptables and netfilter chains diagram
posted on 2016-08-10 18:56

This is a NICE diagram I stumbled across here:

 +---------------------+                              +-----------------------+
 | NETWORK INTERFACE   |                              | NETWORK INTERFACE     |
 +----------+----------+                              +-----------------------+
            |                                                    ^
            |                                                    |
            |                                                    |
            v                                                    |
 +---------------------+                                         |
 | PREROUTING          |                                         |
 +---------------------+                                         |
 |                     |                                         |
 | +-----------------+ |                                         |
 | | raw             | |                                         |
 | +--------+--------+ |                                         |
 |          v          |                                         |
 | +-----------------+ |                              +----------+------------+
 | | conn. tracking  | |                              | POSTROUTING           |
 | +--------+--------+ |                              +-----------------------+
 |          v          |                              |                       |
 | +-----------------+ |                              | +-------------------+ |
 | | mangle          | |                              | | source NAT        | |
 | +--------+--------+ |                              | +-------------------+ |
 |          v          |                              |          ^            |
 | +-----------------+ |                              | +--------+----------+ |
 | | destination NAT | |                              | | mangle            | |
 | +-----------------+ |                              | +-------------------+ |
 +----------+----------+  +------------------------+  +-----------------------+
            |             | FORWARD                |             ^
            |             +------------------------+             |
            v             |                        |             |
     +-------------+      | +--------+  +--------+ |             |
     | QOS ingress +----->| | mangle +->| filter | |------------>+
     +------+------+      | +--------+  +--------+ |             |
            |             |                        |             |
            |             +------------------------+             |
            |                                                    |
            |                                                    |
            v                                                    |
 +---------------------+                              +----------+------------+
 | INPUT               |                              | OUTPUT                |
 +---------------------+                              +-----------------------+
 |                     |                              |                       |
 |  +---------------+  |                              |  +-----------------+  |
 |  | mangle        |  |                              |  | filter          |  |
 |  +-------+-------+  |                              |  +-----------------+  |
 |          v          |                              |          ^            |
 |  +---------------+  |                              |  +-------+---------+  |
 |  | filter        |  |                              |  | destination NAT |  |
 |  +---------------+  |                              |  +-----------------+  |
 +----------+----------+                              |          ^            |
            |                                         |  +-------+---------+  |
            |                                         |  | mangle          |  |
            |                                         |  +-----------------+  |
            |                                         |          ^            |
            |                                         |  +-------+---------+  |
            |                                         |  | conn. tracking  |  |
            |                                         |  +-----------------+  |
            |                                         |          ^            |
            |                                         |  +-------+---------+  |
            |                                         |  | raw             |  |
            |                                         |  +-----------------+  |
            |                                         +-----------------------+
            v                                                    ^
+----------------------------------------------------------------+------------+
|                             LOCAL PROCESS                                   |
+-----------------------------------------------------------------------------+
cisco sg300 upload new firmware via xmodem
posted on 2016-08-07 09:30

After a reset a sg300 did not want to boot, both slices were corrupt. The result was an endless boot loop, where it'd try downloading the firmware but without success.

Using minicom the upload was easy:

  • Turn off switch.
  • If you use a USB-to-serial adapter with a nullmodem cable, most likely the interface is /dev/ttyUSB0. Look through your devices under /dev, in case it is /dev/ttyUSB1
  • minicom -s and setup everything.
  • baud rate 115200 (instead of the usual 9600 with most cisco devices), 8N1 (8 bits, no parity, 1 stop bit), no hardware or software flow control
  • minicom -D /dev/ttyUSB0
  • start switch, press ESC when prompted
  • ctrl-a, s and choose xmodem
  • navigate to the file, or choose the [Goto] menu at the bottom
  • Space to select the file, Enter.

A window like this should pop up and the upload should begin:

+-----------[xmodem upload - Press CTRL-C to quit]------------+                                                                 
|Sending sx300_fw-14502.ros, 57760 blocks: Give your local XMO|                                                                 
|DEM receive command now.                                     |                                                                 
|                                                             |                                                                 
|                                                             |                                                                 
|                                                             |                                                                 
|                                                             |                                                                 
|                                                             |                                                                 
+-------------------------------------------------------------+   

After the upload is finished, the switch should successfully reboot again and be factory reset.

mysql 5.7 reset root password
posted on 2016-08-04 14:04

Resetting the mysql root password changed with version 5.7, along with quite some other stuff.

If you have further trouble logging in with the local root account itself, these steps should fix all the problems.

  • stop mysql server (service mysql stop or ps aux | grep mysql to determine the PID and then kill -9 PID)
  • mysqld_safe --skip-grant-tables
  • mysql -Ne "UPDATE mysql.user SET plugin = 'mysql_native_password' WHERE User = 'root';" to fix the root PW not working
  • mysql -Ne "UPDATE mysql.user SET authentication_string=password('YOURNEWPASSWORD') WHERE user='root';"
  • killall -9 mysqld_safe
  • service mysql start
linux p2v via rsync
posted on 2016-08-03 13:09

To virtualize an existing and running linux system via rsync, install a fresh linux system. (Or do just the partitioning of the disk of the VM which you want the system to run on afterwards.) It helps to just use a single partition for /, otherwise you have to sync the mountpoints individually. In that case, create a script, in case you have to redo the data sync. (Which is likely to happen.)

If you installed a complete system first, you might consider backing up its /etc/fstab, in case you do not want to fix it afterwards by hand, but just copy-paste the config back.

Also, if you did not install a complete linux installation on the destination VM, you will have to fix the bootmanager (read: grub2 nowadays) after the initial sync. If you did a complete install, just exclude /boot from rsync.

Boot from a live disk like GRML and mount your partition(s) where you want the data to end up.

cd into the folder where you mounted the destination system's / to in your live-disk.

Then:

rsync -av --delete --progress --exclude=/dev --exclude=/sys --exclude=/proc --exclude=/mnt --exclude=/media --exclude=/boot <source-server>:/* .
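
In case you do have to fix grub2 afterwards, a rough sketch of doing so from the live disk (device names are placeholders, and the single-partition case is assumed):

mount /dev/sdX1 /mnt
for i in dev sys proc; do mount --rbind /$i /mnt/$i; done
chroot /mnt /bin/bash
grub-install /dev/sdX
update-grub
exit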
proxmox and VLANs
posted on 2016-07-15 13:07

This is a howto with a sample configuration on how to create a proxmox setup using vlans. No bonding is used.

  • network: 10.0.0.0/24
  • gateway ip: 10.0.0.1
  • proxmox ip: 10.0.0.2
  • VM ip: 10.0.0.3
  • vlan id: 222
  • physical NIC: eth0

proxmox

The physical NIC is set to manual, as are the corresponding vlan device and the main bridge; only the specific bridge-vlan adapter is of type inet static.

The main bridge uses the physical NIC, the vlan bridge uses the vlan adapter on top of the physical NIC.

/etc/network/interfaces:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto eth0.222
iface eth0.222 inet manual
    vlan-raw-device eth0

auto vmbr0
iface vmbr0 inet manual
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

auto vmbr0v222
iface vmbr0v222 inet static
    address     10.0.0.2
    netmask     255.255.255.0
    gateway     10.0.0.1
    bridge_ports eth0.222
    bridge_stp off
    bridge_fd 0

Naming convention is ethX.VLAN for the physical NIC's VLAN adapter. For the bridge, do vmbrXvVLAN.

Set up more ethX.VLAN / vmbrXvVLAN couples for more VLANs.

VM

Setup the network as usual, as if no VLAN is in place:

auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet static
    address     10.0.0.3
    netmask     255.255.255.0
    network     10.0.0.0
    broadcast   10.0.0.255
    gateway     10.0.0.1

Also set the VLAN from within the proxmox interface for your VM's desired adapter. (Tab Hardware in the VM's menu, double-click onto Network Device, select the main bridge, which is vmbr0 here, and add the VLAN id in the field VLAN Tag.)

switch

You have to have set up trunking on the physical switch's switchport that your proxmox hardware is using.

If you omit this, no vlan tagging will take place and you will have no connectivity even if your proxmox network config is solid.

apache htpasswd
posted on 2016-07-14 13:27

To password-protect a phpmyadmin interface via .htpasswd authentication through apache, try this in your vhost. (I prefer doing these things in the vhost instead of from within a .htaccess in the webfolder.)

vim /etc/apache2/sites-enabled/phpmyadmin.conf

...

<Directory /var/www/phpmyadmin/htdocs>
    Options -Indexes +ExecCGI +FollowSymLinks -MultiViews
    AllowOverride all

    AuthType basic
    AuthName "phpmyadmin pw"
    AuthUserFile    /var/www/phpmyadmin/.htpasswd
    Require   valid-user

</Directory>

...

Afterwards create a user (here called 'admin') in the .htpasswd file, which lies in the WEBROOT, not the DOCROOT of your hosting, so it cannot be changed via FTP access. (FTP is available only for htdocs folder in my example.)

htpasswd -c /var/www/phpmyadmin/.htpasswd admin

Then you will be prompted for entering a password twice.

A service apache2 reload to activate the changes afterwards and you are done.

snmp querying
posted on 2016-07-05 11:57

prerequisites

For testing your SNMP setup, it needs to have these defined (a minimal config sketch follows below the list):

  • agentaddress with protocol, public IP and port
  • community string (often 'public' or 'mrtgread')
  • snmpd service restart, if changes are pending (config was edited in the past but the service not restarted/reloaded yet)
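
A sketch of the corresponding lines in /etc/snmp/snmpd.conf; address and community string are just example values:

agentAddress udp:0.0.0.0:161
rocommunity public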

Then for querying: (this is an example)

snmpwalk -c public -v 2c <IP>

or

snmpget -c public -v 2c <IP> <OID>
determine ASN from shell
posted on 2016-06-29 01:35

whois can actually use different whois servers for querying. Their output differs, but whois.cymru.com is pretty decent:

whois -h whois.cymru.com <IP-OR-ASN>

I.e.

sjas@zen:~$ whois -h whois.cymru.com $(host sjas.de | head -1 | awk '{print $4}')
AS      | IP               | AS Name
24940   | 78.47.176.149    | HETZNER-AS , DE

In reverse, for looking up which organization is behind a specific AS number:

sjas@zen:~$ whois -h whois.cymru.com AS24940 | tail -1
HETZNER-AS , DE

Downside is, it will not lookup domains, only IP's or ASN's.

If no whois server is specified via the -h flag, whois.arin.net will be used for domains, IP addresses and AS numbers. whois.cymru.com is however more terse and often preferable.

nodejs debian install howto
posted on 2016-06-28 18:40

create dedicated node user

 # add user
 useradd -m -U -G sudo -s /bin/bash nodejs

 # create random password for copy-pasting
 pwgen -A0 16 1

 # set password
 passwd nodejs

install from package management and official script

# change user
su nodejs
# go to homedir
cd

# install
### you could also use for version 4, instead of v6 as we will do
##curl -sL https://deb.nodesource.com/setup_4.x | sudo bash -
curl -sL https://deb.nodesource.com/setup_6.x | sudo bash -
sudo apt-get install -y nodejs

# back to root
exit

If you just install the nodejs package without executing the script, npm, nodejs's package manager, will be missing.

fix user rights

 # remove complete sudo
 deluser nodejs sudo
 # let our user handle node install stuff
 # `visudo` for editing `/etc/sudoers`, then put in there:
 Cmnd_Alias NODE_CMDS = /usr/bin/npm
 nodejs ALL=(ALL) NOPASSWD: NODE_CMDS
linux find long paths
posted on 2016-06-28 12:10

Sometimes applications have problems with pathnames that exceed 1024 characters. I.e. this happens with certain backup applications. Here the pathnames came from apache cache files.

The easiest way to find those on a linux system is via find:

find / -regextype posix-extended -regex '.{1000,}'

This will show all paths that exceed 1000 characters in length.

linux: resize vm to full disk size
posted on 2016-06-11 10:38

After resizing the virtual harddisk of your virtual machine, several other steps are needed so you can utilize the additional space within the VM. This will only cover increasing the size, which will usually just work. Downsizing is the same steps just in reverse order, but there you can easily kill your currently still running system. Handle downsizing with very, very much care.

This guide assumes you have a single partition, which is used by LVM, where in you have your filesystem(s) in different logical volumes.

resize the partition

I have a vm with hostname 'test', which has a single disk (/dev/vda), with a single partition (/dev/vda1), which is used by lvm. LVM volume groups are usually named after the hostname (best approach I know of, so here test), and the logical volumes after what they are used for (root, swap), or where they are mounted (i.e. var_lib_vz, not shown here).

root uses ext4 as file system.

Initially the disk size was 50G and was increased to 500G.

After the disk size was increased, you can see the available space on the device:

root@test:~# lsblk
NAME             MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0               11:0    1 1024M  0 rom
vda              253:0    0  500G  0 disk
+-vda1           253:1    0   50G  0 part
  +-test-root 252:0    0 14.3G  0 lvm  /
  +-test-swap 252:1    0  976M  0 lvm  [SWAP]

Use a partition manager of your choice (fdisk or cfdisk for disks with an MBR, gdisk or cgdisk for disks using a GPT, or parted if you know what you are doing), delete your partition, recreate it with the maximum size, and reboot.

Then it should look like this, with adjusted partition size:

root@test:~# lsblk
NAME             MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0               11:0    1 1024M  0 rom
vda              253:0    0  500G  0 disk
+-vda1           253:1    0  500G  0 part
  +-test-root 252:0    0   49G  0 lvm  /
  +-test-swap 252:1    0  976M  0 lvm  [SWAP]

resize PV, LV, file system

First make LVM format the additional free space: (It will 'partition' it so it can work with it, effectively splitting it into chunks of 4MB, if I recall correctly.)

root@test:~# pvresize /dev/vda1
  Physical volume "/dev/vda1" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

Since the PV was already a member of the VG, no need to extend the VG.

Now for the actual volume:

root@test:~# lvextend -L 499G /dev/test/root
  Size of logical volume test/root changed from 49.04 GiB (12555 extents) to 499.00 GiB (127744 extents).
  Logical volume root successfully resized.

Here I specified it to be resized to 499GB. If I wanted to just use all available space, I'd do:

lvextend -l +100%FREE /dev/mapper/test-root

root@test:~# lvextend -l +100%FREE /dev/mapper/test-root
  Size of logical volume test/root changed from 450.00 GiB (115200 extents) to 499.04 GiB (127755 extents).
  Logical volume root successfully resized.

The -L is just easier to remember.

Lastly, resize the used filesystem:

root@test:~# resize2fs -p /dev/mapper/test-root
resize2fs 1.42.13 (17-May-2015)
Dateisystem bei /dev/mapper/test-root ist auf / eingehängt; Online-Größenänderung ist
erforderlich
old_desc_blocks = 1, new_desc_blocks = 32
Das Dateisystem auf /dev/mapper/test-root ist nun 130821120 (4k) Blöcke lang.

Verify it:

root@test:~# df -h
Dateisystem              Groesse Benutzt Verf. Verw% Eingehaengt auf
udev                      983M       0  983M    0% /dev
tmpfs                     201M    3.2M  197M    2% /run
/dev/mapper/test-root  492G    2.3G  469G    1% /
tmpfs                    1001M       0 1001M    0% /dev/shm
tmpfs                     5.0M       0  5.0M    0% /run/lock
tmpfs                    1001M       0 1001M    0% /sys/fs/cgroup
proxmox: qemu-img convert
posted on 2016-06-11 10:33

In proxmox you sometimes want to convert images from one type to another.

available types

QCOW2 (KVM, Xen)    qcow2
QED   (KVM)         qed
raw                 raw
VDI   (VirtualBox)  vdi
VHD   (Hyper-V) vpc
VMDK  (VMware)  vmdk
RBD   (ceph)    rbd

example

qemu-img convert -f raw -O qcow2 vm-100-disk-1.raw vm-100-disk-1.qcow2

-f is the input (first) image format, -O the output format. Look at the manpage to guess why -f is called -f.

gitolite emergency access
posted on 2016-06-11 10:32

In case you somehow managed to lock yourself out of your gitolite access list (lost key, commited misconfiguration, ...), there is an easy way to bypass this problem.

  1. ssh to your server
  2. su gitolite (or whatever user you use for running gitolite)
  3. cd
  4. git clone $HOME/repositories/gitolite-admin.git temp
  5. fix everything you need, exchange keys, do whatever you need to fix it
  6. git commit your changes
  7. gitolite push

Done. 7.) is gitolite push, not git push!

clamav
posted on 2016-06-11 10:30

For quick virus scans, if you have nothing else handy:

# install
apt install clamav

# install virus bases
freshclam

# scan
#everything
clamscan -r /
#specific folder, and show only 'hits'
clamscan -r -i /var/www

Some other valuable options:

  • --bell rings a bell
  • --remove deletes directly, files are gone!
  • --move=/some/path/here moves infected files it found to the given path
debian: build newest kernel
posted on 2016-06-05 19:23

prerequisites

You should have like 50G of harddisk space available. Since I currently don't (SSD), I use an external harddisk.

Also:

apt install build-essential kernel-package libssl-dev xz-utils ncurses-dev

These are like ~1.3G of additional files on your main system, if you install without the --no-install-recommends flag.

get sources

Head over to kernel.org and download the source of the kernel of your choice. You should know what stable means and whether you want to use a release candidate or not, else get the stable kernel.

I will use the current RC release.

extract, copy current config, start compiling

tar xJvf linux-4.7-rc1.tar.xz
cd linux-4.7-rc1/

# copy currently used configuration. otherwise use `make menuconfig`.
cp /boot/config-`uname -r` .config
yes "" | make oldconfig

# see how many cores you can use for compiling
# count amount of cores on top, then press 'q'
htop

# build, pass corecount with '-j' flag
make-kpkg -j4 --append-to-version "-sjas" --initrd buildpackage
# if the previous step failed, do these
## make clean
## rm -rf .config
## make menuconfig # save to .config and exit immediately
## then redo the previous make-kpkg

I also had a more specific error last time:

make[4]: *** No rule to make target 'debian/certs/benh@debian.org.cert.pem', needed by 'certs/x509_certificate_list'.  Stop.
Makefile:985: recipe for target 'certs' failed

Solution was to comment the CONFIG_SYSTEM_TRUSTED_KEYS line out from my .config.
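
Once the build succeeds, make-kpkg drops the resulting .deb packages one directory up, and installing them should be just a dpkg call (the exact package names depend on the version and the --append-to-version string):

dpkg -i ../linux-image-*.deb ../linux-headers-*.deb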

linux wifi cli handling
posted on 2016-06-04 14:56

Here's a walkthrough on using a linux computer with a wifi card to access wlans via the cli tools. This guide is debian specific and assumes you use one of the WPA protocols for encrypting your wifi.

available tools

  • ip
  • iw
  • iwlist
  • iwconfig
  • dhclient
  • wpa_supplicant
  • wpa_passphrase
  • /etc/network/interfaces
  • wicd

You can use all of these, but it just happens you do not really need them all.

discern wlan IF

iw dev:

phy#0
    Interface wlan0
    ifindex 2
    wdev 0x1
    addr 00:22:66:88:00:22
    type managed

wlan0 is my wifi interface and will be used as an example here from now on.

enable IF (if needed)

ip l s dev wlan0 up

find networks

iw dev wlan0 scan | grep -i -e ssid -e signal

brings:

    signal: -79.00 dBm
SSID: ng-2.4G
signal: -85.00 dBm
SSID: ng-5G

So you know the available networks as well as the signal quality.

An alternative would be: iwlist wlan0 scan | grep -i -e ssid -e signal:

    Quality=26/70  Signal level=-84 dBm  
    ESSID:"ng-5G"
    Quality=36/70  Signal level=-74 dBm  
    ESSID:"ng-2.4G"

I will choose ng-2.4G for the next examples.

set up WPA and run daemon

# i just do not like storing these under /etc
mkdir /root/.wpa
# you are prompted for the passphrase
# tee is used to show the output written to the file also directly at the shell
wpa_passphrase ng-2.4G | tee -a ~/.wpa/wpa_supplicant.conf
## hide contents from others since the original pass is included as comment
chmod 600 ~/.wpa/wpa_supplicant.conf
# run daemon in the background, automatically brings IF up
wpa_supplicant -B -i wlan0 -c ~/.wpa/wpa_supplicant.conf

So now your wpa_supplicant.conf should contain something like this:

root@zen:/home/sjas/blog# wpa_passphrase ng-2.4G MY_PASSWORD
network={
    ssid="ng-2.4G"
    #psk="MY_PASSWORD"
    psk=0b1846ee861de86ebbf663bcd5087ba6cc2bbf0b3d9125361c52e95eef28ef6a
}

This is likely not everything you need to connect. So either discern everything else that is missing parameter-wise by hand, or use wicd if you have a desktop environment installed.

Once you connected to the wifi of your choice, do ps aux | grep wpa_supplicant and see how it was started, and which config references via -c. Copy its contents over to your wpa_supplicant.conf.

set up interface in /etc/network/interfaces for automatic connecting

manual wlan0
iface wlan0 inet dhcp
    pre-up wpa_supplicant -B -D wext -i wlan0 -c /etc/wpa_supplicant.conf
    post-down killall -q wpa_supplicant

usage

# enable
ifup wlan0

#disable
ifdown wlan0

how about several wlan configurations?

This is what I might use in my case:

/etc/network/interfaces:

manual wlan0
iface home2 inet dhcp
    pre-up wpa_supplicant -B -D wext -i wlan0 -c /var/lib/wicd/configurations/c404150241b4
    post-down killall -q wpa_supplicant
iface home5 inet dhcp
    pre-up wpa_supplicant -B -D wext -i wlan0 -c /var/lib/wicd/configurations/c404150241b3
    post-down killall -q wpa_supplicant

These have to be used a little differently, i.e.:

## activate one network:
ifup wlan0=home2
## and deactivate
ifdown wlan0

## or activate the other one:
ifup wlan0=home5
## and deactivate
ifdown wlan0

This may seem quite a bit unwieldy, but I am just fed up with network-manager or its relative, wicd, by now.

iwlwifi problems
posted on 2016-06-04 14:49

If running into problems with a intel-based wifi card, try these:

/etc/modprobe.d/iwlwifi.conf:

# disable 802.11n support and enable software crypto:
options iwlwifi 11n_disable=1
options iwlwifi swcrypto=1

# or with slow network speed in n-mode, try antenna aggregation:
options iwlwifi 11n_disable=8
gnu parallel instead of bash for loops
posted on 2016-05-24 18:15

If you happen to have to iterate over a list of files/strings/whatever, gnu parallel comes in handy after you installed it from your linux distro's package manager.

Then instead of:

for i in *; do echo "test $i"; done

You can simply do:

ls -1 | parallel echo "test "

# alternatively:
parallel echo 'test ' ::: `ls -1`

If you happen to have more complex scripts, simply double-quote the commands handed to parallel. Use {} if you happen to need a reference to the current variable.

I.e.

parallel "echo 'number {} echoed'" ::: `seq 1 10`

which gives:

number 1 echoed
number 2 echoed
number 3 echoed
number 4 echoed
number 5 echoed
number 6 echoed
number 7 echoed
number 8 echoed
number 9 echoed
number 10 echoed

At first this does not look like much, but how often have you messed up for loops? The example is rather made up, but this approach works better the more complex your use cases become.

debian permanent ctrl on caps in pty's and tty's
posted on 2016-05-23 00:52

To set both the pseudoterminals and the virtual consoles up to have CTRL instead of CAPSLOCK:

vim /etc/default/keyboard:

XKBOPTIONS="ctrl:nocaps"

Save and run:

dpkg-reconfigure -phigh console-setup
desktop installation documentation
posted on 2016-05-22 18:47

After running debian testing became annoying (kworker threads in state D killing network access), it was time to reinstall. This serves as documentation for the next time.

OS install

  • install debian 8 with encrypted lvm for root and swap partitions
  • use usb-ethernet adapter, wlan firmware is missing: iwlwifi-7265-9.ucode, iwlwifi-7265-8.ucode
  • install it via usbstick (copy it from another usb stick from another virtual console to /lib/firmware)
  • or do it later after the installation (copy to /lib/firmware, install linux-firmware, linux-firmware-nonfree after adding the apt sources for contrib and non-free, then modprobe -r b43 and modprobe -r iwlwifi, not sure what exactly did the trick last time)
  • kde as regular window manager, desktop env, ssh server
  • reboot, enter grub, add nomodeset to kernel line, 3.16 kernel display does not work and just stays black
  • control on capslock

enable debian testing for newer kernel

cat << EOF >> /etc/apt/preferences.d/sjas
Package: *
Pin: release a=stable
Pin-Priority: 700


Package: *
Pin: release a=jessie-backports
Pin-Priority: 660


Package: *
Pin: release a=unstable
Pin-Priority: 90
EOF

fix apt sources

cat << EOF > /etc/apt/sources.list
deb http://ftp.uni-erlangen.de/debian/ jessie main non-free contrib
deb-src http://ftp.uni-erlangen.de/debian/ jessie main non-free contrib

deb http://security.debian.org/ jessie/updates main non-free contrib
deb-src http://security.debian.org/ jessie/updates main non-free contrib

# jessie-updates, previously known as 'volatile'
deb http://ftp.uni-erlangen.de/debian/ jessie-updates main non-free contrib
deb-src http://ftp.uni-erlangen.de/debian/ jessie-updates main non-free contrib


deb http://ftp.us.debian.org/debian/ unstable main contrib non-free
deb-src http://ftp.us.debian.org/debian/ unstable main contrib non-free
EOF

install

apt update -y

Then apt search linux-image and see what is a current kernel

apt install -y linux-image-<CURRENT_KERNEL>
apt install -y i3 htop openvpn vim git terminator firmware-linux* firmware-iwlwifi parted tree parallel mlocate apt-file hdparm nmon rsync mc ethstatus nmap traceroute tcpdump screen iftop iotop mytop curl wget sysstat bash-completion multitail chromium tmux ansible pwgen pv clusterssh clustershell freerdp-x11 rdesktop tmux libreadline-gplv2-dev python-apt aptitude

terminator config

shortcuts

  • ctrl-shift-hjkl for pane movement
  • ctrl-shift-f8/f10/f9 for broadcast all/group/none
  • ctrl-(shift)-tab for (prev)/next tab

colors

  • solarized, customize red/blue/pink to be be lighter
  • background 0.7 transparency
  • green blinking cursor

other

  • infinite scrollback
  • focus follows mouse

plugins

  • activity watch
  • inactivity watch
  • terminalshot
  • logger

i3 config

In ~/.i3/config the following has to be adjusted: (jkl; instead of hjkl simply SUCKS)

# start browser
bindsym $mod+g exec google-chrome

# change focus
bindsym $mod+h focus left
bindsym $mod+j focus down
bindsym $mod+k focus up
bindsym $mod+l focus right

# move focused window
bindsym $mod+Shift+h move left
bindsym $mod+Shift+j move down
bindsym $mod+Shift+k move up
bindsym $mod+Shift+l move right

# split in horizontal orientation
bindsym $mod+semicolon split h

mode "resize" {
        # These bindings trigger as soon as you enter the resize mode

        # Pressing left will shrink the window’s width.
        # Pressing right will grow the window’s width.
        # Pressing up will shrink the window’s height.
        # Pressing down will grow the window’s height.
        bindsym h resize shrink width 10 px or 10 ppt
        bindsym j resize grow height 10 px or 10 ppt
        bindsym k resize shrink height 10 px or 10 ppt
        bindsym l resize grow width 10 px or 10 ppt

        # same bindings, but for the arrow keys
        bindsym Left resize shrink width 10 px or 10 ppt
        bindsym Down resize grow height 10 px or 10 ppt
        bindsym Up resize shrink height 10 px or 10 ppt
        bindsym Right resize grow width 10 px or 10 ppt

        # back to normal: Enter or Escape
        bindsym Return mode "default"
        bindsym Escape mode "default"
}

bar {
        #status_command sudo i3status --config /home/sjas/.i3/status.conf
        status_command i3status -c /home/sjas/.i3/i3status.conf
}


# kde-like screen locking ctrl-alt-l
bindsym Control+mod1+l exec i3lock

# make two monitors show up as one
#fake-outputs 3840x1080+0+0

cp -va /etc/i3status.conf /home/sjas/.i3/i3status.conf

Then vim /home/sjas/.i3/i3status.conf:

general {
        colors = true
        interval = 1
}

order += "ipv6"
order += "disk /"
order += "run_watch DHCP"
order += "run_watch VPN"
order += "wireless wlan0"
order += "ethernet eth0"
order += "volume master"
order += "battery 0"
order += "load"
order += "tztime local"

load {
        format = "⚇ %1min"
}

volume master {
        format = "♪: %volume"
        format_muted = "♪: muted (%volume)"
        device = "default"
        mixer = "Master"
        mixer_idx = 0
}

That's it so far, other things may be appended here eventually.

Don't forget to stop and disable bluetooth.

current blogpost-creation shortcut
posted on 2016-05-22 18:42

This is put here for documentation purposes:

createpost(){
TITLE=$1
TEMP=$(date --rfc-3339=seconds)
# number of the newest existing post, to be incremented for the new filename
CURRENT_COUNT=$(basename $(find . -iname "*.post" | sort | tail -1 ) | cut -c1- | sed 's/-.\+//g')
COUNT=$(( $CURRENT_COUNT + 1 ))
# keep only 'YYYY-MM-DD HH:MM' from the full timestamp
DATE=${TEMP:0:16}

FINAL_FILENAME=$COUNT-$TITLE.post
# strip a leading number (if any) and turn dashes into spaces for the title
FINAL_TITLE=$(echo $1 | sed 's/[[:digit:]]\+-//' | sed 's/-/ /g')

cat << EOF >> $FINAL_FILENAME
;;;;;
title: $FINAL_TITLE
tags: todo
date: $DATE
format: md
;;;;;

EOF

vim $FINAL_FILENAME
}
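
A usage sketch, assuming the working directory contains the existing posts and the newest one is called something like 147-older-post.post:

createpost my-new-topic
# -> creates 148-my-new-topic.post and opens it in vim, with a header like:
#
# ;;;;;
# title: my new topic
# tags: todo
# date: 2016-05-22 18:42
# format: md
# ;;;;;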
DNS: resolution and reverse resolution script
posted on 2016-05-21 20:54

This is a quick-and-dirty for loop for checking a list of DNS A resource records using dig. CNAMEs are not handled as they should be and end up on separate lines, so if any are in use, the output needs a look and some curating before it can be parsed.

for i in sjas.de blog.sjas.de; do echo -n $'\e[33;1m'$i$'\e[0m '; TEMP=`dig +short $i`; echo -n "$TEMP "; TEMP=`dig -x $TEMP +short`; echo ${TEMP%.}; done | column -t

Instead of editing the for-loop, it might be helpful using a heredoc instead:

echo; cat << EOF | while read i; do echo -n $'\e[33;1m'$i$'\e[0m '; TEMP=`dig +short $i`; echo -n "$TEMP "; TEMP=`dig -x $TEMP +short`; echo ${TEMP%.}; done | column -t; echo

Paste this into the shell, followed by a paste of lines of the domains you want, and type EOF afterwards.

Example:

sjas@sjas:~/blog$ echo; cat << EOF | while read i; do echo -n $'\e[33;1m'$i$'\e[0m '; TEMP=`dig +short $i`; echo -n "$TEMP "; TEMP=`dig -x $TEMP +short`; echo ${TEMP%.}; done | column -t; echo
sjas.de
ix.de
asdf.de
EOF

sjas.de  78.47.176.149  static.149.176.47.78.clients.your-server.de
ix.de    193.99.144.80  redirector.heise.de
asdf.de  80.237.132.85  wp078.webpack.hosteurope.de

sjas@sjas:~/blog$ 
plesk: show mailpasswords
posted on 2016-05-19 07:08

To show all passwords for all mail accounts on a plesk installation, do this:

/usr/local/psa/admin/sbin/mail_auth_view
linux: lsyncd setup
posted on 2016-05-18 19:08

This was done on ubuntu, but should work accordingly on other linux systems.

install

apt-get install lsyncd
mkdir /etc/lsyncd
mkdir /var/log/lsyncd
touch /var/log/lsyncd/{lsyncd.log,lsyncd-status.log}

I did not know whether the folders and files under /var/log are created automatically, so I just created them myself.

configuration

vim /etc/lsyncd/lsyncd.conf.lua and do something like this:

settings = {
    logfile = "/var/log/lsyncd/lsyncd.log",
    statusfile = "/var/log/lsyncd/lsyncd-status.log",
    statusintervall = 20,
    nodaemon = false,
    maxProcesses = 5,
    maxDelays = 0
}

sync{
        default.rsyncssh,
        source="/var/www/",
        host="<OTHER_HOSTNAME>",
        targetdir="/var/www/",
        rsyncOpts = {"-av", "--delete"}
}
sync{
        default.rsyncssh,
        source="/home/test/",
        host="<OTHER_HOSTNAME>",
        targetdir="/home/test/",
        rsyncOpts = {"-av", "--delete"}
}

This config could be written differently. I don't know lua or what this supposedly is, so I stick with this very basic config.

The configuration is only needed on the host sending data. Of course, ssh has to be set up accordingly.

notes

If you happen to have a lot of files, initial startup can take quite a while. First lsyncd starts and gets a list of all files (or whatever it is that it does), and afterwards the rsync subprocesses are started.

If you want to stop it, it might very likely hang. In that case do ps aux | grep -e lsync -e rsync and see what is running after you already did service lsyncd stop.

To see what exactly the server does, try iostat -xzd 1 from the sysstat package. While %util is at 100%, something is eating your IOPS, but most likely it is still doing its work.

Also use multitail /var/log/lsyncd/* (package multitail) to see what lsyncd does. No logs are written during the initial indexing, and almost none by the initial rsyncs of the first full sync.

Afterwards the logs will show entries for every new inotified and synced file. Test this by touching files and tailing the logs. :)

inotify's exhausted?

In case you get such an error, you ran into the inotify limit.

To check, do cat /proc/sys/fs/inotify/max_user_watches. If it's a number like 8192, this is simply too low. (You want to watch more files than that.)

Temp fix:

echo 1048576 > /proc/sys/fs/inotify/max_user_watches

Permanent fix:

vim /etc/sysctl.conf

fs.inotify.max_user_watches=1048576

Save and exit, then apply it with sysctl -p (or reboot).

proxmox: mount NFS share in proxmox
posted on 2016-04-31 20:56

preliminaries

We have two servers:

  • 10.0.0.2 is the proxmox instance
  • 10.0.0.3 is the NFS server ip

create NFS share on NFS server

On the Server you want to create the NFS share:

Create the folder you want to export:

mkdir -p /srv/export/testmount

Add an entry via vim /etc/exports:

/srv/export/testmount     10.0.0.2(rw,sync,no_subtree_check,no_root_squash)

Then:

# make the export available
exportfs -ar

# show current exports
showmount -e localhost
#or
exportfs -v

If you have a firewall, allow all traffic from 10.0.0.2.
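
Before adding it in the web interface, a quick manual test from the proxmox node shows whether the export works at all (a sketch; /mnt/nfstest is just an arbitrary mount point):

# on the proxmox node (10.0.0.2); mount -t nfs needs the nfs-common package
showmount -e 10.0.0.3
mkdir -p /mnt/nfstest
mount -t nfs 10.0.0.3:/srv/export/testmount /mnt/nfstest
touch /mnt/nfstest/testfile && rm /mnt/nfstest/testfile
umount /mnt/nfstest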

proxmox

  1. left frame, click on 'datacenter'
  2. tab 'storage', button 'add', choose 'NFS'
  3. id: "servername_of_nfs" or whatever you like
  4. server: 10.0.0.3
  5. export: /srv/export/testmount
  6. choose everything, if you do not want to filter for just disk images, and apply

If you then click on your newly added storage in the left frame below a hypervisor, you should be able to use all tabs. Otherwise you would get a 'connection timeout' error of sorts.

openvpn and GNU expect
posted on 2016-04-31 20:56

Since I often need openvpn connections and like to start them from within terminals to see what actually happens, but dislike having to enter AD credentials every time, here is a solution using expect.

Security-wise that's questionable, but I don't need a lecture on that.

Replace <CONFIGNAME>, <USERNAME> and <PASSWORD>, of course.

#!/usr/bin/expect -f



## SETUP

# handle ctrl-c
proc sigint_handler {} {
  # send ctrl-c to openvpn process
  send \x03
  # wait for it to die
  sleep 1
  # quit expect session
  exit
}

# catch ctrl-c
trap sigint_handler SIGINT



## RUN

# start shell...
set timeout -1
spawn $env(SHELL)
match_max 100000

# ... and openvpn within there
send -- "sudo openvpn --config <CONFIGNAME>\r"

# username prompt
expect -exact "^[\[0;1;39mEnter Auth Username: ^[\[0m"
send -- "<USERNAME>\r"

# password prompt
expect -exact "^[\[0;1;39mEnter Auth Password: ^[\[0m"
send -- "<PASSWORD>\r"

# make expect wait so it does not exit immediately
expect eof

Replace the ^['s above (four times in the code above) with literal escapes. You can insert these in vim by pressing ctrl-v + Esc in linux.

I always set up openvpn to also push DNS settings (resolvconf package and stuff), so there is some CTRL-c-handling necessary so it works and everything closes cleanly.

The escape codes for the expect lines I could discern by using autoexpect.

Free Ebook: Linux Kernel in a Nutshell
posted on 2016-04-21 20:47

To get a free version of Greg Kroah-Hartman's 'Linux Kernel in a Nutshell', he hosts PDFs of all sections of the book on his homepage here.

There you can download the files bundled in a .zip file and merge them via pdfsam, the best PDF split/merge tool on the planet.

The assembling order of all single files is like this:

  • title.pdf
  • LKNSTOC.fm.pdf
  • part1.pdf
  • ch00.pdf
  • ch01.pdf
  • ch02.pdf
  • ch03.pdf
  • ch04.pdf
  • ch05.pdf
  • ch06.pdf
  • part2.pdf
  • ch07.pdf
  • ch08.pdf
  • part3.pdf
  • ch09.pdf
  • ch10.pdf
  • ch11.pdf
  • colo.pdf
  • part4.pdf
  • appa.pdf
  • appb.pdf
  • LKNSIX.fm.pdf
BTRFS: filesystem rebalance issues
posted on 2016-04-13 06:38

Sometimes a btrfs filesystem will show wrong usage when df -h is used. This can easily be off by several GB. (In my particular case I had 1GB of space left; after a rebalance it was 6GB...)

Usually a

btrfs filesystem balance /<path-if-needed>

is all you need to do.

However:

root@sjas:/# btrfs fi bal /
ERROR: error during balancing '/': No space left on device
There may be more info in syslog - try dmesg | tail
root@sjas:/#

May stop you short.

Try:

root@sjas:/# btrfs fi balance start -dusage=10 /
Done, had to relocate 0 out of 33 chunks
root@sjas:/# 

Little more explanation provided by the man page:

    usage=<percent>, usage=<range>
       Balances only block groups with usage under the given percentage. The value of 0 is allowed and will clean up
       completely unused block groups, this should not require any new work space allocated. You may want to use usage=0 in
       case balance is returning ENOSPC and your filesystem is not too full.

       The argument may be a single value or a range. The single value N means at most N percent used, equivalent to ..N
       range syntax. Kernels prior to 4.4 accept only the single value format. The minimum range boundary is inclusive,
       maximum is exclusive.

Afterwards try re-running btrfs fi bal / for possibly even more available space, or use a bigger value for the usage flag.
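
A common pattern (a sketch, not something from the man page) is simply re-running the balance with increasing usage values until enough chunks were relocated:

for u in 0 5 10 20 40; do
    btrfs balance start -dusage=$u /
done

# check the result
btrfs filesystem df /
df -h /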

openvswitch: intro
posted on 2016-04-09 23:16

This is for debian testing branch, packages installed from the repository. openvswitch is used without a SDN controller.

prerequisites

Don't use regular linux bridges on your system, you will run into troubles, as far as I heard. (Didn't feel like testing this out myself.)

install packages

apt install openvswitch-switch

setup

# init database
ovs-vsctl init
# check if initialization worked
ovsdb-tool show-log
# find out db file
ovsdb-tool --help
# emergency reset in case you need it
ovs-vsctl emer-reset

# create your virtual switch
ovs-vsctl add-br ovs0
# show your virtual switch
ovs-vsctl list-br
ovs-vsctl add-port ovs0 ovs0eth0
# show your ports on the switch
ovs-vsctl list-ports ovs0

# show current configuration
ovs-vsctl show
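
To give the host itself an address on that switch, an internal port can be used. A sketch, assuming the port name mgmt0 and the address 192.168.50.1/24 are free to be used:

# add an internal port to the bridge
ovs-vsctl add-port ovs0 mgmt0 -- set interface mgmt0 type=internal

# bring it up and give it an address
ip link set dev mgmt0 up
ip addr add 192.168.50.1/24 dev mgmt0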
Linux: create verifyable disk images with dcfldd
posted on 2016-04-01 20:33

dd (destroyer of disks, haha) can create block-level image copies, but it gives you no way to verify them, so try dcfldd (crap name, TBH):

dcfldd if=/dev/sdX of=/dev/sdY hash=sha256 hashwindow=50M hashlog=<FILEPATH>

Don't use md5 for hashing.
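
To double-check that source and destination really ended up identical, both block devices can also be hashed afterwards. A sketch; this is only meaningful when both devices have the same size, and it reads both in full:

sha256sum /dev/sdX /dev/sdY
# both checksums have to match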

arping: duplicate ip address detection
posted on 2016-03-31 22:50

Duplicate IP's within your subnet are a problem that you can detect via arping. It sends a layer2 ARP REQUEST to detect if an IP is already known within the network.

Usually only this is sufficient for usage from the shell:

arping -D <IP>

When you simply receive a response on the commandline, the IP is in use already. If you use vlans, you have to specify your interface with -I, too.

If you want to use this from within scripts, you might want this:

arping -D -w2 -c2 -I <INTERFACE> <IP>
echo $?

With -D, iputils-arping exits with zero when no reply was received (no duplicate found), and non-zero when the IP is already in use.

One thing to keep in mind is that some linux distributions have several arping packages available, but only one is the one you want. On debian jessie, for example, you get these two:

arping/stable 2.14-1 amd64
  sends IP and/or ARP pings (to the MAC address)

iputils-arping/stable,now 3:20121221-5+b2 amd64 [installed]
  Tool to send ICMP echo requests to an ARP address

You need the iputils-arping one, if you happen to use debian.

mdadm cheatsheet
posted on 2016-03-29 07:44

Since I have had to look this crap up one time too often...

# create new multiple device disk
mdadm --create MD_DEV options...
    -l1 -n2 --metadata=0.90 DEV1 DEV2

# assemble previously created multiple device disk
mdadm --assemble MD_DEV options...
    --scan / -s
    --run / -R
    --force / -f
    --update=? / -U
    --readonly / -o

# similar to --create, but...
mdadm --build MD_DEV options...
    DO NOT USE ANYMORE

# bread and butter command
mdadm --manage MD_DEV options...
    --add / -a
    --re-add
    --remove / -r
    --fail / -f
    --replace
    --run / -R
    --stop / -S
    --readonly / -o
    --readwrite / -w

# also bread and butter command
mdadm --misc options... DEVICES
    --query / -Q    (MD_DEV)
    --detail / -D   (MD_DEV)
    --examine / -E  (DEV)
    --examine-bitmap / -X (DEV)
    --run / -R 
    --stop / -S
    --readonly / -o
    --readwrite / -w
    --test / -t
    --wait / -w
    --zero-superblock (DEV)

# haven't used these yet
mdadm --grow options device
mdadm --incremental device
mdadm --monitor options...
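
A worked example for a two-disk RAID1, as a sketch; /dev/sdb1, /dev/sdc1 and /dev/sdd1 are placeholders:

# create the array
mdadm --create /dev/md0 -l1 -n2 /dev/sdb1 /dev/sdc1

# watch the initial sync and check the details
cat /proc/mdstat
mdadm --detail /dev/md0

# persist the array definition so it assembles on boot (debian paths)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

# replace a failed disk
mdadm --manage /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
mdadm --manage /dev/md0 --add /dev/sdd1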
btrfs subvolume folder list
posted on 2016-03-27 12:16

A list of good candidate folders to put into subvolumes separate from the root filesystem:

  • /boot/grub2/*
  • /opt
  • /srv
  • /tmp
  • /usr/local
  • /var/crash
  • /var/lib/{mailman,named,pgsql,mysql}
  • /var/log
  • /var/opt
  • /var/spool
  • /var/tmp
csync2 setup
posted on 2016-03-21 17:19:01

This is done without SSL, since all servers are within their intranet anyway.

install

apt install csync2 -y

generate key

csync2 -k /etc/csync2.key

/etc/csync2.cfg

nossl * *;

group MYGROUP
{
        host NODE1;
        host NODE2;

        key /etc/csync2.key;

        include /www/htdocs;
        exclude *~ .*;
}

/etc/xinetd.d/csync2

  service csync2
  {
      flags = IPv4
      socket_type         = stream
      protocol            = tcp
      wait                = no
      user                = root
      server              = /usr/sbin/csync2
      server_args         = -i
      disable             = no
  }

copy all files to all nodes

scp /etc/csync2* node2:/etc/

restart daemon

service xinetd restart

usage

# sync stuff
csync2 -xv

# show differences
csync2 -T
csync2 -TT

# dry-run
csync2 -xvd

# force sync everything
csync2 -rf /
openssl: s_client to check certificates
posted on 2016-03-18 13:47:07

In short:

openssl s_client -connect <domain.de>:443
Linux: mount LUKS / encrypted lvm btrfs subvolume partition
posted on 2016-03-13 20:37:55

When fixing more complex linux installations, you may come across LUKS partitions. Here is the workflow for a luks + lvm + btrfs setup:

# first identify your partition
lsblk -f

# open the encrypted container
# tabbing helps, if you tend to forget commands
cryptsetup luksOpen /dev/sdX1 my_encrypted_partition
# now after you entered the password, it should pop up under /dev/mapper/my_encrypted_partition

# activate all the volume groups
vgchange -aay

# create your mount destinations
mkdir /mnt/asdf
mkdir /mnt/qwer

# mount the lvm partitions, so you can work with them
# VGname = your LVM volume group
# LVname = your LVM logical volume
# SVname = your btrfs subvolume name
mount /dev/mapper/VGname/LVname /mnt/asdf
mount /dev/mapper/VGname/LVname /mnt/qwer -o subvol=@SVname

That should be all you need to fix things, in case you need it. Whether it is useful to have both LVM and btrfs may be doubted, since btrfs handles volume management by itself, too.
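
To tidy up afterwards, just do everything in reverse (matching the names used above):

umount /mnt/qwer
umount /mnt/asdf

# deactivate the volume groups again
vgchange -an

# close the LUKS container
cryptsetup luksClose my_encrypted_partition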

Linux Kernel: hello world
posted on 2016-03-12 17:26

intro

This is most easily done via kernel modules. (TBH I don't know if it is feasible any other way, besides building a completely new kernel.)

So for this you should know how to handle kernel modules:

  • lsmod = show loaded kernel modules

  • insmod <module> = load kernel module

  • rmmod <module> = unload kernel module

  • modprobe <module> = load kernel module and, if needed, its dependencies

  • modprobe -r <module> = unload kernel module and unneeded dependencies

This guide is debian-specific.

prerequisites

#install build environment
apt install build-essential

# look up your kernel version
uname -a 
apt search linux-headers | grep headers
apt install linux-headers-<YOUR_VERSION_HERE>
mkdir /lib/modules/$(uname -r)/build/

actual module
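
A minimal sketch of the classic hello-world module; nothing fancy, just the standard kbuild boilerplate. Note that the recipe lines in the Makefile have to be indented with tabs:

mkdir -p ~/hello-module && cd ~/hello-module

# the module source
cat << 'EOF' > hello.c
#include <linux/init.h>
#include <linux/module.h>

static int __init hello_init(void)
{
        printk(KERN_INFO "hello world\n");
        return 0;
}

static void __exit hello_exit(void)
{
        printk(KERN_INFO "goodbye world\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
EOF

# the kbuild Makefile (recipe lines indented with a tab!)
cat << 'EOF' > Makefile
obj-m += hello.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
EOF

# build, load, check dmesg, unload
make
insmod hello.ko
dmesg | tail -1
rmmod hello
dmesg | tail -1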

bash: add/remove leading zero to all filenames
posted on 2016-03-12 10:39:35

add leading zero to all filenames in current folder

for i in *; do mv $i 0$i; done

remove leading zero from all filenames starting with four digits in the current folder

for i in $(ls -1 0{0..9}{0..9}{0..9}*); do mv $i ${i#0}; done
typo3: fix dark pictures
posted on 2016-03-10 12:39:14

If after an update, a migration or for whatever reason your typo3 installation shows pictures being too dark, your installation very likely uses the wrong color space. Like RGB instead of sRGB.

To confirm this, grep for colorspace RGB in your typo3 installation files.

You very likely have to change about three occurrences in t3lib/class.t3lib_stdgraphic.php:

-colorspace RGB

is to be replaced with

-colorspace sRGB
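
A sketch of doing both the check and the replacement from the shell, run from the installation's docroot (the file path may differ between typo3 versions):

# find the occurrences
grep -n -- '-colorspace RGB' t3lib/class.t3lib_stdgraphic.php

# replace them in place
sed -i 's/-colorspace RGB/-colorspace sRGB/g' t3lib/class.t3lib_stdgraphic.php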

Log into the backend afterwards, and click on the spark symbol on top to clear all caches. (After you have chosen your site in typo3's file tree.)

If you do not have a login, create a file called ENABLE_INSTALL_TOOL in typo3conf, and comment the original line out and add this line:

$TYPO3_CONF_VARS['BE']['installToolPassword'] = 'bacb98acf97e0b6112b1d1b650b84971';

in typo3conf/localconfiguration.php, so you can access the install tool at domainname.de/typo3/install with the default password joh316. There you can add a new admin user.

After having cleared the caches and confirming everything still works as expected, remove the ENABLE_INSTALL_TOOL file and delete your newly created backend user, and fix the install tool password in localconfiguration.php back again.

To be exact there is no reason to use graphicsmagick or change any configuration variables besides the color spaces for image rendering and clearing the caches afterwards.

Linux: find deleted files with open filehandles
posted on 2016-03-09 18:47:25
lsof -nP +L1
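
If one of the listed files is still eating disk space, the space can usually be reclaimed without restarting the process by truncating it through the still-open file descriptor under /proc. A sketch; PID and FD are placeholders taken from the lsof output:

# lsof columns of interest: PID and FD (drop the mode letter, e.g. '7u' -> 7)
: > /proc/<PID>/fd/<FD>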
apache: .htaccess redirect all
posted on 2016-03-08 17:11:39

To redirect every incoming request to a new URL:

RewriteEngine on 
RewriteRule ^(.*)$ http://www.mynewdomain.com/$1 [R=301,L]

This will redirect everything; try it with 302 first instead of 301. 301 happens to be permanent: if you mess something up, people have to clear their browser caches...

GNU screen: how to scroll
posted on 2016-03-03 00:31:26

Since I forgot this so very often:

CTRL+a [

use PGUP + PGDN

hit ENTER to escape again
GNU screen: log to file
posted on 2016-03-02 00:34:29

This sequence starts logging, repeat it to stop logging again. From the manpage:

C-a H       (log)         Begins/ends logging of the current window to the file "screenlog.n".

See the folder where you started screen from for screenlog.0 usually.

This can be turned on/off, will append to an existing log file.

Servers and Java WebGUI problems
posted on 2016-02-29 16:59:00

Often server and device manufacturers ship their hardware with java-based administration tools. Even more often, this stuff does not work properly due to your local java install not being up to date or complete. And if your installation is fine, the access to the server in place is restricted by security policies. (...)

Note: this guide is mostly linux specific. For Windows or OSX the steps may differ, even though Java is the same.

The general steps for using this kind of software are clicking a link and getting a .jnlp file download offered. Afterwards you have to double-click that file, or maybe your system already knows how to open it. Java should then start and run your application.

If not, here are solutions to the most common problems.

no java web start application to start

  • no proper program for running the java application

If you can download the .jnlp file, but cannot open it (even though you know you installed java!), your system is simply lacking the proper program. Where your java binary is located, there must also exist a javaws binary ("Java Web Start"), which you need to open these files.

If your system does not use javaws for starting them automatically, just download the file and try it from the shell:

javaws <filename>.jnlp

Verify it is installed at all:

javaws -version

(All Java programs only use a single dash for their flags, only god knows why.)

app start prohibited due to security policy problems

In your shell, either one of these should work:

ControlPanel
# or
jcontrol

Then the Java Control Panel should start.

Do this

Choose the Security tab, and add your site URL to the Exception Site List. Afterwards your app should start upon reopening the .jnlp file.

apache: set multiple origin domains
posted on 2016-02-19 13:14:51

Exchange MYDOMAIN\.DE and WWW\.MYDOMAIN\.DE with your domain:

SetEnvIf Origin "^http(s)?://(.+\.)?(MYDOMAIN\.DE|WWW\.MYDOMAIN\.DE)$" origin_is=$0 
Header always set Access-Control-Allow-Origin %{origin_is}e env=origin_is

Put these lines in a config of your choice, likely in /etc/apache2/include.d/<filename>.conf.

openvpn: dns pushed on linux
posted on 2016-02-14 22:45:53

To have openvpn push its DNS server successfully through the tunnel, you have to have these settings in your <filename>.ovpn file:

script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf

and have the package resolvconf installed:

apt install resolvconf

That way your DNS servers get exchanged in /etc/resolv.conf every time the tunnel gets established or disconnected.

This was only tested on debian.

proxmox: unable to open database
posted on 2016-02-12 00:29:25

problem

After a reboot a proxmox hypervisor did not come back up properly...

While this may sound like quite a dumb story, it can actually be fun if you like figuring out things. Except that you maybe don't. And the system where this broke was of course a production system where the customer is waiting for you to fix things.

Anyway: You do not really need to reinstall in case you read something like this:

Restarting pve cluster filesystem: pve-cluster[database] crit: found entry with duplicate name (inode = 0000000000000160, parent = 00000000000000F2, name = 'qemu-server')
[database] crit: DB load failed
[main] crit: memdb_open failed - unable to open database '/var/lib/pve-cluster/config.db'
[main] notice: exit proxmox configuration filesystem (-1)
 (warning).

Promox stores the configuration of /etc/pve in a sqlite database: /var/lib/pve-cluster/config.db.

reason

So when this happens, there is simply a duplicate entry, which you can fix with regular sqlite foo. Due to the duplicate entry proxmox cannot read the sqlite database, and so it will not know how to create the folders and files to populate /etc/pve, as the whole information about all these folders and files in this directory node is saved within sqlite.

See the example below; don't be fooled, the data column is just shown too small. Either use .mode line, or look up the current settings via .show and change .width of the column so you can read the file content, but never mind for now.

Usually you change the contents of /etc/pve while it is mounted, and the state gets written to the database, too. (I have no idea when exactly either, but nonetheless cannot be bothered to look this up in the proxmox sources.)

Without a clean database, it can't mount /etc/pve, and neither can you fix it by copying files. (...)

solution

Open the database and delete one entry if both are duplicates (but save the data in a text editor in case you need it later), or fix it somehow.
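
Before poking around in the database, it cannot hurt to stop the service and keep a copy (a sketch; the service is called pve-cluster here, which may differ between proxmox versions):

service pve-cluster stop
cp -a /var/lib/pve-cluster/config.db /root/config.db.bak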

It's easy:

sqlite3 /var/lib/pve-cluster/config.db

sqlite> .databases
seq  name             file                                                      
---  ---------------  ----------------------------------------------------------
0    main             /var/lib/pve-cluster/config.db                            
sqlite> .tables
tree
sqlite> 

You see, there's only a single database in there (to be honest, I do not know if sqlite can handle more) called main, with a single table called tree. There you can simply use UPDATE or DELETE statements from SQL.

sqlite> .headers on
sqlite> select * from tree;

inode       parent      version     writer      mtime       type        name         data      
----------  ----------  ----------  ----------  ----------  ----------  -----------  ----------
0           0           371         0           1455194525  8           __version__            
2           0           3           0           1410521743  8           user.cfg     user:root@
4           0           5           0           1410521743  8           datacenter.  keyboard: 
6           0           6           0           1410521891  4           priv                   
8           0           8           0           1410521891  4           nodes                  
9           8           9           0           1410521891  4           my_server              
10          9           10          0           1410521891  4           qemu-server            
11          9           11          0           1410521891  4           openvz                 
12          9           12          0           1410521891  4           priv                   
13          6           14          0           1410521892  8           authkey.key  -----BEGIN
15          0           16          0           1410521892  8           authkey.pub  -----BEGIN
17          0           18          0           1410521892  8           pve-www.key  -----BEGIN
19          9           20          0           1410521892  8           pve-ssl.key  -----BEGIN
21          6           22          0           1410521892  8           pve-root-ca  -----BEGIN
23          0           24          0           1410521892  8           pve-root-ca  -----BEGIN
25          6           232         0           1455173387  8           pve-root-ca  03
28          9           31          0           1410521892  8           pve-ssl.pem  -----BEGIN
39          0           41          0           1410521892  8           vzdump.cron  # cluster 
137         6           137         0           1410903436  4           lock                   
224         8           224         0           1455173387  4           my_server_0            
225         224         225         0           1455173387  4           qemu-server            
225         224         225         0           1455173387  4           qemu-server            
226         224         226         0           1455173387  4           openvz                 
227         224         227         0           1455173387  4           priv                   
228         224         229         0           1455173387  8           pve-ssl.key  -----BEGIN
230         224         233         0           1455173387  8           pve-ssl.pem  -----BEGIN
260         225         261         0           1455173556  8           101.conf     bootdisk: 
360         6           368         0           1455194525  8           authorized_  # This fil
369         6           371         0           1455194525  8           known_hosts  |1|Z2FUpc+

sqlite> .mode insert
sqlite> select * from tree;

... in the rather longish output now search for the corresponding double line:

INSERT INTO table VALUES(225,224,225,0,1455173387,4,'qemu-server',NULL);

Then delete the double line and reinsert it, so it only appears once.

sqlite> delete from tree where inode=225;

Now, to reinsert, replace table with tree:

sqlite> INSERT INTO tree VALUES(225,224,225,0,1455173387,4,'qemu-server',NULL);

Of course, the described approach only works if you have duplicate entries. If there is something else borked, you have to fix this, and not just delete an entry from the database.

But if you happen to know what you are doing, this should not pose a problem. After proxmox can read the database again, you can change its contents again by editing the mounted data at /etc/pve, which will be saved when you quit or so. (I don't know the exact time when this happens.)

Linux: show samba shares
posted on 2016-02-11 18:48:04

To easily show available samba shares, try this:

[ sjas@nb ~ ] 18:45:26 $ smbclient -L 10.0.0.100
Enter sjas's password:
Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.17-Debian]

    Sharename       Type      Comment
    ---------       ----      -------
    print$          Disk      Printer Drivers
    Folder1  Disk
    Folder2  Disk
    Folder3  Disk
    IPC$            IPC       IPC Service (Samba 4.1.17-Debian)
Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.17-Debian]

    Server               Comment
    ---------            -------
    VS10356              Samba 4.1.17-Debian

    Workgroup            Master
    ---------            -------
    WORKGROUP            samba-hostname

I used an arbitrary password, still worked.

SSL certificate check from shell
posted on 2016-02-05 12:53:33

This will show the complete certificate:

echo | openssl s_client -connect google.com:443 2>/dev/null | openssl x509 -noout -text

Exchange the -text flag with any other object present in the certificate to get different results (e.g. -subject or -dates).

Magento: find out version from shell
posted on 2016-02-03 13:37:57

From within the docroot of your installation:

find . -iname Mage.php | xargs grep "public static function getVersionInfo" -A10

This should do until the code gets changed, tested with the 1.9.0.1 release.

Linux: /proc overview
posted on 2016-02-02 05:32:55

This should do rather nicely:

for i in $(ls -l /proc/ | grep -e '^-' | awk '{print $9}' | grep -v -ekcore -ekallsyms -ekmsg -ekpagecount -ekpageflags); do echo $'\e[33;1m'$i$'\e[0m'; cat /proc/$i; done | less -R

All files from /proc/* are shown, except the memory maps (which won't help for browsing /proc anyway.).

Should you want to know which files/folders do exist besides the ones in /proc/*, try this:

for i in $(ls -l /proc | grep "^d" | awk '{print $9}' | grep -v -e "^[[:digit:]]"); do echo $'\e[33;1m'$i$'\e[0m'; find /proc/$i; done | less -R

It's nice to see improvement; after all, I dimly remember doing this in the past here, but this solution is way better and also less brittle than that.

Besides, man 5 proc might help a little, too.

AWK
posted on 2016-01-29 23:54:30

intro

awk is one hell of a beast. It's named after its inventors Aho, Weinberger and Kernighan, and there exist different implementations. This post gives a rough overview; there is more to it.

Either look up the official documentation for your installed implementation (i.e. mawk is just not gawk), or try heading over here, which is where I sometimes look things up when I happen to need them.

Pro's:

  • It's fast.
  • You have a programming language at your disposal in the shell.
  • One-liners are pretty quickly written.

Con's:

  • rather steep learning curve

When working with text fragments, it can truly speed things up. Since, despite a lot of good tutorials, a proper introduction was always missing for me, this is my shot at creating one myself.

These nice things happen to exist, or can at least be created.

  • variables
  • conditions
  • loops
  • associative arrays
  • functions
  • a profiler (Yes, the software comes with its own profiler built in, depending on the implementation you use)
  • pipes (You can pipe awk arrays directly to shell commands WITHIN awk, which is a nice feature.)
  • arithmetic operators

This list is likely not complete, as this post comes almost completely out of my head.

creating scripts vs. executing statements in the shell

Both is easily possible. The shebang for scripts:

#!/usr/bin/awk -f

Within scripts, statements (the stuff within the braces) are separated by newlines, whereas on the shell you need semicolons. You don't need semicolons outside the blocks.

structuring

Usually awk programs are based on the following structure:

#!/usr/bin/awk -f

BEGIN { ... }
BEGIN { ... }
BEGIN { ... }
CONDITION { ... }
CONDITION { ... }
CONDITION { ... }
CONDITION { ... }
END { ... }
END { ... }
END { ... }

Or on the shell:

awk 'BEGIN { ... } BEGIN { ... } BEGIN { ... } CONDITION { ... } CONDITION { ... } CONDITION { ... } CONDITION { ... } END { ... } END { ... } END { ... }'

The misleading part: you need neither the BEGIN blocks, nor the END blocks, nor the CONDITIONs in front of the middle blocks.

So the program could as well be looking just like this:

awk '{ ... }'

or

awk 'CONDITION'

So in very short:

  • awk processes input line by line usually.
  • BEGIN / END blocks are executed prior or after input processing.
  • The middle part is executed while traversing the input, depending if the condition evaluates to 'true'.
  • If no CONDITION is specified, the block is always executed.
  • several blocks can be used together, all are evaluated.
  • If a condition is true, the current line (called a RECORD, consisting of columns called FIELDS) is printed, even when no block was specified. The same happens when a plain variable assignment is used as the condition and the assigned value is truthy. (Sooner or later you will have to debug exactly this case.)

built-in variables

To repeat:

  • RECORD = a single row of your dataset
  • FIELD = a single data entry of a column from the current row

Which explains the variables a bit:

FS          field separator (delimiter for input data, usually ' ')
OFS         output field separator (delimiter for output data)

NF          number of fields = column count
NR          number of record = row count

RS          record separator (how input is delimited, usually '\n')
ORS         output record separator (how output is delimited)

FILENAME    name of the input file

There may be more, but these are mostly implementation-dependant and thus omitted.

user-defined variables

Unlike with bash variables, you omit the $ prepended to the variable names. Input data is usually just strings ("like this"), so you declare variables like this:

example_var_1=""; example_var_2=""

This can either be done in BEGIN or within the main code paths. See next paragraph for some examples.

arrays

All arrays are associative, which lets you emulate regular arrays, too. For these you simply create an index variable initialized with 0; while iterating, increase the running index, which serves as the key for your array values.

Associative arrays are rather easy, one of your fields is a key, the other the value which gets set.

An example for an emulated 'normal' array (idx is used because index is a built-in awk function and cannot serve as a variable name):

awk 'BEGIN {idx=0} {array[idx]=$1; idx++}'

An example for regular 'associative' usage:

awk '{array[$1]=$2}'
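
A small worked example tying arrays, blocks and END together; it counts which login shells are used how often in /etc/passwd:

awk -F: '{count[$7]++} END {for (shell in count) print count[shell], shell}' /etc/passwd | sort -rn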

CONDITIONS

These can either be plain expressions (comparisons, assignments and the like) or regexes (/ ... /).
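
For illustration, a few condition-only and condition-plus-block one-liners:

## regex condition: print all lines containing 'error'
awk '/error/'

## comparison condition: print all lines where the third column is larger than 100
awk '$3 > 100'

## condition plus block: print the first column of every second line
awk 'NR % 2 == 0 {print $1}'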

built-in functions

This is a quick overview, so you know these exist:

next     jump to next record
exit     quit the program, an exit code can be specified
getline  for when you need to control getting input
print    self-explanatory
printf   formatted printing like in C
++       increment
function keyword for when you explicitly need user-defined functions

one-liners

Some handy examples are provided here:

## print second column (counting starts with one, not zero)
## this is what you will use awk the most for, don't use `cut`
awk '{print $2}'

## print everything EXCEPT second column
awk '{$2=""; print}'

## remove empty lines
awk 'NF>0'
# or
awk '$1'

## add a header (printf is analogous to C language)
## the regular print statement could be used as well, of course
awk 'BEGIN {printf "%s %s %s\n","1stcol","2ndcol", "3rdcol"} {print $0}'

## print several columns, use several different delimiters
## this works with arbitrary runs of tabs, spaces or colons as delimiter
awk -F'[\t :]+' '{print $1 $2 $3}'
Linux and dynamic tracing rants
posted on 2016-01-27 21:46:34

This is current work in heavy progress.

tracing is just not overly accessible

When wanting to start with dynamic kernel tracing, usual problems are similar, no matter what technology you want to use:

  • "I don't know where to start."
  • "I don't know how XYZ is done in tracing tool ABC."
  • "I don't know what probes exist."
  • "I don't know what syscalls are existing."
  • "I installed the packages, but this doesn't work?"
  • "I need to copy-paste scripts to make this work?"
  • "Heck, I don't even know what syscalls are."
  • "What can I do with all this stuff?"

Usually the syntax ain't even too bad; it's the points above that hinder the further spread of these tools. There is a pattern to be found there, so this post should do the following:

  • Show what tracing is and in what shape the tooling landscape is currently.
  • Provide small examples which are usable to get a proper starting point.
  • Provide one-liners for getting overviews over the currently available tools for all probes and trace-points.
  • Provide one-liners to show how to catch syscalls which took place.
  • Provide detailed install instructions where necessary, but rather favour non-invasive tools. Some tools are completely integrated into the kernel and thus directly accessible, so the focus is on these.
  • Rather than running script files, statements can directly be run from the command-line when provided correctly.

The last two can be explained rather shortly:

  • Syscalls are the C functions which make up the API by which applications can access the kernel's functions. These are documented in the section (2) man pages, if you didn't know yet.

Here's a list, even though they may be called a little differently at times:

SYSCALL   WHAT IT DOES

read      read bytes from a file descriptor (file, socket)
write     write bytes to a file descriptor (file, socket)
open      open a file (returns a file descriptor)
close     close a file descriptor

fork      create a new process (current process is forked)
exec      execute a new program

connect   connect to a network host
accept    accept a network connection

stat      read file statistics
ioctl     set I/O properties, or other miscellaneous functions

mmap      map a file to the process memory address space

brk       extend the heap pointer

If a complete code audit is too heavy (all branches have to be checked, after all, and later you find out you overlooked something), dynamic tracing is for you. You can find out how many syscalls were run, or what values variables were set to, you can collect data and create graphs from it, ... Actually you can do more than you will ever need, so covering the most common use cases should do well enough.

intro to dynamic tracing

What exactly is this dynamic tracing thing? Let's start with some terms which I shamelessly rephrase from a lesser-known but very able russian guy named Sergey Klyaus and his github stuff here:

  • Looking solely at code = static code analysis, sadly this is error-prone and a damn lot of work. There's a reason not many people do kernel development.
  • Watching a system's behaviour at runtime is dynamic analysis, but there are different types of introspection.

There are several methodologies:

TODO

  • instrumentation
  • sampling
  • profiling
  • tracing

Sergey is truly awesome and knows his stuff. His ebook, though 'it may never be finished' as he said somewhere IIRC, is an outstanding piece of work and has already over 200 pages. The best part is that it is still freely available, and besides some little typos (English is not his mother tongue.) it is a damn good read.

A short overview of the available technologies follows. The examples are purposefully short for copy-pasting, so starting with this stuff is easier.

DTrace

After reading a lot of stuff lately from the man, the myth, the legend, @brendangregg, it looks like DTrace is plain awesome. But since adoption on linux may take forever (if it happens at all: the open DTrace4Linux port by Paul Fox seems to be pretty much a one-man show, and Oracle's DTrace is just a wrapper around SystemTap; sadly I have no link for where I read this), going with the alternatives seems the way to go on linux.

On FreeBSD it seems: 'Just use DTrace.'

On Linux the answer is not just as simple, thus this post might grow quite a bit over the following paragraphs.

usage

For the sake of completeness, here is a bunch of dtrace scripts:

# process plus its arguments
dtrace -n 'proc:::exec-success { trace(curpsinfo->pr_psargs); }'

# files opened by a process
dtrace -n 'syscall::open*:entry { printf("%s %s",execname,copyinstr(arg0)); }'

# syscall count of a program
dtrace -n 'syscall:::entry { @num[execname] = count(); }'

# syscall count by the system
dtrace -n 'syscall:::entry { @num[probefunc] = count(); }'

# syscall count of a process
dtrace -n 'syscall:::entry { @num[pid,execname] = count(); }'

# used memory of a progress
dtrace -n 'io:::start { printf("%d %s %d",pid,execname,args[0]->b_bcount); }'

# count of pages which were swapped by a process
dtrace -n 'vminfo:::pgpgin { @pg[execname] = sum(arg0); }'

eBPF

eBPF is under active development within the linux kernel, latest changes in version 4.4 you can read about here, but kernel developers call these things scary stuff.

Somewhere in a presentation Brendan compared DTrace to eBPF like a Kitty Hawk to a jet engine, which, besides it being 'in-kernel', should be the reason why it might well become the most important tracer on linux some day.

A little presentation on BPF can be found here.

SystemTap

Until Linux' extended Berkeley Packet Filter (eBPF) is real prime time material, stap should do well, Brendan thought, as could be seen here.

SystemTap has got two modes:

  • Awk/C like language, gets the job done
  • Embedded C mode aka "guru mode" in case you need it

install

Most distributions have prepackaged what you want. Well, at least Debian did, and maybe CentOS, too, IIRC. Afterwards run stap-prep, which should tell you what else you have to install. (Usually you need the debug headers for your kernel, to make systemtap work.)

usage

TODO place some useful oneliners here

# show processes opening files in realtime
# Brendan wrote in his 'Systems Performance' book: "I've never actually seen this work."
# I feel proud, it did for me. ;)
stap -ve 'probe syscall.open { printf ("%30s %-100s\n", execname(), user_string($filename)); }'

explore

# NICE FULL OVERVIEWS

# PROBE TYPES
stap --dump-probe-types | awk -F. 'BEGIN {current=""; print "\n\033[31;1mstap -ve \"global s; probe ... {...}\"\033[0m\n"} {if (current != $1) { current=$1; printf "\n\033[33;1m%s\033[0m\n",current } else {print $0}}' | less -R
# PROBE ALIASES
stap --dump-probe-aliases | awk -F. 'BEGIN {current=""; print "\n\033[31;1mstap -ve \"global s; probe ... {...}\"\033[0m\n"} {if (current != $1) { current=$1; printf "\n\033[33;1m%s\033[0m\n",current } else {print $0}}' | less -R
# PROBE FUNCTIONS
echo $'\n\e[31;1mstap --dump-functions\e[0m\n'; stap --dump-functions

# some other examples, for the sake of completeness
stap -l 'kernel.function("acpi_*")' | sort
stap -l 'module("ohci1394").function("*")' | sort
stap -L 'module("thinkpad_acpi").function("brightness*")' | sort

further stuff

A pretty new example on Heatmaps using stap can be found here and here.

Further you also can export histograms directly to console, which is a damn awesome feature.

perf

According to Brendan, they quite heavily use perf over at netflix. Interestingly, netflix runs no infrastructure of its own anymore, but relies completely on amazon's cloud services instead, as I learned somewhere last week. You really have to know how to measure your available performance when doing such stunts, so perf sure sounds like a good idea.

Most stuff which helped me with perf here in a nutshell:

usage

What syscalls are run the most?

perf top

Let's do some profiling. In short, create a baseline data-set of your system first, then start your application and collect a second set of data from your 'system under test' (SUT). Afterwards just compare both collected sets:

perf record -p <PID> -o baseline.data sleep 30
perf record -p <PID> -o SUT.data sleep 30
perf diff baseline.data SUT.data

TODO
perf report -n --stdio

If regular strace is too heavy on your system, give perf trace a try.
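
For illustration, a sketch (the PID is a placeholder):

# strace-like live view of the syscalls of a single process
perf trace -p <PID>

# or just a summary of syscall counts and latencies
perf trace -s -p <PID>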

This is all you need if you don't want to go down the rabbit hole. If you do, just proceed:

explore

# check what probes exist at all
perf test

# helps with exploring what is actually possible
## alphabetically, from Brendan
perf list | awk -F':' '/Tracepoint event/ { lib[$1]++ } END { for (i in lib) { printf " %-16s %d\n",i,lib[i] } }' | sort | column
## by count
perf list | awk -F':' '/Tracepoint event/ { lib[$1]++ } END { for (i in lib) { printf " %-16s %d\n",i,lib[i] } }' | sort -nk2 | tac | column
## A PROPER COLORED LIST OF ALL TRACE GROUPS PLUS ITS TRACE POINTS (UGLY AS CAN BE)
perf list | awk -F'[: \t]+' 'BEGIN {current=""} /Tracepoint event/ {if (current != $2) { current=$2; print $2, "\n\t", $3 } else {print "\t", $3}}' | sed -r ''s/^[[:graph:]]+/$(printf "\033[33;1m&\033[0m")/'' | less -R

## SIMILAR, BUT ONLY FOR SYSTEM CALLS (UGLY? HELL, THIS IS EVEN WORSE)
perf list | awk -F'[: \t]+' 'BEGIN {current=""} /Tracepoint event/ {if (current != $2) { current=$2; print $2, "\n\t", $3 } else {print "\t", $3}}' | grep -e syscalls -e sys_enter -e sys_exit | sed -r -e 's/^syscalls/& ( with prefixes: sys_enter_ \/ sys_exit_ )/' -e ''s/^[[:graph:]]+/$(printf "\033[33;1m&\033[0m")/'' -e 's/sys_enter_([[:graph:]])/\1/' -e 's/sys_exit_([[:graph:]])/\1/' | uniq | awk 'BEGIN { flag = 1; id = 0 } /with prefixes:/ { print $0; flag = 0; next; print $0 } { if (flag) {print $0} else {array[id]=$0; id++}} END { for (i in array){print array[i] | "sort" }}' | less -R
linux: systemcheck in 60 seconds
posted on 2016-01-21 00:03

This post is completely copied from @brendangregg, from here. Just shorter, and typed by me in the hope I can memorize it more easily that way, plus a little change: including htop.

Summary: check a linux system for problems immediately after ssh'ing onto it.

  1. htop - uptime, core diversity, load, swap on first sight via a TUI.
  2. uptime - for load checking, likely unnecessary after htop
  3. dmesg | tail - check for errors like out of memory
  4. vmstat 1 - check amount of processes (r) and kernel/userland distribution and swap
  5. mpstat -P ALL 1 - check for a single hot core
  6. pidstat 1 - check for high load on single process
  7. iostat -xz 1 - high r/w load? awaits? util%?
  8. free -m - memory available, likely unneeded after htop
  9. sar -n DEV 1 - rxkb/s or txkb/s is 125mbytes max for 1G NICs, util% ok?
  10. sar -n TCP,ETCP 1 - act = egress, pasv = ingress traffic, retransmits = bad, usually
  11. top - zxcV and 1 and < and > are your best friends, along with knowing status indices.

Sidenotes:

At 8., buffers = block device caching, cache = page cache for the file system.

At 11., just switch columns through the angle bracket keys and have a look at the waits (wa) to see if there are disk related issues, after having pressed 1 to show all available cores. d with a number after it changes the refresh time to x seconds. In general, everything concerning top can be found in the manual.

Lastly, a list of the process states from the mentioned top man page:

D = uninterruptible sleep <<-- waiting for disk
R = running
S = sleeping
T = traced or stopped
Z = zombie
nmap: show available ssl ciphers of a server
posted on 2016-01-04 19:39:00

command

nmap --script ssl-enum-ciphers -p <PORT> <URL>

example

Starting Nmap 6.47 ( http://nmap.org ) at 2016-01-04 15:37 CET
Nmap scan report for sjas.de (78.47.176.149)
Host is up (0.0047s latency).
rDNS record for 78.47.176.149: static.149.176.47.78.clients.your-server.de
PORT    STATE SERVICE
443/tcp open  https
| ssl-enum-ciphers: 
|   SSLv3: 
|     ciphers: 
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA - strong
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA - strong
|       TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA - strong
|       TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA - strong
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA - strong
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA - strong
|       TLS_RSA_WITH_AES_128_CBC_SHA - strong
|       TLS_RSA_WITH_AES_256_CBC_SHA - strong
|       TLS_RSA_WITH_CAMELLIA_128_CBC_SHA - strong
|       TLS_RSA_WITH_CAMELLIA_256_CBC_SHA - strong
|     compressors: 
|       NULL
|_  least strength: strong

Nmap done: 1 IP address (1 host up) scanned in 30.54 seconds
RBPI: cpu temp and memory reading
posted on 2016-01-02 17:38:40
# shows the temperature of the cpu
vcgencmd measure_temp

# shows the memory split between the cpu and gpu
vcgencmd get_mem arm && vcgencmd get_mem gpu
flashrom tutorial
posted on 2016-01-02 16:24:55

To directly dump contents of a NOR flash chip directly via the serial peripheral interface bus (SPI), a tool called flashrom will help.

disclaimer

If you read this and want to do what is described, you don't need a disclaimer to know you can kill your hardware through electrostatic discharge or whatever. Otherwise you should not be doing this anyway, unless you can afford grilling things and/or insist on learning things. This is likely the only disclaimer on this site for quite some time.

reasons

Why would you want to do that at all?

Flashing new content onto a flash chip usually takes place after the chip's contents (containing either the operating system or at least the bootloader or some part of it) were loaded into RAM. With that OS running, the flash content gets exchanged with a new image. So if the image is faulty, or the flashing process gets interrupted through power loss, you won't have a bootable system anymore. A simple live disk or bootable USB stick won't help much if the USB bus (or any other device holding a bootable operating system image) can't even be found anymore.

In other words, your computer (or if you do stuff with your smartphone, your device) is bricked.

Basically, it becomes a very expensive paper weight.

If you however use the SPI bus directly for ISP (in-system programming / in-situ programming), you do not have to worry about breaking things through faulty images, as long as you have a working one already. This enables you to test things without having to fear rendering your hardware unusable, which leaves room for trying out things that were impossible before.

Like fiddling directly with proprietary software which wants to prohibit you from booting a proper operating system on some hardware of your choice. I don't know when this hobby project will be finished, but I sure learned a lot about electronics within the last half year.

needed tools

  • raspberry pi (revision does not matter, but just get a B 2 in case you don't have one yet.)
  • sd flash card (this is where you will dd your OS image onto)
  • soic clip (google that, in case you want to work on NOR flash chip, so you don't have to solder wires onto the chip directly which is ugly)
  • female-to-female jumper cables (six ones minimum for working with SPI, maybe more)

In my case a debian installation was put onto an SD card of a raspberry pi (which is ARM based, as one might know), only to find out that the existing flashrom package exists for intel architecture based processors only.

Bummer.

compile and install

Ok, so lets install a proper environment and build stuff by hand then, as root user:

apt install build-essential
apt install libusb-dev
apt install pciutils-dev
apt install bzip2

wget http://download.flashrom.org/releases/flashrom-0.9.8.tar.bz2
tar xjvf flashrom-0.9.8.tar.bz2
cd flashrom-0.9.8

make -j4
make install

wiring

Google the chip you want to work on, and look after a description of its pins. (Chances are you already did this, which told you that you could use the SPI bus at all.) Put the SOIC clip on the chip.

Google a raspberry pinout table, and connect the SPI pins (MISO, MOSI, CE0, CLOCK, GND, 3.3V) accordingly.

Use short cables, long ones may cause connection problems.

usage example

All the following was done without a power supply being connected to the board, as the chip got the power from the raspberry's 3.3V Vcc pin.

As I had no prior knowledge on how to use flashrom ('i dont even know what im doing here'[TM]), this is what I tried:

# go to $HOME and create a temp folder
cd
mkdir flashromming
cd flashromming

# show help
flashrom -h

# try directly
flashrom

# try using the programmer which might work
flashrom --programmer linux_spi

# search for spi device
ls -alh /dev/spidev0.*

# use appropriate programmer, which then found my chip
flashrom --programmer linux_spi:dev=/dev/spidev0.0

# look up help to find out how to dump the flash content into a file
flashrom
flashrom -h

# actual dumping (-r = READ flash content into file)
flashrom --programmer linux_spi:dev=/dev/spidev0.0 -r nas-flash-original.bin

# always work on copies, not originals!!!
cp nas-flash-original.bin nas-flash-copy.bin

# have a look at the dumps contents
dd if=nas-flash-copy.bin | hexdump -vC | less

For starters, this worked. There is more:

## OTHER STUFF:
# flash new content onto chip (-w = WRITE file to chip)
flashrom --programmer linux_spi:dev=/dev/spidev0.0 -w newimage.bin

# erase chip contents (-E)
flashrom --programmer linux_spi:dev=/dev/spidev0.0 -E

# verify chip contents against file (-v)
# this is only needed when in doubt which file got flashed, verifying is done automatically after each flashing procedure
flashrom --programmer linux_spi:dev=/dev/spidev0.0 -v newimage.bin

issues

The motherboard which was used also had a serial interface (UART/RS232), which I used to have a look at the boot process and for console access. When the SOIC clip was connected to the chip, it just would not boot.

wget: download all linked files from URL
posted on 2016-01-01 21:57:55

This will also create a lot of index files...

wget -m -p -E -k -K -np http://website.tld/path/

... which can be removed like this afterwards, for example:

find . -iname '*index*' -exec rm {} \;

Don't delete files containing the string 'index' which you still need, check what you do before blindly copying commands from sites you don't know. :)

linux: show all cronjob files' contents
posted on 2015-12-31 11:50:22

Why didn't I think of that earlier???

for i in $(find /etc/cron*); do echo $'\e[33;1m'$i$'\e[0m'; cat $i; done | less -R

Or, if in doubt and you suspect evil doings happening:

for i in /var/spool/cron/* $(find /etc/cron*/); do echo $'\e[33;1m'$i$'\e[0m'; cat $i; done | less -R
openwrt snippets
posted on 2015-12-29 06:24:03

From here I just stole all these for further reference:

# generate 100% load
cat /dev/urandom | gzip > /dev/null

# cmdline arguments
cat /proc/<PID>/cmdline

# show available entropy
echo " Entropy:" $(cat /proc/sys/kernel/random/entropy_avail)/$(cat /proc/sys/kernel/random/poolsize)
LXDE: shortcuts
posted on 2015-12-15 15:32

To set shortcuts for a user in your LXDE environment:

Edit /home/<user>/.config/openbox/lxde-rc.xml:

<keyboard>

...

  <keybind key="W-r">
      <action name="Execute">
    <command>lxterminal</command>
      </action>
  </keybind>

...

</keyboard>

You simply need a keybind section within the keyboard section like the one above. The example above will open an lxterminal upon pressing win-r.

sudo: Restart tomcat with tomcat user
posted on 2015-12-11 08:14:40

Just put this into /etc/sudoers: (Thou shalt use visudo command!)

tomcat7 ALL=(ALL) NOPASSWD: /usr/bin/service tomcat7 restart

This of course assumes you have a user called tomcat7 which is responsible for running your tomcat installation. :)

linux: ipmitool
posted on 2015-12-04 20:17:00

This was tested on Debian 7.

install

apt install ipmitool -y
modprobe ipmi_si
modprobe ipmi_devintf

usage

For testing:

# locally
ipmitool -I open sdr elist all

# remote
# IPMI v1.5 (lan interface)
ipmitool -I lan -H <ip> -U <user> -P <PASSWORD> sdr elist all

# IPMI v2.0 (lanplus interface)
ipmitool -I lanplus -H <ip> -U <user> -P <PASSWORD> sdr elist all
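
Some other subcommands that tend to be useful; the same -I/-H/-U/-P parameters apply for remote use:

# power state and hard power control
ipmitool chassis power status
ipmitool chassis power cycle

# system event log
ipmitool sel list

# BMC network configuration
ipmitool lan print 1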

troubleshooting

  • check ipmi ip
  • check netmask for your ipmi network
  • check gateway
  • ping should work, too, instead of using ipmitool for a reachability check
Debian: NIC bonding config
posted on 2015-12-02 22:14:55

In addition to the bonding config, there is also a bridge setup, as this was for a proxmox setup.

The needed packages:

apt-get install ifenslave bridge-utils

ifenslave is for bonding, bridge-utils for bridging.

The actual config: (replace the 10.0.0.x IP Stuff)

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# external bond
auto bond0
iface bond0 inet manual
    bond_mode 802.3ad
    bond_xmit_hash_policy layer2+3
    bond_lacp_rate fast

    slaves eth0 eth2
    bond_miimon 100
    bond_downdelay 200
    bond_updelay 200


# crosslink / internal bond
auto bond1
iface bond1 inet static
    address 192.168.100.2/24
    network 192.168.100.0
    broadcast 192.168.100.255

    slaves eth1 eth3
    bond_mode balance-rr
    bond_miimon 100
    bond_downdelay 200
    bond_updelay 200


# bridge extern
auto vmbr0
iface vmbr0 inet static
    address 10.0.0.2/24
    network 10.0.0.0
    broadcast 10.0.0.255
    gateway 10.0.0.1
    dns-nameservers 8.8.8.8

    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
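
After an ifup or reboot, the state of the bonds and the bridge can be checked like this:

cat /proc/net/bonding/bond0
cat /proc/net/bonding/bond1
brctl show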
Linux: find folders with many files
posted on 2015-11-09 15:59:26

To easily (and FAST!) get an overview of the filecount of all folders in the current working directory:

for i in *; do echo -e "$(find $i | wc -l)\t$i"; done | sort -n
expect: update linux passwords through passwd from a list
posted on 2015-11-04 14:42:48

I know this is hacky, but I was in dire need:

IFS=$'\n'; for i in `cat pw`; do NAME=`echo $i | awk '{print $1}'`; PASS=`echo $i | awk '{print $2}'`; if getent passwd $NAME &>/dev/null; then expect -D0 -c "spawn passwd $NAME; expect \"Enter new UNIX password: \"; send \"$PASS\\r\"; expect \"Retype new UNIX password: \"; send \"$PASS\\r\";exit"; sleep 1; fi; done

Credentials were contained in a single .txt file: usernames and passwords, nothing else, separated by whitespace. The file was called pw and lay in the folder where the line above was run.

A watch -n1 -d 'cat /etc/shadow' is helpful to see what happens. Still check your passwords afterwards!

UPDATE:

Try chpasswd. I feel stupid for not googling enough, at least a little.
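
A minimal sketch with chpasswd, assuming the same pw file format (username and password per line, separated by whitespace):

awk '{print $1":"$2}' pw | chpasswd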

X2X: An alternative to synergy
posted on 2015-10-20 23:22:45

When for whatever reason synergy quits working, try x2x:

apt-get install x2x -y

(On the machine where you want to direct your keyboard/mouse output to.)

ssh -XC <user>@<host> x2x -west -to :0.0

If -X (for X forwarding) is not working, enable it via X11Forwarding yes in your /etc/ssh/sshd_config.

All cardinal directions work; this is enough to use it. For everything else refer to the manpage. There may be minor glitches, e.g. with monitors of different resolutions, but this is usually not a problem.

iptables: list installed modules
posted on 2015-10-18 23:47:45

I will get some proper output for that when I revisit that posting.

For now:

echo; echo Available Modules:; \ls -1 /usr/lib*/xtables | \grep -v -e '[A-Z]\+'; echo; echo Available Actions:; \ls -1 /usr/lib*/xtables | \grep -e '[A-Z]\+'
bash prompt deluxe
posted on 2015-10-12 00:31:04

For quite a long time I have had the same prompt on and off, like:

[user@host ~/folder]$ 

This one was already colored. However quite a while ago I read about Steve Losh and his ZSH prompt, where he also used to show git or mercurial repository information.

After quite a while (making the exit code colored depending on whether it is zero or not is harder than it seems...), this was also added. Without further ado (or any explanation of what the colors look like), here are some examples:

REGULAR, GIT, SVN:
0 [256] 1 [ sjas@nb.dyn.sjas.de ~] 00:06:39 $ cd repo/gitolite-admin/
0 [257] 2 [ sjas@nb.dyn.sjas.de ~/repo/gitolite-admin git:[master] ] 00:06:45 $ cd ../non-modal-swing-dialog-read-only/
0 [258] 3 [ sjas@nb.dyn.sjas.de ~/repo/non-modal-swing-dialog-read-only svn:[Rev 41] ] 00:06:50 $ 

ERROR CODE as first number:
0 [258] 3 [ sjas@nb.dyn.sjas.de ~/repo/non-modal-swing-dialog-read-only svn:[Rev 41] ] 00:07:05 $ asdf
bash: asdf: command not found
127 [259] 4 [ sjas@nb.dyn.sjas.de ~/repo/non-modal-swing-dialog-read-only svn:[Rev 41] ] 00:07:07 $

The second number is the history count altogether like in the history file, the third one the count of the current session. Everything is colored, and for me it is not too long due to the colors.

This goes into the ~/.bashrc:

promptfunction() {
    local EXIT="$?"
    local VCS=""
    PS1=""
    if git branch &>/dev/null
    then
        VCS=" git:$(git show-branch | awk '{print $1}') "
    else
        if svn info &>/dev/null
        then
            VCS=' svn:[Rev '"$(svn info | \grep -i revision | awk '{print $2}')"'] '
        fi
    fi
    PS1="\[\e[3$(if [ $EXIT = 0 ]; then echo '2'; else echo '1'; fi);1m\]\$?\[\e[0m\] [\!] \# \[\e[31;1m\][\[\e[37;1m\] \u\[\e[33;1m\]@\[\e[37;1m\]$(hostname -f) \[\e[32;1m\]\w\[\e[36;1m\]$VCS\[\e[0m\]\[\e[31;1m\]]\[\e[0m\] \[\e[33;1m\]\t\[\e[0m\] \[\e[36;1m\]\\$ \[\e[0m\]"
}
export PROMPT_COMMAND=promptfunction

I could have used variables for the colors, but by now I can read the raw escape codes just as well. If you want to know more about the coloring, google 'ansi escape codes'. :)

IP over serial connection / RS232 via SLIP
posted on 2015-10-10 03:31:04

As of 2015, this is very likely stuff which is not needed anymore. Still, for documentation purposes:

slattach /dev/ttyUSB0 -p slip -s 9600 -dL &
# interface 'sl0' just got created now
ifconfig sl0 <IP>

Repeat this on the other host, and you should be able to send ping over your serial connection.

openvswitch: installation for the impatient
posted on 2015-10-04 20:15:52

There is a lot of information out there concerning openvswitch, but a universal installer does not seem to exist.

For testing purposes, all this is done in a fresh virtualbox VM, with nothing else configured. Used virtualbox network type is NAT. Also these settings will not stick, unless you persist them in your network configuration afterwards. You have been warned.

install

Back to basics, openvswitch has a big download button.

cd ~/Downloads
mkdir ovs
mv openvswitch-2.4.0.tar.gz ovs
cd ovs
tar xzvf openvswitch-2.4.0.tar.gz
cd openvswitch-2.4.0
./configure
make -j4 # depends on the number of cores you have in your system
make install
rmmod bridge
modprobe openvswitch
modprobe brcompat

Afterwards everything will have been installed into the /usr/local hierarchy. Now make sure that /usr/local/bin and /usr/local/sbin are also part of your $PATH environment.
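
For the current shell session that could look like this (persist it in your shell profile if needed):

export PATH=/usr/local/sbin:/usr/local/bin:$PATH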

setup

Then:

ovsdb-tool create /usr/local/etc/openvswitch/conf.db vswitchd/vswitch.ovsschema
ovsdb-server -v --remote=punix:/usr/local/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --private-key=db:Open_vSwitch,SSL,private_key --certificate=db:Open_vSwitch,SSL,certificate --pidfile --detach --log-file
# ovs-01 will be our switch name, it's arbitrary and is the name of the network interface shown in linux
ovs-vsctl add-br ovs-01

Then you can add other interfaces to the switch. However, if you do things wrong, you might have no more network connectivity, so either first try this in a virtual machine, or have a notebook at hand so you can keep on googling.

configuration theory

First some notes on the IP's:

eth0 is our default interface, and it will usually have 10.0.2.15, which is the default ip for a single vbox VM. The hypervisor (the machine which runs your virtualbox) usually gets 10.0.2.2 for whatever reason, at least as seen from the virtual machine. You will not be able to see or ping this IP on the host itself.

Second on basic OVS switch usage:

Add all interfaces to your new OVS instance, whether they are virtual or physical. (It's all layer2, baby!) Then assign the switch the actual IP you'd usually have given your external NIC.

actual configuration

ip addr / ip link / ip route are abbreviated ip a / ip l / ip r for brevity. Also ovs-vsctl is better shortened to just ovs via alias ovs=ovs-vsctl, but that is a matter of taste. In the following I will use the complete command name, so no one gets confused more than needed.

Armed with that kind of knowledge, the configuration should work like this:

# take interface down (ssh tunnels will die!)
ip l s eth0 down
# clear ip from current interface
ip a d 10.0.2.15/24 dev eth0
# flush all routes
ip r f all

# add physical interface to the switch, it was created already above at 'setup'
ovs-vsctl add-port ovs-01 eth0

# add ip back to it and create default route with the host's gateway
ip a a 10.0.2.15/24 dev ovs-01
ip r a default via 10.0.2.2

testing

Now you should be able to ping google.com.

troubleshooting

In case the test fails, try these steps:

  1. ping 10.0.2.2 to see if you can reach the gateway. (Else your vbox network is somehow broken.)
  2. ping 8.8.8.8 to see if you have internet connectivity.
  3. ping google.com to see if your DNS works. Else try setting a dns server.

Use echo nameserver 8.8.8.8 >> /etc/resolv.conf for testing purposes.

persisting

If all that works and you want to make your changes persistent, put this information into your interface configuration:

Make your new interface ovs-01 get an ip via DHCP (instead of eth0) and set eth0 to manual. No need to fix the nameserver entry, as this should be handled automatically.
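
A minimal sketch for /etc/network/interfaces along those lines (assuming DHCP is wanted on ovs-01, untested here):

auto eth0
iface eth0 inet manual

auto ovs-01
iface ovs-01 inet dhcp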

linux: chroot and reinstall grub2
posted on 2015-10-02 01:53:04

First, while in the live disk (e.g. grml) you just booted, mount everything to a folder, which will be the chroot root, e.g. ~/asdf.

cd
mkdir asdf
mount /dev/sda1 asdf
cd asdf

After cd'ing into there you have to mount some special folders:

  • /proc
  • /sys
  • /dev
  • /dev/pts
  • /run

Like this:

mount  -t      proc   proc       ./proc
mount  -t      sysfs  sys        ./sys
mount  --bind         /dev       ./dev
mount  -t      devpts devpts     ./dev/pts
mount  --bind         /run       ./run

Possibly you need to mount /boot and /boot/efi, too, if your boot partition is separate and if you have a UEFI setup.

followed by:

chroot .

Should you use a grml live disk and it is complaining about a missing zsh shell:

chroot . /bin/bash

Then reinstall grub:

# grub2-install --recheck --no-floppy /dev/sda
# grub2-mkconfig -o /boot/grub2/grub.cfg
# mkinitrd
grub-install /dev/sda
update-grub

Exit the chroot and reboot.

LDAP: linux ldap test with ldapsearch
posted on 2015-10-01 07:37:23
ldapsearch -vvvv -LLL -H ldap://<domain-or-hostname>:<port> -b '<OU's-and-DC's-to-start-from>' -D '<domain>\<username>' -w '<password>'

-W instead of -w will prompt interactively for password. -y will read the password from a file.

-s limits the search scope (base, one, or sub).
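
A hypothetical query against an Active Directory, looking up a single account by appending an LDAP filter (domain, base DN and user are made up):

ldapsearch -vvvv -LLL -H ldap://dc01.example.com:389 -b 'dc=example,dc=com' -D 'EXAMPLE\jdoe' -W '(sAMAccountName=jdoe)'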

awk: show postfix mailq mail ID's for specific mail
posted on 2015-09-28 00:46:44

In short, replace <searchterm> with a regex for the address you want:

mailq | awk 'BEGIN { RS = "" } /<searchterm>/ {print $1} '
Linux: create bootable DOS on USB stick
posted on 2015-08-31 23:29:13

create

When needing a bootable DOS installation (which is something you should rarely, if ever, need), do you need windows to create it?

unetbootin will help.

  1. wipe the stick, and create a single partition with a FAT filesystem on it.
  2. install unetbootin
  3. install FreeDos by using it.

use

Once you put the stick into your computer and reboot, you should be provided with a menu where you can choose different DOS options.

When one is chosen, you should end up with a A:\> prompt and be ready to go.

If you need additional files, just throw them to the file root of the stick.

troubleshoot

If things do not work as planned, here are some hints as to why they might not work properly:

  1. Mainboard is set to UEFI or is UEFI-only: you cannot boot a BIOS stick then. Go to the firmware setup and enable legacy/CSM boot.
  2. Boot order is wrong: Either choose the boot menu option (if possible), or fix the boot order in BIOS options.
  3. No files? - Try using B: or C: to change the drive. For me, files were to be found at C:\>.
How to learn linux in three months - 1
posted on 2015-08-09 10:08:14

how to learn linux in three months

It just so happened that a colleague needed to get serious linux skills for a new job... fast. Timeline is like three months.

What to do about that?

I have used linux for years; which things were the most important during all this time?

This is the problem at hand, and there might be a series of posts as a rant and some kind of exploratory research on what I'd learn in what order with all my knowledge today, if I had to relearn things from scratch.

The underlying theme is to get the basics right and literally all the rest will follow.

Oh, and forget what university told you, this is practice, baby. University is worth jack, you know less than an apprentice at an ISP once you are finished, no matter what your master's average was.

a little more differentiation

What are essential skills in 2015? How about these as a braindump, importance-wise roughly sorted in that order:

  • linux and operating systems knowledge WRT booting the system up, process and user management
  • storage knowledge essentials, from hardware over partitioning to filesystems, even covering RAID's and (logical) volume management
  • ipv4 and networks, especially switching / routing, dns, vpn
  • server virtualization technologies

This is what I'd consider essential as of this moment, likely this is not perfect. :)

To learn things, knowing how to use a virtualization technology like virtual box is key, but from the start you will not know what you are doing anyway. Get a grip on the other things first, you will see it is way easier after the basics.

All in all this will give a solid base to start from, and give a proper perspective for working with 'this so-called internet'.

the ideas in more depth

linux, operating systems, running things

For real learning purposes and the best bang-for-buck, don't bother with windows.

Seriously. Why?

unix systems are everywhere

Windows may be a necessity as you will have to work with it, but in the long run you will get much more out of unix-based knowledge. It is used in appliances, switches, stable servers, 'real' clusters. There are things you just cannot build with microsoft stuff directly, from what my windows colleagues explained to me.

Read that? You. Simply. Cannot.

If you doubt that, build a real multinode cluster without paying your ass off for some crappy VMWare stuff where the licensing fees will eat your lunch, revenue-wise.

Using linux as a base to get this knowledge is the easiest way, just know there are the BSD's and other unices out there, too.

linux vs. ENTERPRISE competitors

Even if you have the money for VMware licenses, it's 'enterprise'.

"ENTERPRISE, FUCK YEAH."

This translates to 'build a product, get market share, earn $$$ for crappy software and use your vendor-lock-in to quench as much money out of your loyal customers as you can'. All the while using these customers as guinea pigs, as software development is expensive and rigorous testing cannot be covered by your budget.

Using open source software I get the absolute same results without paying shitloads of money. This is why redhat thrives on being successful with just providing support, even when CentOS serves the same functionality as RHEL.

appliances vs. self-built systems

Also, having an accessible operating system at hand for your hypervisor or 'appliance' ('appliance' is a swearword for me, TBH) instead of some crappy busybox is immensely useful. Don't let the naysayers tell you otherwise.

But uptime requirements become more harsh nowadays, and cluster solutions by most vendors are just not there. (I have bled a lot with a SOPHOS cluster in the past. Discussions with their support were absolute crap, too. But really all vendors are the same, some more, some less.)

Once you have an 'appliance' (which is just a firewall+IDS/IPS external and in front of your production systems), you are covered for the 80% of use cases. Which is nice, if you just need a single box for securing a small network.

Not wanting to run a custom-built OS on generic hardware, but buying a dedicated box?
Don't bother, it costs a lot of time, will not work reliably, is just expensive in every way.

I was going to write more here on some real-life examples, but this will be stuff for another posting.

knowledge can easily be adapted to windows

Once you have an idea how things work, this knowledge can easily be adapted to windows systems in my experience, whereas windows users often have a really hard time with the CLI. But often the CLI is all you have, and I dimly remember microsoft's hyper-v will soon have (if it does not already) an install mode where you only get a console and no GUI anymore. This, in combination with the advent of the windows powershell, just screams 'get comfortable on the command line'. Powershell is bloated, unintuitive and complex from my perspective, but at least you can automate things an order of magnitude better with it. Also, SSH is finding its way into windows.

One might see a pattern here. :)

there are many flavours of unixoid OS's out there

Out of neccessity, a word from the wise:

To get some perspective for the linux fanboys out there: even linux is not the pinnacle of everything. Lots of duplicated functionality and applications doing the same things, and you have to know like three applications where one good one would cut it. Open-source driven without a single real paying customer behind it, development-wise it is like PHP. There sure are companies investing money in kernel and userland development, but there are just too many directions at once. There is not a single concentrated effort behind everything, which spreads the development power thin.

Do you think it is just a coincidence Netflix and Whatsapp run on BSD systems?

But for things besides the standard use cases, BSDs are not the wisest choice either. If you need esoteric stuff, linux is up and running faster, has the better drivers (or at least has drivers at all) and thus runs on much more hardware more easily (NetBSD? whatever.). At the end of the day there simply exist way more userland tools, on which more work is done. It is not just about the core system, you BSD guys. Just compare linux's top with BSD's top, functionality-wise.

I know that this is just a question of manpower. But how about shifting the BSD focus from servers to becoming THE operating system for the internet of things? Gaming consoles are already more likely to be BSD-based, for example.

storage knowledge essentials

There are several layers, in short:

  • hardware

Starting with hardware or software raid setups, giving you redundancy. Over the different discs (HDD vs. SSD and the available types, block size stuff, and interfaces like SATA vs. SAS). To network technologies, but these are not of interest in a 'basic course', just know you can access storage via networks, too.

  • raids

Software vs. fake raids vs. hardware raids. What you usually use, what types exist with which tradeoffs. How much you can rely upon them really.

What happens when things break? What issues can arise, and how can these show up?

This is basic essential stuff. It's needed for redundancy, and HARDWARE DOES FAIL. Period.

You need it, and you need to know what you are doing.

  • partitioning

There are BIOS-based systems, but the transition is currently towards UEFI. The hardware brain of the computer is only indirectly linked to how you partition your disks, but there are reasons MBR and GPT exist. Also, the knowledge about how your system boots from the last section comes in handy here.

Also you have to know about the almighty LVM, which lets you do things you would not be able to do otherwise.

  • filesystems

How the actual files are stored on disk, which is just a stream of magnetic (or other techniques) information, what are the differences. How you can use these, what are advantages of one over the other.

ipv4 and networking

V(X)LANs, LACP, QoS, traffic shaping are rather less important concepts but you should have heard at least what they are.

IP protocol is everywhere.

From the datacenter at work to your router at home and in the 'internets of things' (tm) within the next 5 to 10 years.

IPv6 is not there yet

Sure, the IPv4 address pools are depleted, but things will keep going for a little while longer; NAT will help until there is larger adoption. IPv6 will not see huge adoption rates unless the carriers and telcos agree on a switch and consumer hardware sees nationwide rollouts. As long as the old consumer routers do not speak ipv6, there is no point in doing a grand scale switch. Adoption rates in every country of the world were usually way below 10 percent the last time I checked.

It is nice to know that ipv6 exists and getting a dual stack setup up and running is nice, but not something to learn if you have only three months.

NAT, PAT, subnetting, bridging, VLAN

Concepts are universal, and you can try them out easily at home. (Exception here are the VLAN's, you need a switch being able to do these as well as working NIC's.)

No need to go completely low-level, but you should know what the difference between network devices is and how a broadcast domain differs from a collision domain. No need to know what multiplexing really is; just know ethernet exists and that this is what switching is about, whereas routing is the "ip stuff".

Linux will help there, as the kernel can do a lot of things so you can play with networks.

DNS

In 30% of all cases when things break, it's DNS stuff. (At least that's how it feels to me; the guess may be off.)

It's easy, it's simple. And sometimes people running a web agency for over 10 years are too stupid to set up an A and a PTR record properly?

You have got to be fucking kidding me, but I am not making this up. It's easy, just no one ever bothers to tell new people how to do things right.

VPN

Virtual private networking.

Three words, endless hours of unfruitful troubleshooting and disconnectivity, if you are lacking your network basics. Still essential in everyday business work.

Yet it is just so simple if you have a rough idea of what you are doing with networks. For OpenVPN, sprinkle in a 'little' PKI / SSL/TLS certificate knowledge.

But cert knowledge is an absolute MUST in the long run, no matter what you do. You have to blindly know how to use openssl, how certificate files can and have to look, and how they are actually created.

server virtualization technologies

There exist several layers of virtualization, and there is no really good differentiation out there between some of them. But they exist and are important as they help you a lot with your work: they let you try things without having to reinstall servers completely, are just faster than playing with regular hardware and thus enable way faster feedback loops.

Using snapshots is just damn easy:
Need to try an update, have no test environment at hand?
If it goes wrong you are in deep shit?
Operating system virtualization has got you covered.

There also exists storage virtualization, like DRBD, which is essential for budget clustering without a dedicated shared storage. It's basically a RAID 1 setup over a network connection.

Of course you can get an EMC or dothill storage or whatever. But that is spending $$$ again, and often you do not need the extra performance through premium hardware (except for virtualization cluster environments) or just cannot afford it. These SATA 6G platters don't pay for themselves, and waiting weeks for a new harddisk due to delivery issues does not help your damaged RAID or your nerves. And when not using original hardware you may void your warranty, and are just as badly off as if you had built the box yourself in the first place.

summing everything above up

Hand in hand with virtualization go storages and storage technologies.
Be it local or network storage.
And non-local storages need network connections.
And network connections need to be setup on the operating system.

This is the full circle.

Once you need to get an understanding of clusters, simply build on all of the foundations above and be amazed how easily all this falls into place.

And be left wondering why others seem to have such a hard time with it, or cannot seem to know where to start fixing when things break down.

This was written in a single session, let's see how it holds up over the next three months.

LVM: shrink volume
posted on 2015-08-07 18:34:25

To shrink an LVM volume, there are several steps to follow:

  • the volume has to be unmounted
  • activate the LVM volumes, so linux can handle them
  • check that the filesystem is error free
  • shrink filesystem, a little more than needed
  • shrink LVM partition
  • expand filesystem to full LVM partition size
  • fsck again, if you are anxious :)

If the volume is mounted, you will not be able to filesystem-check it, or even shrink it. So you cannot simply shrink the root partition of your running system. For this you will need a live disk (google for 'grml linux') and boot from it to make the changes.

So here something to copy paste from:

vgchange -a y
e2fsck -f /dev/<volume_group>/<logical_volume>
resize2fs /dev/<volume_group>/<logical_volume> <size-in-GB-MINUS-1GB>G
lvreduce -L <size-in-GB>G /dev/<volume_group>/<logical_volume>
resize2fs /dev/<volume_group>/<logical_volume>
e2fsck -f /dev/<volume_group>/<logical_volume>

Voila.
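
A worked example with made-up names, shrinking the logical volume 'data' in volume group 'vg0' down to 20 GB:

vgchange -a y
e2fsck -f /dev/vg0/data
resize2fs /dev/vg0/data 19G    # shrink the filesystem a bit below the target size
lvreduce -L 20G /dev/vg0/data  # shrink the LV to the target size
resize2fs /dev/vg0/data        # grow the filesystem back to fill the whole LV
e2fsck -f /dev/vg0/data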

Usually you'd want to do this in order to create another volume / partition, but this is stuff for another blogpost.

iptables: sole config
posted on 2015-08-03 17:21:27

DISCLAIMER: This is almost a complete ripoff of this answer here.

Usually when ending an iptables rule with something like -j LOG --log-prefix "dropped:", this information will go straight to the general syslog file. This creates quite some clutter, depending on the rules your firewall has in place.

/etc/rsyslog.d/10-iptables:

if ( $msg contains 'IN=' and $msg contains 'OUT=' ) 
then { 
    /var/log/iptables.log
    stop
}

& ~ is deprecated in the new rsyslog, you should use stop instead.

/etc/logrotate.d/iptables:

/var/log/iptables.log
{
        rotate 30
        daily
        missingok
        notifempty
        delaycompress

        postrotate
                service rsyslog rotate > /dev/null
        endscript
}

Note: The config file prefix is set to 10- so it is processed before the default rules are reached (e.g. the file named 50-default).

MySQL: Check used storage engine
posted on 2015-07-13 15:32:01

Something to copy paste, in case you already have a .my.cnf for your root user with its password.

This only for tables you created:

less < <({ for i in $(mysql -e "show databases;" | cat | grep -v -e Database -e information_schema -e mysql -e performance_schema); do echo "--------------------$i--------------------";  mysql -e "use $i; show table status;"; done } | column -t)

This will show all tables, including the mysql ones:

less < <({ for i in $(mysql -e "show databases;" | cat | grep -v -e Database); do echo "--------------------$i--------------------";  mysql -e "use $i; show table status;"; done } | column -t)

To make it a little more readable, typing -S inside less toggles word wrapping off. Thus lines which are too long are simply cut.

In a little more detail (this cannot be copy-pasted in this form, as the line breaks are not escaped, sorry):

less < <(
            { 
                for i in $(mysql -e "show databases;" | 
                cat | 
                grep -v -e Database -e information_schema -e mysql -e performance_schema);
                do echo "--------------------$i--------------------"; 
                    mysql -e "use $i; show table status;";
                done 
            } | 
            column -t
        )

The cat piping is there so the output comes without borders: mysql only draws the ASCII table borders when it thinks it is writing to a terminal, so forcing the output through a pipe gives the plain tab-separated format. :)

SSH: tunnel and port-forwarding howto
posted on 2015-07-10 07:56:07

To create ssh tunnels there are a lot of explanations out there, and most are not worth much. Let's see if I can do better.

some facts against common misconceptions

one

A tunnel involves only two endpoints.

Ok, fair enough. But you need to specify a minimum of three host locations for a working tunnel.

Where two can point to the same machine, just from different views.

These are your local host (or at least its port), the gateway (the machine which will be the other tunnel endpoint) and the machine you are targeting; localhost, if the target/destination host is the same machine as the gateway host.

More on that later, if this does not make sense yet.

two

Another misconception which is often prevalent: "How do I get the server port so I can access it locally?"

Actually the direction may seem unnatural:
Things depend on the source host, where the request (of whichever protocol being used) will originate.

three

There exist directions, which is what the -L and -R flags are for.

four

The order in which the ssh arguments are specified can actually be changed. And reordered, it is quite a bit easier to grok.

tunnel 101

This is basic tunnelling knowledge, where SSH tunnels differ from SSL/IPSEC VPNs comments will indicate so.

Tunnelling connects non-routable networks with each other. (This is the case when one or both sites are behind a NAT.)

A tunnel is created between two endpoints, often called gateways. Encrypted pipes secure the traffic by encrypting packets between the endpoints.

On each side, other hosts can be reached. Depending on the tunnel type, you may or may not have access to the remote gateway. (SSH lets you access the remote gateway; with an IPSEC VPN (virtual private network) where application and endpoint run on the same box, you are in for some trouble. It works, but it is ugly to do so.)

You also have to specify the hosts behind the endpoint. This can happen via subnets, or you can specify single hosts. (With SSH we will specify only single hosts here, no networks. Further, only one side behind the tunnel has to be specified; the other side's host 'behind' the tunnel endpoint is always located on the same machine as the gateway in question. The tunnel, whether of the local or the remote port forwarding type, lets you specify the one host that is not located on a gateway. Don't worry, this will come later with a better explanation.)

On general VPN's:
If you did not specify the local and remote networks, how could the remote party possibly know to which IP the packets should be directed after they exit the tunnel? (For SSH, as already stated, only one host, either remote or local, which is not located on a gateway, can be specified. The other 'end' outside of the tunnel endpoint always lies on the gateway.)

ssh tunneling howtos

preface

A regular ssh tunnel is like the above mentioned tunnels, except that the gateways and the networks after the ends (/32 networks to be exact) reside on the same host (read: the gateway). This guide assumes that you already know how to do this, it's the basic ssh <hostname-or-ip> stuff.

chained tunnels

To connect to a remote host, but hopping over a few other hosts in the process, simply chain the tunnels:

ssh <host1> ssh <host2> ssh <host3>

Since you will want proper terminals, use the -t flag when doing so. And use -A if you need agent forwarding, when wanting to copy files between hosts directly.

ssh -t -A <host1> ssh -t -A <host2> ssh -A <host3>

This chaining stuff will also work for port forwardings described below, but you really have to watch your ports, so things fit together.

local tunnelling / port forwarding

-L will forward a port on your side of the tunnel to a host on the other one. That way you can reach over into the remote network.

The first use case here will be 'local' tunnelling with the -L flag. The port specified on the local side will be forwarded to the remote side. This will be done so the webinterface of a remote NAS behind a router with NAT is made externally accessible. NAS means Network Attached Storage, a small, low-power data server providing file-level storage.

For this to work, the router has to be configured such that it does port forwarding of requests on its port 12345 to the ssh host you want to connect to, by knowing its IP and the port on which the ssh server on this machine runs. (Usually on port 22.)

Usually you see specifications like this one:

ssh -L 1337:192.168.0.33:443 <user>@<domain-or-ip> -p12345

Easier to grasp should be this:

ssh <domain-or-ip> -l <user> -p 12345 -L localhost:1337:192.168.0.33:443

You ssh to the host at <domain-or-ip>, with the user specified by -l as <user> on port specified with -p which is 12345. The port only has to be specified if SSH is not running on standard port 22. This is the gateway part.

Then you pass the information from on the local and the remote host, connected via a :.

localhost is the bind address on which the forwarded port will listen locally, and 1337 is the port which will be used for accessing the webinterface. This is what you have to type into your browser (https://localhost:1337). If the forward were bound to a different address, you'd have to use that one here, but then I likely would not have to tell you that. :) localhost does not have to be specified; this is done just for illustration purposes.

What another bindaddress does, is allowing others to use the tunnel if GatewayPorts is enabled on the local SSH server. See man sshd_config for more info.

192.168.0.33:443 is the ip of the NAS system on the remote network behind the remote gateway and the port where the webserver is running on there.

remote tunneling / port forwarding

-R will forward a port from the remote side to your side of the tunnel. That way hosts from your network can be reached remotely.

Going along with the example above, from within the LAN where the NAS is located:

ssh <domain-or-ip> -l <user> -p 12345 -R localhost:1337:192.168.0.33:443

Here <domain-or-ip> -l <user> -p 12345 is again the gateway information for the remote machine. Depending on -L or -R the local or remote port (and bindaddress!) are specified.

localhost here talks about the bindaddress on the remote server. If it is explicitly set, ssh's GatewayPorts directive/option has to be enabled on the server's /etc/ssh/sshd_config.

192.168.0.33:443 is just the location of the NAS again.

tunnel chains with port forwardings

A local example:

ssh -t <host1> -L 1337:localhost:1337 ssh -t <host2> -L 1337:localhost:1337 ssh <host3> -L 1337:192.168.0.33:443

The local browser can reach the far far away NAS via https://localhost:1337, which is on the same network as <host3>. If the NAS were SSH accessible, the complete path could be encrypted. Since we can't (at least in my made-up example), we will hop from <host3> to it at its IP 192.168.0.33, and this is the only part of the connection that cannot be encrypted. (This is just provided for educational purposes, such complex setups are usually unlikely in sane reality.)

Use -t for all hops prior to the last one.

a tunnel in a tunnel - port forwarding for ssh to reach locally bound services

This is for services bound to the loopback / 127.0.0.1 interface, and which are thus only locally available:

ssh <host1> -L 1336:<host2>:22
ssh localhost -p 1336 -L 1337:localhost:3306

NAS is again a bad example here, as usually these boxes do not have ssh daemons installed/running.

What we did above was simply building a tunnel to the host we want to hop onto, and then creating the port forward by connecting to the locally existing SSH tunnel. This may be useful for remote connections to mysql instances that usually can just be reached locally.
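
To actually use such a forward, a hypothetical mysql client call through the second tunnel could look like this (use 127.0.0.1 rather than localhost, so the client talks TCP instead of using the local socket):

mysql -h 127.0.0.1 -P 1337 -u <user> -p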

Usually I have no use for this, but it might come in handy some day.

dynamic tunnelling

To create a SOCKS proxy via SSH:

ssh <domain-or-ip> -l <user> -p 12345 -D 192.168.0.2:1337

Here a specific bindaddress was used (192.168.0.2, which is our local ip within our LAN. Do you remember the GatewayPorts thing?). Any host connecting to our ssh tunnel running on port 1337 will be forwarded straight to the remote gateway.

The application has to know how to handle SOCKS connections, else this will not work.

To keep up with our NAS example, I'd do:

ssh <domain-or-ip> -l <user> -p 12345 -D 1337

Then set up my web browser to use a SOCKS proxy, with address localhost (since no bindaddress was given, unlike in the prior example) and port 1337.

Afterwards https://192.168.0.33:443 can be entered into the address bar and the NAS is reachable. Just keep in mind that other websites will not work.

PPP-over-SSH

When having to use software which is unaware of SOCKS proxies, the Point-to-Point Protocol (PPP) comes to help.

Also this is a poor man's VPN, when used to transfer all traffic through it and not just a sole host or network.

Since I have not had this put to use yet, I cannot write much about it.

So far:

  • Routing may be an issue, and thus reaching DNS servers, when it's just used to partially tunnel network connections.
  • When tunnelling everything, OSPF (open-shortest-path-first, a routing protocol) can be used to fix this, as I read, see the second link for more info.
  • Well, here are the links.

One link was on BSD, but I guess this helps with enlightenment. The shortest howto is the last one from the Arch wiki. Best may be the second one.

bash: check MTU
posted on 2015-06-29 17:30:20

To check which MTU works, here's a one-liner. It will have colored output:

for (( i=1520; i>1400; i=i-2 )); do if ping -c 1 -M do -s "$i" 8.8.8.8 &>/dev/null; then echo $'\e[32m'; else echo $'\e[31m'; fi; echo "$i ($(( $i + 28 )))"; done

Or easier to read:

for (( i=1520; i>1400; i=i-2 ))
do
    if ping -c 1 -M do -s "$i" 8.8.8.8 &>/dev/null
        then echo $'\e[32m'
        else echo $'\e[31m'
    fi
    echo "$i ($(( $i + 28 )))"
done
systemd: custom init script from scratch.
posted on 2015-06-29 09:35:19

This suffices to start a custom script as a system service in the background as a non-root-user:

[Unit]
Description=My service. Change This! :)
After=syslog.target network.target

[Service]
Type=simple
User=etherpad
ExecStart=<path to my application or shellscript, change me :)>

[Install]
WantedBy=multi-user.target

This is located at /etc/systemd/system/my-custom.service

Then, after a systemctl daemon-reload, systemctl restart my-custom will work. Which is actually way easier than in the past. Also it happened to work better, out of the box. \ o /
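
The whole sequence, sketched out for the unit above:

systemctl daemon-reload       # pick up the new unit file
systemctl enable my-custom    # start it at boot, too
systemctl start my-custom
systemctl status my-custom    # check that it is actually running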

Linux: uname
posted on 2015-06-21 21:23:39

To get a proper overview of the hardware architecture of the system used, uname helps.

[sjas@lynsjas ~]% uname -a
Linux lynsjas 2.6.32-504.16.2.el6.x86_64 #1 SMP Wed Apr 22 06:48:29 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

This is basically: (information type, uname flag to get just this info, example output)

kernel name (-s):               Linux
host name (-n):                 lynsjas
kernel release (-r):            2.6.32-504.16.2.el6.x86_64
kernel version (-v):            #1 SMP Wed Apr 22 06:48:29 UTC 2015
machine architecture (-m):      x86_64
processor architecure (-p):     x86_64
hardware architecture (-i):     x86_64
operating system (-o):          GNU/Linux
Linux: website migration guide
posted on 2015-06-19 19:53:32

Migrating a website can be a tedious task, if you have problems keeping several things at once inside your head. This aims to solve this problem by presenting some proper guidelines.

Here we have a standard dynamic website with a mysql backend, served through an apache httpd.

For other databases/webservers the steps may differ in particular, but essentially this is the same theory everytime.

Mailmigration will as of now not be a part of this here, since it's gonna be long enough anyway.

Read this completely prior, as alternative ways are suggested sometimes.

preparations

This part is almost the most important, actual copying is usually not that hard if you know what you are doing. It's often harder to remember everything.

Before we start, the server can serve data of three kinds which are handled all the same way.

  • web data: just copy the website code
  • database: copy the database dump file
  • emails: copy the mailfiles

The server is accessed via the globally available...:

dns

Basically these are the things you have to copy/adjust so things will go smooth.

open questions

Putting most of these questions plus the answers to them into a spreadsheet is not the worst idea. Maybe I will come up with a shell one-liner to create a .csv later.

Also it is helpful if you are able to do FXP (transfer files from one host directly to the other, without temporarily saving the data/files locally), if you do not have SSH access.

  • server access via ssh is possible?

  • ssh works via key? or password only?

  • root account? (a lot of this guide assumes root privileges; I might have missed points where there are no alternatives)

  • if not, do you have all necessary account credentials for all folders etc.?

  • DO THESE WORK?

  • if no ssh, do you have ftp credentials?

  • do the credentials actually work?

  • do you get a database dump you can transfer? (If you cannot access the server, you can't make a dump.)

  • are the folders accurately named?

  • how BIG is the webfolder? (so how long will copying take?)

  • which database management system is used? (i.e. mysql or postgres)

  • database credentials for it are?

  • what is the database the site is using actually called?

  • just how BIG is the database? (and so how long will copying take?)

  • what domains are pointing to the server?

  • are these actually active?

  • and can you change the DNS RR?

  • what are the DNS TTL times?

  • is mailing configured?

  • don't forget the DNS MX RR/RR's while at the last point

DNS: acquiring information on active resource records

For finding out about the dns, if you have several virtual hosts on the same machine, try grepping them all there.

When having an apache, grep all vhost files for ServerName and ServerAlias. Here's a kind-of snippet, which will work if your apache vhost configs are in default locations and indented:

\grep -e '^\s\+Server' /etc/apache2/sites-enabled/*

This shows only active sites, check sites-available if you have to migrate sites which are currently turned off, too.

The resulting list, if sanitized, can be piped on the shell and used with something like host/nslookup/echo + dig +short, to easily check which domains are still running. Check all the records, not just the A/AAAA (AAAA is ipv6, single A is ipv4) records, also MX and whatever is set. If the exit code is non-zero, there is no dns anymore and less work for you. Providing a script here would not add much, since you should know what you are doing here anyway.

and maybe prepare the webserver, too

In case the apache config is, let's say, 'adventurous', do apache2ctl -S (Debian/Ubuntu) or httpd -S to see which domains are hosted, and in which file these are defined. Then search there for ServerName/ServerAlias directives.

If the webserver happens to have all vhosts defined in one huge file (which is just... very not great), remove the configuration and place each vhost into a separate file. In Debian-based Linuces you can use a2ensite <vhost-config-filename> / a2dissite <vhost-config-filename> to enable/disable single websites easily. On RedHat-based ones you manually create the symlinks in the config folder apache is set to load, and delete them by hand, too. (This isn't any different from what a2ensite/a2dissite do.) All this only for the sites you want to migrate.

Of course, you can just comment out the information on your vhosts from the config, but just... don't.

For other webservers all this is different, of course, but you get the idea.

DNS: get the domains and the website together, information-wise

Refer to the website via its main link. (apache ServerName from above.) But make sure to note all other aliases there, too. (apache ServerAlias from above.) Since you can only migrate one site after another, this helps to keep track. Write all this down, each alias in another row. Maybe put the inactive ones into an extra column there, too. Could be that these should be prolonged again, or were incorrectly set. (I.e. it did not point to the webserver when you checked.)

Write the set TTL into the next column, along with the current date. (Usually TTL is 86400, which means 24 hours, which is exactly how long it will take until your change to 1800 seconds finally becomes active. If the TTL was longer than 86400 for whatever reason, note that in your list, too!)
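
To look up the TTL a domain currently serves (second column of the answer, in seconds), something like this works; example.com is just a placeholder:

dig +noall +answer example.com A
dig +noall +answer example.com MX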

DNS: lower TTL the day before the migration

After having created a list and checked which domains are currently active, set the default TTL time to 1800. (Just don't go below, 30 mins are short while you do the migration. Also the registrar might prefer you not to.)

DNS: plan b in case you have dozens of websites to migrate

If you have A LOT of websites that should go from one server to the next, try migrating and testing everything (via entries in the hosts file). Then switch the ip's of the servers with each other. That way no dns changes are needed (except if you have dead domains), because this shit can become tedious, too.

TBD / todo

Nothing more here now, until i am motivated again to write more stuff up.

Linux: speedy LXC introduction
posted on 2015-06-15 23:12:20

Since the official LXC manual is just bollocks, here is the quick and dirty version to get something up and running for people with not overly much time who wish for something that 'just works (TM)':

some notes first

Depending on the kernel you are using, you might have to create containers as root user, since unprivileged containers are a newer feature of LXC.

Also not all functionalities or flags are present, depending on your luck. Consult the manpage of the command in question to see if the things you are trying are available at all.

More often than not, the availability of programs / features / options is package-dependent, read:
It just depends on what version you get from your package management (if you don't build from source directly), and what is listed as available in the corresponding manual page.

install

Install the lxc package via your package management. lxctl might be nice, too, although it will not be discussed here, as at least my version still had quite some bugs. Where it definitely helps is with adjusting the configuration, which you then do not have to edit by hand.

Also these packages will help, do not bother if they are not all available for your distro, it still might work, even though your OS does not know or cannot find them:

lxc-devel
lxc-doc
lxc-extra
lxc-libs
lxc-templates
lxc-python3-lxc
debootstrap

check system

Use lxc-checkconfig. It easily tells you if you have trouble with running containers. Could be due to kernel version or missing userland tools.

have some throwaway space ready

This section can be skipped.

If you bother:
Easiest would be a spare hdd at your disposal, but a USB stick will do just nicely. Use LVM to prepare the disk, so the containers can be created with an existing volume group; the logical volume will be created during container creation.

Mountpoint would be /var/lib/lxc. The folder which will be used can be passed on the commandline, too, at lxc-create.

You do not have to do this, but it is kind of a security measure. When toying around with LVM, you will not break your desktop as easily; at worst, just the USB stick will be wiped.

usage

create / start a container

create / get templates

Containers can be created via lxc-create.
E.g. lxc-create -n <containername> -t <templatename>. The list of available templates can be found under /usr/share/lxc/templates, just omit the lxc- prefix:

\ls -Alh /usr/share/lxc/templates | awk '{print $9}' | cut -c5-

(Or wherever man lxc-create tells you to look, as described at the -t flag.)

If the containers shall not be saved at the default location, use the -P / --lxcpath parameter.

Creating a container off the download template prompts you with a list of operating systems from which you can choose. (lxc-create -n <containername> -t download is all you need to do.) If you do not have the template which you chose, it will be downloaded automatically. LXC will consult the internet on how to create the container, and it might take a little while initially.

When the next container is created from the same template, it goes MUCH faster.

Don't forget to note the root password at the console output after lxc-create is finished. Depending on the OS template, the root pw is sometimes 'root', sometimes a random one, sometimes you have to chroot into the container's file system (see file in the container folder) and set the pass by hand first. It 'depends'.

clone

Created containers can be duplicated with the lxc-clone command, i.e.:

lxc-clone <containername> <new_containername>

Look up lxc-clone --help, you can pass the backingstore to use (folder where containerfiles are saved) or the clone method (copy vs. snapshot).

start

Containers are started via lxc-start -n <containername>. That way you will get to the user login prompt.

Else start the container with the -d flag, meaning daemonized... in the background.

There also exists lxc-autostart... That is if you have to start several containers in a certain order.

lxc.start.auto = 0 (disabled) or 1 (enabled)
lxc.start.delay = 0 (delay in second to wait after starting the container)
lxc.start.order = 0 (priority of the container, higher value means starts earlier)
lxc.group = group1,group2,group3,… (groups the container is a member of)

It will also autostart 'autostart'-flagged containers at boot of the host OS, as far as I understood it.

list/watch available containers

lxc-ls will do. There are some options, but just use lxc-ls --fancy, if your version has this functionality. Otherwise you will have to stick to lxc-ls for all containers, and lxc-ls --active for the running ones.

Specific infos on a particular container can be obtained via lxc-info -n <containername>.

lxc-monitor will work like tail -f and tell the status of the container in question. (RUNNING / STOPPED)

connect to / disconnect from container

Connecting to daemonized containers will work via lxc-console -n <containername>

Exit via CTRL+a q. Be cautious: if you use screen, the escape shortcut will not work. Either close the terminal then, or shut down the container.

pause / unpause containers

lxc-freeze -n <containername>

and

lxc-unfreeze -n <containername>

will do.

stop / delete container

stop

Either turn off the linux inside (e.g. by issuing poweroff or shutdown -h now from within the container), or use lxc-stop -n <containername>.

destroy

Simply lxc-destroy -n <containername>.

snapshots!

Snapshotting containers does work, somehow. Usually you seem to need LVM for it. See lxc-snapshot for more info.

networking

This is a little hairy if you have never worked with bridges in linux before. You will almost certainly have to reconfigure your network settings by hand to let the container access the internet.

Sample settings:

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.name = eth0
lxc.network.hwaddr = 00:16:3e:xx:xx:xx

Either put these directly into the container config (but change the xx pair to HEX values), or, to have this set automatically for all containers, put it into the global lxc config (no HEX needed, will be replaced accordingly during container creation). (/etc/lxc/default.conf)

scripting

Container usage can be scripted, i.e. in python. This opens up quite a lot of possibilities for development/deployment/testing workflows. Things run fast due to fast startup times, in a clean environment, which will lower the bar to using proper testsetups quite a lot.

#!/usr/bin/python3

import lxc

c = lxc.Container("<containername>")
c.start()

config

The list of available config options is best looked up in the manpages directly:

man lxc.conf
man 5 lxc.conf
man 5 lxc.system.conf
man 5 lxc.container.conf
man 5 lxc-usernet
man lxc-user-nic

web GUI

See LXC-webpanel, if you're on ubuntu, that is. I haven't tested it, though. But the pictures for it on the internet look rather nice. :)

closing notes

Well, now you might have a running container, with or without network, depending on your host OS. If you put VLAN's to use, you will have no luck without further work. ;)

For more information, there's some nice documentation over at IBM.

Linux: install most recent kernel on CentOS 7
posted on 2015-06-15 21:25:04

Proceed at your own risk. You should have good reasons to use a server distribution with the most recent kernel in production.

To keep this sweet and short, do as root:

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-ml 

The downloading part might take a while.

Afterwards update grub:

grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/sda   # adjust to your boot disk

Good Luck. Regression errors may lurk out there, waiting for you.

sudo: sorry, you must have a tty to run sudo
posted on 2015-06-15 12:41:49

When trying to run sudo commands via ssh, the error mentioned above might occur.

Either try this.

Or go to /etc/sudoers and enter:

Defaults !requiretty
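
If you only want to relax this for a single account instead of globally, a per-user Defaults line works, too (the username is a made-up example):

Defaults:deploy !requiretty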
KDE: revert desktop to folder view
posted on 2015-06-13 14:09:32

To revert the current KDE Desktop to 'Folder View', where the contents of the users ~/Desktop folder are shown, the following steps will help:

  • On the Desktop (with no windows shown probably), click on the upper right Desktop button.
  • Click 'Default Desktop Settings' (This may differ if you changed this already in the past.)
  • In the 'view' "tab" change 'type' to 'Folder View'.
  • Apply and be done.

An outdated .gif which helped me find this can be seen here.

find: multiple wildcards
posted on 2015-06-10 11:51:51

When looking for all files in a folder and its subfolders, find . -iname '*.py' might for example give you all python files. But what if you want all the .pyc files, too?

Coupling several patterns within a single -iname will not work! (You could chain multiple -iname tests with -o, but that gets unwieldy.)

Use the -regex flag instead:

find . -regextype egrep -regex '.*\.py|.*\.pyc'

By default find uses the emacs regex syntax which is very likely counter intuitive. Besides emacs and egrep there are others available:

- findutils-default
- awk
- egrep
- ed
- emacs
- gnu-awk
- grep
- posix-awk
- posix-basic
- posix-egrep
- posix-extended
- posix-minimal-basic
- sed
tmux: write to all panes simultaneously
posted on 2015-06-04 14:34:15

I find myself working in tmux quite often with split panes, and wanting to work on all panes at once. An alternative that I have put to use in the past was cssh / clustershell, but this uses xterm and does not look pretty.

So simply put this into ~/.tmux.conf:

bind e setw synchronize-panes

Afterwards C-b e will toggle the function which lets you write to all panes simultaneously.

Linux Software RAID: revisited
posted on 2015-05-13 14:55:51

Having done a linux install based on a software raid and LVM some time ago, with the help of the debian installer, I found out the hard way that booting from it can be possible, but only if the first disk works. Maybe I did something wrong, but I wasn't able to fix the install or find the point where I erred; a reinstall by hand will be a nice learning experience, so here we go.

get a livedisk

To partition the disks manually, you need a livedisk. There are many of these out there, google for the one of your choosing. I ended up using the kali live disk from last time, but have had to manually install mdadm every time, as described in the last blog post. You need mdadm and the LVM tools for the following.

Usually you will get an .iso file, and dd will help you to put the ISO onto the stick. If for some reason a stick will not work, you might also try burning a CD and running it from there.
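
A sketch of the dd invocation; the ISO name is a placeholder and /dev/sdX stands for the stick's device. Double-check the target before running, since it gets overwritten:

dd if=<your-live-disk>.iso of=/dev/sdX bs=4M
sync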

boot from the live disk

Depending on your BIOS setup (UEFI booting will not be covered here), you might have to readjust the boot order, so your system will boot. After having a running OS, open a shell.

overview

TBD

Linux: Wake-On-LAN
posted on 2015-05-11 22:04:35

Getting a computer to start remotely, without having someone push the power button, can easily be achieved via the NIC's wake-on-lan feature. The only prerequisites are access to a computer within the same LAN, a WOL-capable computer and a proper setup.

NOTE: In some BIOSes or UEFIs the WOL / wake on lan feature has to be enabled explicitly.

First check if your NIC is able to do it, and which NIC you need.

Use ip a in a shell and look up your active NIC, the one with an IP other than 127.0.0.1. :) This should be the cabled ethernet connection, as, aside from newer Macs (Snow Leopard / OSX 10.6 and above), the trigger will not work via WIFI.

check for functionality

Then have a look at the capabilities and the current setting:

ethtool <NIC> | grep Wake

which may give you something like:

[root@jerrylee /home/jl]# ethtool eno1 | \grep Wake
        Supports Wake-on: pumbg
        Wake-on: g

If the line with Wake-on is set to d, WOL is disabled. From the manpage:

          p   Wake on PHY activity
          u   Wake on unicast messages
          m   Wake on multicast messages
          b   Wake on broadcast messages
          a   Wake on ARP
          g   Wake on MagicPacket™
          s   Enable SecureOn™ password for MagicPacket™
          d   Disable  (wake  on  nothing).  This option
              clears all previous options.

Here I have 'Wake on MagicPacket' already enabled.

enable it

ethtool --change <NIC> wol g

use it

At another host within your network, you only have to know the IP or MAC address of the machine in question, and have the wakeonlan package (debian via apt-get) or wol package (redhat derivates, via yum) installed.

Have a look at ip n, which is short for ip neigh, so you get the MAC:

root@pi:~# ip n
10.10.10.1 dev eth0 lladdr 34:31:c4:1b:1e:b7 REACHABLE
10.10.10.20 dev eth0 lladdr 70:71:bc:9d:bd:e1 STALE

You can also put a .txt file on the host, containing the MAC.

If I wanted to start the machine with the IP 10.10.10.20, I'd have to use:

wol 70:71:bc:9d:bd:e1

And the machine will boot.

This will also persist, even when using ifup / ifdown on the interface in question.
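With the debian wakeonlan package mentioned above, the call is almost the same; the broadcast address below is only an example for a 10.10.10.0/24 net:

wakeonlan 70:71:bc:9d:bd:e1

# or with an explicit broadcast address
wakeonlan -i 10.10.10.255 70:71:bc:9d:bd:e1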

overview

To see what can trigger a boot of your machine, see here:

cat /proc/acpi/wakeup
tmux primer
posted on 2015-05-11 21:19:05

Like an earlier post on screen, this is a primer on tmux to get up and running as fast as possible.

tmux 'feels' faster and, according to rumors, has cleaner code, so it does not crash as easily. Also the shortcuts, the manpage, everything felt more natural and easier to memorize. ctrl-b is also a better prefix than screen's ctrl-a, which is often needed for jumping to the beginning of the line in bash. And the pane borders are only a pixel wide, which is just great.

In short, on a server, use screen, tmux otherwise. Why? Most likely your peers will know screen already, but do not want to have to do anything with tmux. :)

Further, 'tmux' has sessions containing windows containing panes, whereas 'screen' only has sessions containing panes, as far as I remember.

In the following, every command that does not start with tmux is a hotkey, the former are shell commands. For hotkey commands you have to be within a running tmux session.
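To make the distinction clearer, a typical round trip might look like this; the session name is just an example:

tmux new -s work        # shell command: start a named session
# C-b %                 # hotkey: split the window into two panes
# C-b d                 # hotkey: detach from the session
tmux ls                 # shell command: list running sessions
tmux a -t work          # shell command: reattach later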

global hotkey

# needed for every command you will want to enter inside tmux
CTRL+b

general handling

help

## general help overview, bindings via tmux
?

## bindings via shell
tmux lsk

detach

d

suspend

CTRL-z

show tmux messages

~

session management

start a named session

tmux new -s <session-name>

kill session

tmux kill-session -t <session-name>

list available sessions

tmux ls

reattach named session

tmux a -t <session-name>

# if only one session is running
tmux a

choose session via menu

s

window management

open new window

c

close current window

&

choose window via menu

w

rename current window

,

search for text in all windows

f

moving around windows

# go to previous window
l

# jump to window by id
0, 1, 2, 3, 4, 5, 6, 7, 8, 9

# next/previous window
n / p

choose window via menu

w

pane management

split / open panes

# vertical
%

# horizontal
"

close current pane

x

break current pane out of current window (into new window)

!

moving around panes

just use the arrow keys, they will work, too

# next pane in current window
o

# rotate panes forwards / backwards (so next pane is put where the current was)
CTRL-o / M-o

# show pane id's
q

# jump to pane by number
q <number>

# go to previous pane
;

resizing panes

# by one character
CTRL-<arrow key>

# by five characters
M-<arrow key>

rearranging panes

# swap current pane with the next one
}

# swap current pane with the previous one
{

scrolling

pgup
pgdn
or use the mousewheel

misc

show time

t
Linux: iftop manual
posted on 2015-05-07 14:33:55

iftop is a nice tool for watching traffic in real time. Sadly, the default settings are not the most helpful.

So try these for a change, after starting iftop:

p  -  toggle port display
L  -  logarithmic traffic scale 
s  -  hide source host
N  -  port resolution off
t  -  toggle sent, received, sent+received, send and received display

which will give you something like this here: (sadly, the traffic bars are not shown)

         10b        100b        1,00kb     10,0kb      100kb      1,00Mb 10,0Mb
└──────────┴──────────┴───────────┴──────────┴───────────┴──────────┴───────────
 * :443                   <=>  * :53269                     0b   37,9kb  11,0kb
 * :80                    <=>  * :21400                  4,79kb  20,0kb  5,00kb
 * :80                    <=>  * :20141                  27,7kb  19,6kb  4,89kb
 * :80                    <=>  * :50604                  52,4kb  17,9kb  4,47kb
 * :80                    <=>  * :58073                  16,3kb  16,3kb  6,05kb
 * :22                    <=>  * :27883                  19,0kb  14,8kb  12,3kb
 * :80                    <=>  * :50086                     0b   14,8kb  3,69kb
 * :80                    <=>  * :52441                   480b   14,4kb  4,88kb
 * :80                    <=>  * :50581                  71,5kb  14,3kb  3,58kb
 * :80                    <=>  * :49450                  11,3kb  13,9kb  5,05kb
 * :80                    <=>  * :57972                     0b   13,8kb  3,44kb
 * :80                    <=>  * :37680                     0b   13,7kb  3,42kb
 * :80                    <=>  * :49312                  6,93kb  13,6kb  3,41kb
 * :80                    <=>  * :49723                  13,5kb  13,6kb  6,09kb
 * :80                    <=>  * :4442                   15,5kb  13,6kb  3,39kb
 * :80                    <=>  * :53240                  13,4kb  13,4kb  6,69kb
 * :443                   <=>  * :51954                  13,4kb  13,4kb  5,15kb

────────────────────────────────────────────────────────────────────────────────
TX:             cum:   28,0MB   peak:   3,18Mb  rates:   2,75Mb  2,86Mb  2,79Mb
RX:                    28,5MB           3,01Mb           2,83Mb  2,87Mb  2,84Mb
TOTAL:                 56,5MB           6,18Mb           5,58Mb  5,73Mb  5,63Mb

The bars represent the actual traffic taking place; the logarithmic scale on top helps with reading them.

To move down/up, use j/k.

The columns to the left are chosen via 1, 2, 3 and show traffic averages over 2s, 10s and 40s.

The bars can also be toggled, to reflect the 2s, 10s and 40s aggregation.
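Some of these toggles also exist as command line switches, so iftop can be started with saner settings right away; the interface name and flags below are just a sketch, check your version's manpage:

iftop -i eth0 -P -n -N
# -i: interface to listen on
# -P: show ports
# -n: do not resolve hostnames
# -N: do not resolve port numbers to service names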

Linux and VNC
posted on 2015-05-03 11:39:03

previous VNC problems

Linux and VNC was a pain point in the past for me, as a regular VNC (read vncserver) will give you headaches when trying to view the current display. You can open a second session, but you will not see the currently running Xsession.

Enter x11vnc

For the next steps, root privileges are assumed, and that you are in the same network as your VNC machine.

Install x11vnc package for your OS via its package manager and create a startup script like this one:

cat << EOF > ~/vnc && chmod a+x ~/vnc
#!/bin/bash
x11vnc -env FD_XDM=1 -auth guess -ncache 10 &
EOF

Now you can just ssh into the machine in question, run ./vnc and have a proper vnc server running. It will even work at the login screen of the display manager, even before a user is logged into the desktop environment of the target machine.

On your machine (not the VNC server), assuming you have a package installed providing vncviewer, do:

vncviewer <hostname-or-ip-of-server>:5900

If it connects in fullscreen, try pressing F8 to get a menu, so you can switch into windowed mode or exit the session when done.

This also will kill the running x11vnc instance, so if you want to connect with vnc again, ssh into the machine and rerun the ./vnc script above.

security considerations

This setup only suffices for an internal network, as no authentication measures are in place. You aren't even asked for a password when connecting.

Also, running the application as root should be considered bad practice.

To further secure your setup in case you need it, you might have to create an auth cookie:

xauth generate :0

and use it accordingly, as well as running it with a proper user and user rights.
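A minimal step up, assuming a stock x11vnc package: store a VNC password once and pass it on start, so at least a password prompt is in place (still no encryption, mind you):

x11vnc -storepasswd    # asks for a password and writes ~/.vnc/passwd
x11vnc -env FD_XDM=1 -auth guess -ncache 10 -rfbauth ~/.vnc/passwd &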

Linux: Which display manager do I run?
posted on 2015-05-03 11:09:32

To easily determine the display manager you are running, this should usually suffice:

ps auxf | awk '{print $11}' | \grep -e "^/.*dm$" -e "/.*slim$" 
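On systemd-based distributions there is often a more direct way, since the running display manager is usually reachable via the display-manager.service alias (assuming systemd is actually in use):

systemctl status display-manager

# or just resolve the symlink
readlink /etc/systemd/system/display-manager.service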
linux software raid, raid levels, LVM, btrfs and Kali Linux
posted on 2015-05-02 16:27:18

preface and setup layout

After installing a fresh system based on Kali Linux with software RAID and LVM, I had some fun. The setup consisted of four HDDs, with partitions for /boot, /, /var, swap and some others for testing purposes, mostly btrfs; /boot was on ext2. The first two hard disks were designated as the system RAID, the second two were to be the data RAID.

The harddisks were plugged into the SATA ports 1 to 4 in the right order (ports can always be identified via the prints on the motherboard), which was a good idea as we will see later. Out of habit I also took a photo of the partitioning scheme during the install when I was done, as this was a more complex setup. Both RAID levels were RAID1, nothing fancy.

Each of the RAID devices was in turn used as a LVM volume group, and each of the partitions mentioned above were a single logical volume. So /boot was a LVM partition on top of a software raid.

Well, I simply hoped this would boot after this setup was chosen. ;)

excursus on the RAID levels used

On a sidenote, I usually only put RAID1 (mirrored) and RAID10 (striped sets of mirrors) to use. RAID5 allows one disk in the array to fail, RAID6 two. With current disk sizes of two, three or even four terabytes, and six-terabyte drives already shipping, just think of the amount of time needed to rebuild a RAID5 holding 10TB, when two TB already take days to finish.

Considering most people do not mix harddisks but just take them one after another out of the box the mailman sent them, these are very likely quite similar. Same model, from the same production unit or time slot, with likely similar life expectancies. Rebuilds further take their toll on the hardware, as they impose an intense workload upon the disks. Besides, in a RAID 10 data is copied straight from one disk to its partner, whereas in a RAID5 ALL disks are read, plus parity has to be calculated. This fucks up the performance of the drives during rebuilds.

I do not feel good about a rebuild stressing the array over a span of weeks until it finishes, while the system sits on a degraded RAID5 the whole time, where one more missing disk means all is lost.

In a RAID5 the failure probability increases with each disk, as does the time to rebuild. RAID6 mitigates this somewhat, but just think of the time and work the rebuild takes. And if your data goes down the drain, think of what the customer will tell you when he is missing 20TB.

A RAID10 with two failed disks is already among my experiences; both went out in quick succession in that case, within like two days.

Lucky me, they were on different legs of the RAID0. So what did the situation feel like?

All data was backed up. The backups are actively tested and thus working most of the time. The storage capacity summed up to just six TB. And these were only 2TB drives, which were resynced within days, not within two weeks as it would have been with a RAID5/6 setup.

I still dread the memory, it was a Hypervisor for several customers. Brave new virtualized world.

You may ask, why no external storage? Getting a dothill or an EMC2 storage array simply costs several thousand euros, so why not use an already existing 8-bay server with local storage? RAID1 for the system leaves six disks for data, which with two TB drives sums up to 6TB of capacity, a nice use case for slightly aged hardware.

Besides, these setups can also be sold more easily, they are simply cost efficient. Plus you do not have two digit terabyte amounts of data to sync.

Here's a link from Jan 2014 to show the level of importance storage already had last year.

UPDATE:
Some time after this posting I found some additional info on this, from someone I have never heard of:

zless /usr/share/doc/mdadm/RAID5_versus_RAID10.txt.gz

(You may have to have mdadm installed, though I do not know for sure.)

But I digress, back to the story.

boot failed, ofc

Booting the system afterwards failed with errors, and the root partition was formatted as btrfs, too.

That the RAID status was not ok was a minor issue, as the RAID was just not synced yet.

But

fsck: fsck.btrfs: not found
fsck: error2 while executing fsck.btrfs for /root/rundev
fsck: died with exit status 8
failed (code 8).

was really a problem.

Sadly, the missing btrfs-tools package was the culprit. This could be found out by having a look at the available fsck tools, seeing that no btrfs variant is present, and googling the problem. Google also helps with finding the right package name we have to install.

IFIXEDIT!

get to know the storage geometry

Reboot with a live disk, and having written down / photographed my layout previously, I knew where to start.

Figure this out first, in case you have not kept track of your cabling and have no info on the partitions or software RAIDs:

root@kali:~# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 233.8G  0 disk
`-sda1   8:1    0 232.9G  0 part
sdb      8:16   0 233.8G  0 disk
`-sdb1   8:17   0 232.9G  0 part
sdc      8:32   0 298.1G  0 disk
`-sdc1   8:33   0   298G  0 part
sdh      8:112  0   3.8G  0 disk
|-sdh1   8:113  0   2.9G  0 part /lib/live/mount/medium
`-sdh2   8:114  0  61.9M  0 part /media/Kali Live
sdi      8:128  0 372.6G  0 disk
`-sdi1   8:129  0   298G  0 part
sr0     11:0    1  1024M  0 rom
loop0    7:0    0   2.6G  1 loop /lib/live/mount/rootfs/filesystem.squashfs
root@kali:~#

Knowing we have two software raids, sda1 and sdb1 seem to be related, as are sdc1 and sdi1. The actual device sizes don't help you much, as mixed hardware was used. Something you'll also encounter out there in the wild, standard procedure.

This may lead to interesting situations:

Like at 4am in the night, with you of course being on call:
You have already pulled out and set up new hardware, and then you realize the system just won't boot. You can either restore too many terabytes from backup, or just get the system back in order. That is your problem at hand: 'GO! I cannot tell you anything, I have no clue about the setup either...'

Fun times. ;) But back to the broken install.

Mounting a RAID drive directly won't work:

root@kali:~# mkdir asdf
root@kali:~# mount /dev/sda1 asdf
mount: unknown filesystem type 'linux_raid_member'

mdadm helps:

root@kali:~# mdadm -E /dev/sda1
bash: mdadm: command not found

When it is installed, that is. After all, this is the live stick I used to set up the installation, so it must be somewhere:

root@kali:~# find / -iname mdadm
/lib/live/mount/rootfs/filesystem.squashfs/usr/share/bash-completion/completions/mdadm
/lib/live/mount/medium/pool/main/m/mdadm
/usr/share/bash-completion/completions/mdadm
root@kali:~# /lib/live/mount/medium/pool/main/m/mdadm
bash: /lib/live/mount/medium/pool/main/m/mdadm: Is a directory
root@kali:~# ls -alh /lib/live/mount/medium/pool/main/m/mdadm
total 749k
dr-xr-xr-x 1 root root 2.0K Mar 12 18:26 .
dr-xr-xr-x 1 root root 2.0K Mar 12 18:26 ..
-r--r--r-- 1 root root 192K Mar 12 18:26 mdadm-udeb_3.2.5-5_i386.udeb
-r--r--r-- 1 root root 553K Mar 12 18:26 mdadm_3.2.5-5_i386.deb

Let's install the debian package:

root@kali:~# dpkg -i /lib/live/mount/medium/pool/main/m/mdadm/mdadm_3.2.5-5_i386.deb

Now back to the problem:

root@kali:~# mdadm -E /dev/sda1
/dev/sda1:
             Magic : a92b4efc
           Version : 1.2
       Feature Map : 0x0
        Array UUID : ab74df56:e0745791:d5cc011e:3792070a
              Name : vdr:0
     Creation Time : Sat May  2 15:32:29 2015
        Raid Level : raid1
      Raid Devices : 2

Available Dev Size : 488017920 (232.71 GiB 249.87 GB)
        Array Size : 244008768 (232.70 GiB 249.86 GB)
     Used Dev Size : 488017536 (232.70 GiB 249.86 GB)
       Data Offset : 262144 sectors
      Super offset : 8 sectors
             State : active
       Device UUID : 6f44a60f:d035d2d9:643a3f9c:a5bb21ef

       Update Time : Sat May  2 16:24:42 2015
          Checksum : 5a0897b6 - correct
            Events : 12


       Device role : Active device 0
       Array State : AA ('A' == active, '.' == missing)

This certainly looks better. For fun, you can look up the others if you know your layout; if you don't, you will have to anyway.

Try this, copy-paste is the easiest way:

mdadm -E /dev/sd?? | grep -i -e /dev/ -e name -e device\ role -e raid\ devices -e state

Gives me this nice overview:

mdadm: No md superblock detected on /dev/sdh1.
/dev/sda1: 
              Name : vdr:0
      Raid Devices : 2
       Device role : Active device 0
       Array State : AA ('A' == active, '.' == missing)
/dev/sdb1: 
              Name : vdr:0
      Raid Devices : 2
       Device role : Active device 1
       Array State : AA ('A' == active, '.' == missing)
/dev/sdc1: 
              Name : vdr:1
      Raid Devices : 2
       Device role : Active device 0
       Array State : AA ('A' == active, '.' == missing)
/dev/sdh1: 
/dev/sdi1: 
              Name : vdr:1
      Raid Devices : 2
       Device role : Active device 1
       Array State : AA ('A' == active, '.' == missing)

Name is the array name, by the way, followed by the number of the array.

get the raid up so you can work on it

-A will assemble the raid, -R makes it available as soon as it has enough drives to run, -S stops it again. You can only assemble fitting parts anyway:

root@kali:~# mdadm -A -R /dev/md0 /dev/sda1 /dev/sdi1
mdadm: superblock on /dev/sdi1 doesn't match others - assembly aborted

So:

root@kali:~# mdadm -A -R /dev/md0 /dev/sda1 /dev/sdb1
mdadm: /dev/md0 has been started with 2 drives.
root@kali:~# mdadm -A -R /dev/md1 /dev/sdc1 /dev/sdi1
mdadm: /dev/md1 has been started with 2 drives.

This is better. Now we have the raids back up:

root@kali:~# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 233.8G  0 disk
`-sda1   8:1    0 232.9G  0 part
  `-md0  9:0    0 232.7G  0 raid1
sdb      8:16   0 233.8G  0 disk
`-sdb1   8:17   0 232.9G  0 part
  `-md0  9:0    0 232.7G  0 raid1
sdc      8:32   0 298.1G  0 disk
`-sdc1   8:33   0   298G  0 part
  `-md1  9:1    0 232.7G  0 raid1
sdh      8:112  0   3.8G  0 disk
|-sdh1   8:113  0   2.9G  0 part /lib/live/mount/medium
`-sdh2   8:114  0  61.9M  0 part /media/Kali Live
sdi      8:128  0 372.6G  0 disk
`-sdi1   8:129  0   298G  0 part
  `-md1  9:1    0 232.7G  0 raid1
sr0     11:0    1  1024M  0 rom
loop0    7:0    0   2.6G  1 loop /lib/live/mount/rootfs/filesystem.squashfs
root@kali:~#

In my case I'd only need the md0 device, as I know that root is on there. But this is handled as if we knew nothing about it, for illustration purposes.
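As a sidenote, if the superblocks are intact, mdadm can usually piece the arrays together on its own; a sketch for the truly clueless situation:

mdadm --assemble --scan
cat /proc/mdstat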

Now have a look at what pandora's box has in store for you:

root@kali:~# mkdir asdf0
root@kali:~# mount /dev/md0 asdf0
mount: unknown filesystem type 'LVM2_member'

Oh well. Deja vu.

get LVM back up so you can work on it

Get an overview with pvscan, vgscan and lvscan:

root@kali:~# pvscan
  PV /dev/md1   VG vg_data     lvm2 [297.96 GiB / 111.70 GiB free]
  PV /dev/md0   VG vg_system   lvm2 [232.70 GiB / 34.81 GiB free]
  Total: 2 [530.66 GiB] / in use: 2 [530.66 GiB] / in no VG: 0 [0   ]

root@kali:~# lvscan
  inactive          '/dev/vg_data/lv_data_var_backup' [93.13 GiB] inherit
  inactive          '/dev/vg_data/lv_data_var_nfs' [93.13 GiB] inherit
  inactive          '/dev/vg_system/lv_system_boot' [476.00 MiB] inherit
  inactive          '/dev/vg_system/lv_system_root' [46.56 GiB] inherit
  inactive          '/dev/vg_system/lv_system_var' [74.50 GiB] inherit
  inactive          '/dev/vg_system/lv_system_var_test' [74.50 GiB] inherit
  inactive          '/dev/vg_system/lv_system_swap' [1.86 GiB] inherit

For more information there are also pvdisplay, vgdisplay and lvdisplay, which are like this:

root@kali:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg_data/lv_data_var_backup
  LV Name                lv_data_var_backup
  VG Name                vg_data
  LV UUID                f0C3o2-XUB1-5xkq-om5W-w0Kh-YwcX-752gkE
  LV Write Access        read/write
  LV Creation host, time vdr, 2015-05-02 15:38:48 +0000
  LV Status              NOT available
  LV Size                93.13 GiB
  Current LE             23841
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---

  ...

There are also pvs, vgs and lvs, providing rather short output, so you have even more options to choose from.

In our case life is easy, since I have the habit of naming the logical volumes like

lv_<volumegroup>_<folder>_<subfolder>

so there is less to keep in mind and mix up. Plus you know which LV is home to which mountpoint. As far as I know, no official naming conventions exist for this.

Above, all the logical volumes were marked as 'inactive', so we first have to activate them:

root@kali:~# vgchange -a y
    2 logical volume(s) in volume group "vg_data" now active
    5 logical volume(s) in volume group "vg_system" now active

root@kali:~# lvscan
  ACTIVE            '/dev/vg_data/lv_data_var_backup' [93.13 GiB] inherit
  ACTIVE            '/dev/vg_data/lv_data_var_nfs' [93.13 GiB] inherit
  ACTIVE            '/dev/vg_system/lv_system_boot' [476.00 MiB] inherit
  ACTIVE            '/dev/vg_system/lv_system_root' [46.56 GiB] inherit
  ACTIVE            '/dev/vg_system/lv_system_var' [74.50 GiB] inherit
  ACTIVE            '/dev/vg_system/lv_system_var_test' [74.50 GiB] inherit
  ACTIVE            '/dev/vg_system/lv_system_swap' [1.86 GiB] inherit

To deactivate them again, in case you need it: vgchange -a n.

These commands can also be used on single volume groups by passing the VG name as a parameter, as shown below.
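For example, to only activate the system volume group (VG name taken from the pvscan output above):

vgchange -a y vg_system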

mount and chroot into the installation to repair it

Now let's just mount the LVs needed to fix the install:

mkdir asdf-root
mount /dev/vg_system/lv_system_root asdf-root
chroot asdf-root

When trying to install the btrfs tools, another error occurs:

root@kali:/# apt-get install btrfs-tools -y
E: Could not open lock file /var/lib/dpkg/lock - open (2: No such file or directory)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?

Well, after a quick look at /var via ls -alh and seeing it was empty, it is clear we have to mount another LV.

Exit the chroot via exit, then let's mount the missing LV and chroot in again:

mount /dev/vg_system/lv_system_var asdf-root/var
chroot asdf-root

Now ls -alh /var shows us something, and we should be able to apt-get install btrfs-tools -y.

After an exit and a reboot, plus removing the boot stick, the system should work now.

If there are still problems, bind-mounting the system directories into the chroot might help. Since Kali is debian-based, see this:

mount -o bind /dev /mnt/rescue/dev
mount -o bind /dev/pts /mnt/rescue/dev/pts
mount -o bind /proc /mnt/rescue/proc
mount -o bind /run /mnt/rescue/run 
mount -o bind /sys /mnt/rescue/sys

another test

The reboot actually worked; the only problem was this, appearing after the GRUB welcome message on top and before the grub menu:

error: fd0 read error.

... several times. Grub still boots, so this is not really an issue, not yet at least.

Grub can natively boot off RAIDs, but I strongly suspect that if /dev/sda dies, the system will not boot, as it seems the bootloader is installed on just one partition.

grub and booting off software raid devices

Verifying this is easy: Turn off the computer and remove the SATA cable to the first hdd. Sure enough more things broke:

GRUB loading...
Welcome to GRUB!

error: out of partition.
Entering rescue mode...
grub rescue>

Awwww. To double check, let's power off the machine, plug the first disk back in and pull the plug on the second disk, and sure enough, it worked this time.

So fixing this is probably not as easy as the other stuff until now.
You cannot google much for it, or rather you can google a lot but will not get good hits; it's a messy problem.

A solution I found, here: (Beware, it's in German.)

grub-mkdevicemap -n
update-grub
grub-install /dev/sda
grub-install /dev/sdb

Maybe just installing grub to the second disk would have been enough after all, but sadly this doesn't cut it either now.

Looks like I should not trust debian-based installers any more than I trust redhat-based ones. (Which I absolutely don't, since any slightly more complex setup will fail in anaconda...)

final result: all is in vain

Now it seems the problem can only be fixed with a manual reinstall, as there are several caveats when running a bootable software RAID.

The RAID superblock, which contains all information on how the RAID is constructed and is written to each of the member disks, was created with metadata version 1.2, which places it near the head of the disk. This creates problems with grub.

When doing a manual install, mdadm even asks if metadata version 0.90 should be used for a bootable device. Oh well, fuck installers.
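For the upcoming manual setup, the relevant bit will look roughly like this; device names are just examples, and the 0.90 metadata is only needed for the array grub has to read from:

mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda1 /dev/sdb1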

There will be another post coming, where the partitioning will be done by hand.

Raspberry Pi: Seafile installation from scratch plus WebDAV access
posted on 2015-04-21 21:34:33

the use case

After dropbox cut a program I took part in and my free space went down from 25GB to 3GB, I had a reason to get an alternative up and running. iOS apps should work with it, too, as long as they can work via WebDAV.

There are a lot of comparisons of owncloud, pydio, seafile and all the alternatives, but I am somehow suspicious of pydio, and owncloud seems to have problems once you start having too many files, so I ended up testing seafile. The results were great, so a pi was bought and this guide is the result.

what will be covered

Here a rather detailed setup howto is given, to get a seafile install onto a brand new raspberry pi. It will contain side info on the networking stuff. This tends not to be covered in most other guides, since it is generally considered 'trivial'. Which just means everybody was too lazy to give hints where that stuff is NOT trivial and is actual work to explain properly.

Seafile will run with a mysql backend and an nginx webserver, so I can get some education on the latter myself, having worked almost only with apache until now.

However there are no guarantees, as once again, this is partly written just from memory.

prerequisites

You need these items:

  • a raspberry pi (get the latest model and be happy)
  • a case for the rasp, so it won't lie around in the open
  • a micro SD card with 4gb, with an SD card adapter
  • a cardreader (to write the pi's OS onto the micro SD with the SD adapter)
  • an AC adapter for the pi with a micro USB B connector
  • an ethernet cable (just a network cable with rj45 plugs)

And one of these:

  • a HDD/SSD with own power connection
  • or a usb-only HDD/SSD plus an USB hub with own power supply

If you try an external drive without an extra power supply, it won't run. The pi simply cannot provide enough power via its USB ports. You can see this through the red LED. If the voltage drops below 4.6V or something, it will flicker or just turn off.

DynDNS

Also some kind of DynDNS service would be helpful. Since there seemed to be a lot of trouble with the free ones, either spend some money or set up your own.

Since I was bored and already have a DNS server running and a domain, I chose to roll my own setup. That way you either run your whole domain via your DNS server (meaning your DNS is the primary server for your domain), or you can try 'subzone delegation', so your DNS server only runs a subdomain whereas the domain hoster runs your 'main' domain.

On how to do this, get some other tutorials, I covered it here.

setup the system

get the OS

Install the hardware, which should not pose a problem. If it does, seriously get someone with more knowledge on computers to help you!

Get the Raspbian image, which is a debian-based OS for the pi. Download and unzip it.

get the OS onto the SD card

Open one console and enter watch -d -n1 lsblk, then insert the SD card. That way you will know what the device is called on your linux box.

Open another shell window, and then put the raspbian image onto the card:

dd if=/path/to/the/<raspbian-file.iso> of=/dev/sdX

Of course, fix path and device in the line above.

If you want to know how fast the copy process runs, try:

ps aux | grep dd

And search for the process id of the dd process from above. Then do in another shell window:

watch -n5 kill -usr1 <dd-process-id>

That way the copy process stats will be shown in the dd window every five seconds, which is nice since the 3GB image takes some time to copy. But pardon, I digress.
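One more digression: newer GNU dd versions (coreutils 8.24 and later) can report progress on their own, which saves the whole kill -USR1 dance:

dd if=/path/to/the/<raspbian-file.iso> of=/dev/sdX bs=4M status=progress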

fix ssh and networking

Once the copying is finished, mount the card and fix ssh. That way you will not need to hook up the pi to a keyboard and monitor (I have no HDMI capable screen here, so ...) but just connect the ethernet cable and be done.

So:

mkdir asdf
sudo mount /dev/sdX asdf
cd asdf
vim etc/network/interfaces

then either hand the eth0 interface a DHCP configuration (which is stupid), or just give it a fixed IP.

If your home network is the 192.168.0.0/24 net, try configuring this; if your network is 192.168.178.0/24 or 10.0.0.0/24, adjust the IPs in the following examples.

allow-hotplug eth0
iface eth0 inet static
    address 192.168.0.254
    netmask 255.255.255.0
    gateway 192.168.0.1
    network 192.168.0.0
    broadcast 192.168.0.255
    dns-nameservers 192.168.0.1

If you cannot use vim, try the damned nano or whatever editor you fancy.

Also these might be a good idea:

echo pi > etc/hostname

and

vim etc/resolv.conf

where you'd enter this line:

nameserver 192.168.0.1

Save and quit.

vim etc/ssh/sshd_config

and make sure there is this:

PermitRootLogin yes

If it were PermitRootLogin without-password, you'd not be able to connect. Save, close.

cd 
sudo umount /dev/sdX

And you can pull the SD card out and put it into the pi.

Hook up your pi to the network via network cable to your home router.

Try ping 192.168.0.254 and see if something answers after the pi has booted (which should take no longer than a few minutes, I never measured the time). If it doesn't work, you either have network issues, or misconfigured something above.

If it answers your ping, get a host entry so connecting to the pi is easier: (/etc/hosts entries are basically local DNS records, this will do you no harm.)

echo '192.168.0.254 pi' >> /etc/hosts

and copy your ssh key onto it, so passwordless login will work:

ssh-copy-id root@pi

You could also use ssh-copy-id 192.168.0.254 - it works the same, but in the rest of this text the pi's IP will be referenced by the local DNS name 'pi'. Period. Try it:

ssh root@pi

And you should be connected.

disk preparation

For the following it is assumed that you have already plugged in the USB hub/HDD, partitioned it and created a filesystem. Choose your tools and filesystem, do it, and do not forget to mount the disk afterwards.

To get the disk to be permanently included (across reboots), add it to /etc/fstab.

My entry looks like this:

/dev/sdb1       /var/seafile    btrfs   defaults          0       2

(The hard disk is /dev/sdb with a single partition; the filesystem is btrfs, or whatever you would have to specify for mount -t when mounting by hand; default mount -o options; no dump; not the root filesystem. In case you are unsure which device your hard disk is, try lsblk, the size should tell you which one to use. The last number is about the filesystem check: 0=off, 1=first, 2=afterwards; root is set to 1.)

The directory /var/seafile was created by me via mkdir for later usage, so I have a working mountpoint.

To reload /etc/fstab, a mount -a will do. All this was done as the root user.

Sidenote:
lsblk will not show the mountpoints for btrfs volumes, so you have to use mount to check if everything looks as expected.
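Since USB device names can shuffle around between reboots, mounting by UUID is a bit more robust; the UUID below is obviously just a placeholder, take the real one from blkid:

blkid /dev/sdb1

# then reference it in /etc/fstab instead of the device name
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /var/seafile  btrfs  defaults  0  2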

actual seafile install

The install will be done with the MySQL backend, as the installer warns about problems when using a USB disk (which we do) together with SQLite.

get the install files

Head over to the official download section to get the newest install files. Choose the raspberry package there. Intel stuff, whether 32 or 64 bit, will not work, since the raspberry has an ARM processor. See the output of uname -m if in doubt.

prepare the system

Copy the link location for wget'ing it later. Let's also create a dedicated user, as it is better, security-wise, to run the program without root rights.

apt-get install python2.7 python-setuptools python-imaging mysql-server python-mysqldb -y

Remember the mysql root password, you will need it later on.

mkdir /opt/seafile
useradd seafile
chown seafile.seafile /opt/seafile

Also chown the seafile data folder to the seafile user, else the installer will have trouble:

chown seafile.seafile /var/seafile
chmod 775 /var/seafile

installing

su - seafile
wget https://github.com/haiwen/seafile-rpi/releases/download/v4.1.2/seafile-server_4.1.2_pi.tar.gz
tar xzvf seafile-server_4.1.2_pi.tar.gz
mkdir installed
mv seafile-server_4.1.2_pi.tar.gz installed/
cd seafile-server-4.1.2/

seafile setup

./setup-seafile-mysql.sh

Enter information:

NAME: is just a label
IP / DOMAIN: enter the pi's ip if you use seafile only on LAN / via VPN, or the dynamic dns
CCNET PORT: default on 10001, since there isn't anything running besides
PATH: /var/seafile/seafile-data, since /opt/seafile is on the SD card, and the data should go to the harddisk's mountpoint
SERVER PORTS: defaults, respectively 12001, and 8082
DATABASE INIT: 1, create new tables
MYSQL HOST: default, localhost
MYSQL PORT: default, 3306
MYSQL ROOT PASSWORD: the one you gave to mysql during install
MYSQL USER: seafile
MYSQL SEAFILE PASSWORD: use a new one
DATABASE NAMES: all default

After all this, the configuration is almost done. Basically the server could be started and run now.

Since we want to use the services like any others, we will just link the scripts into /etc/init.d/. The following is again done as the root user:

ln /opt/seafile/seafile-server-latest/seafile.sh /etc/init.d/seafile
ln /opt/seafile/seafile-server-latest/seahub.sh /etc/init.d/seahub

I used hard links on purpose, but I cannot remember why I thought this was a good idea.

Anyway, now you can do just:

service seafile start
service seahub start

and you will be asked to set up a seafile admin account, this time the one for the web interface, not the DB user.

Just use your email and yet another password, and remember them. You have to use this account for creating new user accounts, libraries, in short everything.

If your webinterface does not work, you might have to (re)start both services, in case you forgot one.

test seahub

Now going into your browser and entering the raspberry's local IP plus port 8000 should get you to the login screen. pi:8000 in the address bar will do btw, if you set up the above-mentioned /etc/hosts entry.

With the web login you just created, you should be able to login. :)

Looks promising so far.

open your firewall for external access

When using the service externally, without a VPN connection, don't forget to open port 8000 in your firewall/router. For testing the webgui, this is enough. To actually use seafile, these must be reachable: (copy-paste from the seafile install message)

port of ccnet server:         10001
port of seafile server:       12001
port of seafile fileserver:   8082
port of seahub:               8000

Later we will also open 8080 so WebDAV can be used, and 80 and 443, which nginx will be listening on.

Of course, the services can be run on arbitrary ports. You might as well leave everything on default on the pi, but use different external ports in the router forwarding to the actual ones on the pi, for security reasons. If you know what you are doing and are bored reading this, oh well, just change it to your liking. Everybody else should use the default ports, it makes the setup easier to debug.

security considerations

When using the service from the outside without a VPN or SSH tunnel, your traffic is plaintext. 'Thou shalt use thee TLS encryption!' in that case, but for that you will have to use a proper web server instead of the built-in one that comes with seafile.

Read: apache or nginx.

WebDAV

Since this is not everything that is needed yet, the WebDAV plugin has to be enabled.

configure webdav

Without a dedicated webserver this is rather easy.

In /opt/seafile/conf/seafdav.conf, set:

enabled = true

Save and close, plus afterwards:

service seafile restart

test webdav

Besides using WebDAV from the iOS app in question, you can just as well use the linux command line to test, via the davfs2 package. Install it on your home computer (if you run linux there, else have a look at the official manual). As root do the following:

In /etc/davfs2/davfs2.conf set:

use_locks 0

Save, close.

mkdir /mnt/davtest
mount -t davfs -o uid=<your linux system user> http://pi:8080 /mnt/davtest

Then you will be prompted for user credentials; just use the web UI login from above, the one you entered when you first started the seahub service.

The /mnt/davtest should now contain some more stuff, meaning WebDAV access works.

If in doubt, create a file in that folder, e.g. via touch testfile, which you can then see in the web interface.

Now I have to repeat, using this externally means unencrypted data over the wire. Set up a proper webserver with TLS and configure WebDAV there, if you plan on using this setup from the outside of your home LAN without a VPN. That way you can also use proper fastcgi. ;)

a proper webserver - nginx with TLS

Since I need some nginx practice, this one will be used here.

First lets get the certificates up and running:

mkdir /etc/ssl/nginx
cd /etc/ssl/nginx
openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3650 -key ca-key.pem -out ca-cert.pem
openssl req -newkey rsa:2048 -days 3650 -nodes -keyout server-key.pem -out server-req.pem
openssl rsa -in server-key.pem -out server-key.pem
openssl x509 -req -in server-req.pem -days 3650 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem

When prompted to enter something, do as you like. You could also hit just ENTER all the time until it's finished.

Then let's fix the domains the server is bound to:

In /opt/seafile/ccnet/ccnet.conf:

SERVICE_URL = https://www.yourdomain.com

In /opt/seafile/seahub_settings.py:

FILE_SERVER_ROOT = 'https://www.yourdomain.com/seafhttp'

Now on to the nginx config; you just have to change your domain below. Open /etc/nginx/sites-available/yourdomain.com and adjust it accordingly, to have http redirected to https and a working https setup:

server {
        listen       80;
        server_name  yourdomain.com;

        # force redirect http to https
        rewrite ^ https://$http_host$request_uri? permanent;
}

server {
        listen   443;
        server_name yourdomain.com;

        ssl on;
        ssl_certificate /etc/ssl/nginx/server-cert.pem;
        ssl_certificate_key /etc/ssl/nginx/server-key.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_session_timeout 5m;
        ssl_prefer_server_ciphers on;

        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_connect_timeout       300;
        proxy_send_timeout          300;
        proxy_read_timeout          300;
        send_timeout                300;

        location / {
                fastcgi_pass   127.0.0.1:8000;
                fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
                fastcgi_read_timeout 300;

                fastcgi_param  HTTPS            on;
                fastcgi_param  HTTP_SCHEME      https;
                fastcgi_param  PATH_INFO        $fastcgi_script_name;
                fastcgi_param  SERVER_PROTOCOL  $server_protocol;
                fastcgi_param  QUERY_STRING     $query_string;
                fastcgi_param  REQUEST_METHOD   $request_method;
                fastcgi_param  CONTENT_TYPE     $content_type;
                fastcgi_param  CONTENT_LENGTH   $content_length;
                fastcgi_param  SERVER_ADDR      $server_addr;
                fastcgi_param  SERVER_PORT      $server_port;
                fastcgi_param  SERVER_NAME      $server_name;
                fastcgi_param  REMOTE_ADDR      $remote_addr;

                access_log      /var/log/nginx/seahub.access.log;
                error_log       /var/log/nginx/seahub.error.log;
        }

        location /seafhttp {
                rewrite ^/seafhttp(.*)$ $1 break;
                proxy_pass http://127.0.0.1:8082;
                client_max_body_size 0;
                proxy_connect_timeout  36000s;
                proxy_read_timeout  36000s;
        }

        location /media {
                root /opt/seafile/seafile-server-latest/seahub;
        }
}

Now just create a proper link for sites-enabled and restart nginx:

ln -s /etc/nginx/sites-available/yourdomain.com /etc/nginx/sites-enabled/yourdomain.com
service nginx restart
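If nginx refuses to come back up, a plain syntax check of the configuration usually points at the offending line:

nginx -t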

From the browser you should be able to test things now, https://pi.

automatically start everything

Let us make them known as services to be run on startup. This should not be too hard, but it turned out to be a hairy problem.

Just using update-rc.d on the already present files won't work. seahub will start, but seafile will not.

Just putting service seafile start; service seahub start into /etc/rc.local will not work either. That way seafile will start, but seahub will not. Oh my.

Also, seahub tries to find seafile. (... ... ...)

Long story short: open /etc/init.d/seafile and, at about line 150, comment out the call to the "warning_if_seafile_not_running" function:

function before_start() {
    check_python_executable;
    validate_ccnet_conf_dir;
    read_seafile_data_dir;

    #warning_if_seafile_not_running;

That way the check is turned off and seahub will come up. I honestly have no idea where the problem lies, but it's related to no proper start scripts being provided.

In the official manual there is a skeleton that you can adapt... Sadly that stuff is pretty outdated. Also I simply chose not to put up with it, as it would just wrap the scripts we are already using.

By now this article is finished, and you should have a raspberry with a working seafile+webdav install.

offtopic

For fun and educational purposes, some words on initialization scripts, through a very pointed little anecdote:

Wrapping scripts with another script will cause endless headaches when things go haywire.

On a legacy system running a rather complex web application with borked initscripts, three really great people could not find the error over the course of about 1.5 years, and not for lack of trying. File encodings, no proper initscripts, a subcontractor playing dumb (and not really having a clue; developers just ain't sysadmins, and restarting via 'their' scripts did work, after all), all on a medium-sized clustered (but partly dysfunctional, of course) production system with harsh uptime requirements. To make everything worse, several people on the customer side had nagios notifications for EVERY SINGLE SERVICE, each checked every 5 minutes. You could not count the SMS arriving when a host went down. Even restarting a service could cause a MESS, which of course did not help with locating the error. The initscripts (several application instances running on each of the machines) wrapped scripts which wrapped a script which wrapped scripts. Encoding was set in several applications, in the system, and also at boot time within grub. You name it, a puzzle in a puzzle in a puzzle in a puzzle.

I love bash, but debugging bash environments built from scripts referencing each other is something you might have to do when you happen to be in hell, burning for your sins. At least that is how I imagine it.

The final result somewhere was a forgotten - after a su. Once I found it, my day was over. I will never forget the moment I found the cause, even if I get a hundred years old. WRITE PROPER INITSCRIPTS, PEOPLE!

TODO

On the TODO list for this system could be:

  • logrotate and proper logging, since these are written in /opt/seafile/logs on the SD card, which is bad
  • a ramdisk for the /tmp folder
  • a custom fail2ban setup using the seafile configs

But for now, this post is finished.

To the brave soul reading this:
I hope you liked this little write-up.

bash: fun with programming
posted on 2015-04-11 22:37:59

While strolling around and reading up on FreeBSD and its man pages, I came across the intro pages. There exist man 1 intro through man 9 intro. After having read them all, I wanted an overview of which manpages were referenced from these, which led to all this in the end.

With some messing around, this is what I ended up with finally:

[sjas@stv ~]$ MATCH=\\w\\+\([[:digit:]]\); MANPAGE="intro"; for (( i=1;i<10;i++ )); do echo "^[[33;1mman $i $MANPAGE^[[0m"; grep "$MATCH" <(man "$i" "$MANPAGE") | grep -v $(echo "$MANPAGE" | tr '[:lower:]' '[:upper:]') | grep --color "$MATCH"; done

Sidenote: Simply copy-pasting this will not work, see the ansi escape sequences part below on why. If you cannot wait, exchange the two occurrences of ^[ characters with a literal escape. Insert it via Ctrl-V followed by hitting Esc.

Since this makes use of a lot of bash tricks, a write-up might be some fun, and this post is the result. In case you don't understand something, try googling the term in question for further reference. This post is intended as a pointer on what to search for at all.

As this grew quite long I could not be bothered to copy contents of man pages or insert links of wikipedia pages, so bear with me.

preface

As most people do not have a BSD installation ready, referencing the manpages of a linux command should help. A command with several pages is needed, so how about:

man -k . | awk '{print $1}' | sort | uniq -c | grep -v -e 1 -e 2

Which will give:

  3 info
  3 open

So let's just use the 'info' man pages.

man -k will search all manpages for a given string, in our case for a literal dot, which should be included in every page. Of the output only the first column is needed, which is done via awk '{print $1}'. (Do not use cut -d' ' -f1 for things like this; it won't work if the columns are separated by several spaces.) sort the output, so duplicate commands are listed in a row, followed by uniq -c, which lists all the unique occurrences as well as their count. grep -v excludes all occurrences of either 1 or 2. (That is why -e is used for providing these, instead of piping through grep -v 1 | grep -v 2, which would work the same.)

overview

Now onto the real beef, which will look like this:

[sjas@nb ~]$ MATCH=\\w\\+\(.\); MANPAGE="info"; for (( i=1;i<10;i++ )); do echo "^[[33;1mman $i $MANPAGE^[[0m"; grep "$MATCH" <(man "$i" "$MANPAGE") | grep -v $(echo "$MANPAGE" | tr '[:lower:]' '[:upper:]') | grep --color "$MATCH"; done
man 1 info
man 2 info
No manual entry for info in section 2
man 3 info
No manual entry for info in section 3
man 4 info
No manual entry for info in section 4
man 5 info
       The Info file format is an easily-parsable representation for online documents.  It can be read by emacs(1) and info(1) among other programs.
       Info files are usually created from texinfo(5) sources by makeinfo(1), but can be created from scratch if so desired.
       info(1), install-info(1), makeinfo(1), texi2dvi(1),
       texindex(1).
       emacs(1), tex(1).
       texinfo(5).
man 6 info
No manual entry for info in section 6
man 7 info
No manual entry for info in section 7
man 8 info
No manual entry for info in section 8
man 9 info
No manual entry for info in section 9

The headlines are printed in bold yellow, the matched manpages are printed in red.

For a better explanation, the one-liner above transformed into a bash script with line numbers:

1  #!/bin/bash
2  MATCH=\\w\\+\([[:digit:]]\)
3  MANPAGE="open"
4  for (( i=1;i<10;i++ ))
5  do 
6      echo "^[[33;1mman $i $MANPAGE^[[0m"
7      grep "$MATCH" <(man "$i" "$MANPAGE") | grep -v $(echo "$MANPAGE" | tr '[:lower:]' '[:upper:]') | grep --color "$MATCH"
8  done

shebang

The shebang in line 1 consists of the magic number #!, meaning the first byte of the file represents # and the second byte !. Unix systems scan files which have their executable bit set for these. When they are found, the rest of the line is treated as the path to the interpreter with which the script should be run. Its maximum length is 128 characters due to a compile time constraint, at least in FreeBSD.

variable declaration, definition

Lines 2 and 3 declare and define two variables. These are arbitrarily called MATCH and MANPAGE by me. By convention, these are uppercase, but lowercase will work as well. When a not-yet-present var is introduced (the shell does not know of one with the same name already) via its name and a =, it is declared (memory is reserved and it is created) and assigned the null string. When something follows after the =, it is also defined at once and will hold the string which follows. Bash variables are usually untyped when used like this (it's all strings), but with the declare or typeset built-ins (see man bash and search there) you can also define a 'variable' to be an integer, an indexed or associative array, a nameref (meaning it's a symlink to another variable), to be read-only, to be exported, to automatically uppercase the string of its definition and such. But I digress...

quoting and quotation marks (or lack thereof)

"quoting" is the act of 'removing the special meaning of certain characters or words to the shell'.

The second var is just the string 'open' in double quotation marks, whereas the first is also a string, just not enclosed within any quotation marks.

There are quite some variants that can be used:

'
"
(nothing)
\'
\"

In bash, everything in between single quotes is taken literally, no EXPANSION or other substitutions will take place in between the marks. There are these kinds of expansions or substitutions:

- brace expansion
- tilde expansion
- parameter and variable expansion
- command substitution
- arithmetic expansion
- word splitting
- and pathname expansion

Look them up in the bash manual, if you are not already second-guessing your decision to read this posting.

Double quotes are used for enclosing strings, but letting bash be able to recognize these:

$ = most expansions
` = command substitutions
\ = escapes
! = history expansion

That way, the expansion mechanisms mentioned above can be used to create strings dynamically.

A single quote within double quotation marks has no quoting effect, and if you need a literal quotation mark (e.g. for passing a string of parameters to a command which is wrapped within another command) you can use pairs of \' or \".

If quoting is omitted, escape spaces and other special characters via the already mentioned escape character \, to get a coherent string, as shown in the first variable.

shell escaping and special characters

Since \, ( and ) are special characters in bash, and we want to end up with this string for the regular expression to match our manpage mentions:

'\w\+([[:digit:]])'

they have to be escaped.

regular expressions and character classes

The string itself is a regexp expressing 'match one or several (\+) word characters but no whitespace (\w), followed by an opening parens ((), an element belonging to the character class of digits, which means a number ([[:digit:]]), and finally a closing parens ())'. Character classes are part of the POSIX standard and nice to know, since they are easier to use than \s or \w and will just work regardless of implementation, as long as your system is POSIX-compliant.

for loop

Line 4 is the header of the for loop, whereas lines 5 and 8 enclose its body. The loop keeps running as long as its condition evaluates to true. Usually bash's for is used like for i in <number-sequence>; do ..., but that is not everything that is possible.

i is the control variable, which is referenced via "$i" later on, just as the other variables are. ($MANPAGE, $MATCH)

arithmetic evaluation

The (( )) parentheses trigger arithmetic evaluation for what is contained in between, which here are three statements in a row. The second statement is an expression; while it evaluates to 'true', the loop's condition is satisfied and the loop runs. Apart from that, the c-style for-loop should be self-explanatory.

This is basically the same as $(( ... )) (arithmetic expansion), the difference being the missing $. In bash, $ denotes most kinds of expansions or substitutions; references to a variable's value are prefixed with a $, too. In regular expressions, by contrast, it denotes the end of the line, just for the record.

ansi escape sequences

Line 6 is for getting some color into the shell. The ^[ is a literal escape character, needed so the ANSI escape sequences are recognized. To insert it, use Ctrl-v followed by Esc; it is a single character internally, even if its representation on screen takes two characters. You can see this when you delete it via backspace.

Usually the ANSI sequence part goes like this: <esc>[ <some numbers> m, where the [ denotes the start and m denotes the end of the escape-numbers list. 33 happens to be the number for yellow, red would be e.g. 31. The 1 just means bold. Depending on the feature set of the console/terminal emulator you use, you could use the corresponding number code to make text underlined or let it blink. The 0 disables all non-standard settings again, so the text afterwards is regularly colored and non-bold again.

Since the next part is a little bit more complex, here is line number seven from above for easier reference:

7      grep "$MATCH" <(man "$i" "$MANPAGE") | grep -v $(echo "$MANPAGE" | tr '[:lower:]' '[:upper:]') | grep --color "$MATCH"

piping

The | character denotes piping. This simply means the part left of it is executed, and the part to its right takes the left part's output as its input via a character stream. (I hope this is correct, no warranty on that. :)) Internally a pipe is created by linking file descriptors of two processes together.

process substitution

In the following, xyz will denote an arbitrary linux/unix command producing some output to the shell, in the hope that this helps understanding.

<( xyz ) denotes process substitution (also look it up in man bash ;)), where the output of the command xyz is written to a file referenced by a file descriptor, whose name is passed as an argument to the calling command grep.
If >( xyz ) were used, xyz would read from, not write to, the file referenced by the file descriptor.

Phew. This sounds way harder than it actually is.

grep <searchterm> <( xyz ) means: grep for <searchterm> in the file descriptor naming the open file that xyz has written its output to.

Process substitution and the file descriptor are used because grep can only search within files, not within an output stream, which is what our xyz command above (man <number> <manpage-name>) usually provides.
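A classic standalone illustration of process substitution, independent of the one-liner above:

# compare the contents of two directories without temporary files
diff <(ls dirA) <(ls dirB)

# show what the 'file' argument actually is: a path to a file descriptor
echo <(true)        # prints something like /dev/fd/63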

command substitution (through a subshell)

$( ... ) denotes a sub-shell, which will pass its result to its parent shell. An older form is to use a pair of backticks, but this form is deprecated:

` ... `

Prior to executing grep -v on the input it is given from the pipe, the subshell is run as a forked child of the invoking shell, which waits for it; the result is then handed to grep -v, which resumes execution.

This may sound like a contradiction to 'grep can only search in files', but it ain't. The searchterm of grep can be returned from another expression's evaluation, but the location in which to search has to be a file. As the input of grep comes from the pipe, which uses the connection of two processes' file descriptors, we close the circle.

It may also be noted that if the search term comes from an expression which hands back a list of several results, only the first result is used and searched for.

Proof:

[sjas@stv ~/test]$ grep --color $(ls -aF | grep '/' | grep './') <(ls -alhF)
/dev/fd/63:drwxr-xr-x  2 sjas  sjas     2B Apr 12 10:40 ./
/dev/fd/63:drwxr-xr-x  6 sjas  sjas    18B Apr 12 10:40 ../

The colored part of the output is just ./, as grep won't search for ../. In case you would want to achieve something like this, you'd have to use a for loop like for i in <command>; do grep --color "$i" <file>; done.

the rest

tr is just used to change every character matched with another one, here via the character classes. Each lowercase char will be exchanged with its uppercase equivalent.

For all die-hards that see this, thank you for reading.

GREP: find ip address
posted on 2015-04-07 14:35:34

When you have to have a look at all IPv4 addresses in a logfile, try this:

egrep '[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}' <filename>
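Note that this pattern also happily matches impossible addresses like 999.999.999.999. If you only want the matches themselves instead of whole lines, -o helps:

egrep -o '[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}' <filename> | sort -u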
linux: strace basics
posted on 2015-03-31 23:16:24

In the following, <function> is the executable / your program you want to have a further look at.

strace 'traces system calls and signals'. ltrace is for getting to know the library calls being made, but it is not discussed here.

write output to file

strace -o <filename> <function>

I.e.

[root@jerrylee /home/jl]# strace -o sout.log echo  

Of course, piping works, too, but since strace writes its trace to STDERR, you have to redirect that as well (&> for files, or 2>&1 when piping).
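
E.g. grepping the trace directly could look like this (a minimal sketch):

strace echo hello 2>&1 | grep open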

show function counts

strace -c <function>

I.e.

[root@jerrylee /home/jl]# strace -c echo                                       

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 24.00    0.000090          30         3           open
 21.87    0.000082           9         9           mmap
 11.47    0.000043          11         4           mprotect
  9.87    0.000037           9         4           brk
  8.80    0.000033           8         4           fstat
  6.13    0.000023           5         5           close
  5.33    0.000020          10         2           munmap
  2.93    0.000011          11         1           write
  2.93    0.000011          11         1         1 access
  2.40    0.000009           9         1           execve
  2.13    0.000008           8         1           read
  2.13    0.000008           8         1           arch_prctl
------ ----------- ----------- --------- --------- ----------------
100.00    0.000375                    36         1 total

show timestamps

strace -t <function>

I.e.

[root@jerrylee /home/jl]# strace -t echo                                       
23:24:07 execve("/bin/echo", ["echo"], [/* 57 vars */]) = 0
23:24:07 brk(0)                         = 0x2377000
23:24:07 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff7d2efe000
23:24:07 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
23:24:07 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
23:24:07 fstat(3, {st_mode=S_IFREG|0644, st_size=124895, ...}) = 0
23:24:07 mmap(NULL, 124895, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7ff7d2edf000
23:24:07 close(3)                       = 0
23:24:07 open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
benchmarking: disc access speeds
posted on 2015-03-29 21:15:08

This here is just for the record:

[root@jerrylee /home/jl]# for i in dd home/dd; do dd if=/dev/zero of=/"$i"/test bs=1M count=1024 oflag=direct; done
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.63433 s, 191 MB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 14.0452 s, 76.4 MB/s
[root@jerrylee /home/jl]# for i in dd home/dd; do dd if=/dev/zero of=/"$i"/test bs=1M count=1024 oflag=sync; done  
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 11.7655 s, 91.3 MB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 46.4223 s, 23.1 MB/s
[root@jerrylee /home/jl]# for i in dd home/dd; do dd if=/dev/zero of=/"$i"/test bs=1M count=1024; done
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.83701 s, 585 MB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 6.30389 s, 170 MB/s
[root@jerrylee /home/jl]# hdparm -t /dev/sda; hdparm -t /dev/sdb               

/dev/sda:
 Timing buffered disk reads: 756 MB in  3.00 seconds = 251.76 MB/sec

/dev/sdb:
 Timing buffered disk reads: 236 MB in  3.02 seconds =  78.02 MB/sec

One was an ordinary hdd, the other an ssd. I cannot be bothered to look up the model names currently.

curl: setting a user agent
posted on 2015-03-26 16:28:17

When trying to curl an https site, and the site is run on an Apache with mod_security and the OWASP rule package, you may get an HTTP 403 error.

This is due to 'them' blocking every http client that does not seem to be a browser.

This:

curl -k https://<server> -A 'Mozilla/4.0'

will fix this for testing purposes.

upstart manual
posted on 2015-03-26 10:17:13

Ubuntu, as well as RHEL 6.6 (6.x?), uses upstart for system initialization during boot up.

If you need help for creating the init scripts, see the official manual.

RHEL: configure static ip
posted on 2015-03-24 01:13:02

From somewhere on the internet I found this handy gist, which got some improvements:

## Configure eth0
#
# vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE="eth0"
NAME="eth0"
TYPE=Ethernet
ONBOOT=yes
HWADDR=A4:BA:DB:37:F1:04
IPADDR=192.168.1.44
PREFIX=24
BOOTPROTO=static
UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03


## Configure Default Gateway
#
# vi /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=centos6
GATEWAY=192.168.1.1


## Restart Network Interface (as root)
#
### DONT!
/etc/init.d/network restart
### DO!
ifdown eth0; ifup eth0

## Configure DNS Server
#
# vi /etc/resolv.conf

nameserver 8.8.8.8 # Replace with your nameserver ip
nameserver 192.168.1.1 # Replace with your nameserver ip 

This may be expanded later on, this is just a quick post.

IBM DB/2: Introduction and .csv export
posted on 2015-03-16 11:12:00

overview

IBM DB/2 is a relational database, but it sports quite a few more features than e.g. mysql, and it differs quite a bit from the latter. This here should serve as an overview on how to use its CLI and some basic commands, when you are in dire need. ;)

structure

db2 uses linux system users. This means, to access the database you have to be logged in as the right user, which has database access granted.

For finding out which user is the one you need, simply log in as each one (su db2username, try looking them up in /etc/passwd) and issue a db2 at the shell prompt.

If it was the right user, it should look like this:

[user@host root]$ db2
(c) Copyright IBM Corporation 1993,2007
Command Line Processor for DB2 Client 10.5.0

You can issue database manager commands and SQL statements from the command 
prompt. For example:
    db2 => connect to sample
    db2 => bind sample.bnd

For general help, type: ?.
For command help, type: ? command, where command can be
the first few keywords of a database manager command. For example:
 ? CATALOG DATABASE for help on the CATALOG DATABASE command
 ? CATALOG          for help on all of the CATALOG commands.

To exit db2 interactive mode, type QUIT at the command prompt. Outside 
interactive mode, all commands must be prefixed with 'db2'.
To list the current command option settings, type LIST COMMAND OPTIONS.

For more detailed help, refer to the Online Reference Manual.

db2 =>

Trying with the wrong user will simply end in a bash: db2: command not found or the like.

basic commands

These should be the most used db2 sql commands when using the CLI via the db2 frontend.

To start simply write db2 while being logged in as the right user.

using help

# show commands
?
# show help on command
? <command>

connecting / disconnecting

# open connection so you can use sql statements
connect to <dbname>
# disconnect, but leave db2 cli running
connect reset
# disconnect and exit db2 cli
terminate
# exit client
quit

getting information on the database and its structure

# list databases
list database directory

If this is too unwieldy, try this from a shell prompt:

# list database's name from shell prompt
db2 list database directory | grep -i 'database name' | awk '{print $4}'

Now onto the internal structure:

# show all tables from all schemas
list tables for all

# show all tables for a specific schema
list tables for schema <schemaname>

# get table structure
describe table <schemaname>.<tablename>

# show schemas
select distinct tabschema from syscat.tables
## also, but I prefer the above for its more terse output
select schemaname from syscat.schemata

# show users
select distinct owner from syscat.tables

In syscat.tables there is also other information you might want to know; it's sort of the counterpart of the mysql table in the mysql database of a mysqld installation, as far as I can tell. (The 'mysql table in database mysql in a mysql database management system installation' is correct. If you do not get it, read up on your basics, seriously.)

export to .csv

This is done easiest from a shellscript. Developing it may take some more time, but usually you will need it again in the future, and grepping through the shell's history ain't the way to go.

touch mydb2script.sh
chmod 755 mydb2script.sh

Open the file mydb2script.sh and edit it to look like this:

#!/bin/bash
db2 connect to <databasename>
db2 "export to <filename>-$(date +%Y%m%d-%H.%M).csv of del modified by chardel\"\" coldel; decpt. select * from <databaseschemaname>.<tablename>"
db2 terminate

Read the above as 'export to <file> of <format> ... <sql query>', then the 'strange' syntax will make sense. The delimiter stuff just adjusts the export settings.

I'd indent this like here, no idea if this makes sense to you:

export
    filename
of
    delimiter
        modified by
            chardelimiter '""'
            columndelimiter ';'
            decimalpoint '.'
<SQL QUERY>

I honestly do not know for sure if the terminate at the end is necessary, but it does not hurt either, I guess. (Always close your resources if you do not need them anymore...) Since this is intended to be used as a cronjob, testing this without the connection reset is not an option: the system I am working on is a production system, and I sure as hell do not want to shoot it down some time in the future due to too many open database connections. (By then I will have forgotten about the cron already, of course, or a colleague of mine will have to hunt it down without knowing anything about the changes.) There are quite a lot of connections to the DB already, so troubleshooting this one-connection-at-a-time is also... NOT an option. :)

Redirect the command's output to /dev/null in case you want this as a cron job.
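
A hypothetical crontab entry for that could look like this (path and schedule are made up):

0 2 * * * /home/db2inst1/mydb2script.sh > /dev/null 2>&1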

That should be about enough to start working with a db2 install you do not know much about. :)

S.M.A.R.T.: Monitoring
posted on 2015-03-10 02:44:23

Being able to access the S.M.A.R.T. status gives you a better overview of the health of your hardware.

That way you can change failing hardware before it ultimately fails and causes even more havoc.

install

# rhel-based
yum install -y smartmontools
# debian-based
aptitude install -y smartmontools

usage

smartctl -a /dev/sda

This gives you all the info on the disk. In case you need something more specific, use man smartctl; there is a lot more there than shown here.
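
For a monitoring check, the overall health self-assessment alone may already be enough:

smartctl -H /dev/sda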

If you have ever had the case of two failing disks on a six-disc raid10 array, you might get the idea why this could help you, and why you should include a check into your nagios / icinga / whatever monitoring. ;)

If your HDDs are of the same age, a rebuild onto the new disk could, due to its hard-working nature (lots of r/w operations), make another disk fail. As harddisks usually are of the same age when a disk replacement occurs for the first time in a system, this case is more likely than you would like.

ssh for remote backups
posted on 2015-03-09 12:32:56

To backup a system's files, you usually employ scp. This is fine, as long as you want to backup only regular files.

If you want to backup non-regular files, this won't work and you will need ssh.

Especially:

tar cvJf - <folder> | ssh -T -c blowfish -e none <user>@<host> "cat > /backup.tar."

Here are some hacks contained within:

  1. -T to prevent allocation of a pseudo-terminal, so redirection works
  2. -c blowfish to use blowfish instead of the default 3DES encryption, which is faster
  3. -e none so no escape character is used. That way the transfer cannot kill the connection if <escapecharacter>. shows up in the data stream. (Usually it is this one: ~.)

If this stuff is not done, your transfer may or may not work.

Thanks to Jan Engelhardt of inai.de for this gem.

linux: force fsck on reboot
posted on 2015-03-09 02:03:13

To force a file system check on the reboot you trigger right now:

shutdown -rF now

To force a file system check on next reboot:

sudo touch /forcefsck
iptables: definitive basics
posted on 2015-03-07 16:12:02

introduction

Most of this is from the manpage anyway (man iptables), this write-up is simply aimed at getting the topic better into my head.

iptables and alternatives

iptables is the basic firewall solution on all linux systems. (To be exact, it is the frontend for the netfilter part in the kernel, but you do not need to know that.) ipchains does also exist, but you can only choose one of the two, so do yourself a favour and use the former. ipchains can also only do stateless firewalling, where each packet is looked at independently. Opposed to this is stateful firewalling, which iptables can do: stateful packet inspection (also called dynamic packet inspection) can make decisions based on connection states, see the next part for some more explanations.

Discussing anything besides iptables currently is more or less moot:

  • 2.4.x kernels and above run iptables
  • 2.2.x kernels run ipchains
  • 2.0.x kernels run ipfwadm.

This will change with nftables, which should arrive with kernel 3.13 AFAIK. By then another posting like this one will become necessary, I fear. :)

connection states

iptables can filter packets by ip data as well as by connection (stream) state. 'connection', 'connection stream' and 'stream' are synonyms in the following. These states are easiest explained with parts of TCP's three-way handshake, but keep in mind there are also UDP and ICMP. See here.

NEW
    the first packet of a connection stream, i.e. a SYN packet
    stream is classified as NEW
ESTABLISHED
    a connection was initiated through a SYN packet
    SYN/ACK'd through a second packet in reverse
    then all following packets of this stream are of this state
RELATED
    if an already ESTABLISHED connection stream spawns a new connection
    the new connection will be RELATED
    example is FTP's data channel set up by an ESTABLISHED control channel
INVALID
    packets having no state and being unidentifiable
UNTRACKED
    packets marked with the raw table's NOTRACK target show up as UNTRACKED
    i.e. for traffic on port 80 of a highly frequented webserver, to save resources.
    Sidenote: 'related' streams cannot be tracked either!
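
As a quick illustration (a sketch, not a complete ruleset), rules using these states could look like this:

# let packets of already established or related streams pass
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# accept new incoming ssh connections
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT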

fwbuilder

If you have absolutely no idea how to build an iptables FW by yourself, try fwbuilder, which is a GUI where you enter your rules. The result can be compiled afterwards into an iptables script. Do not forget to also install the fwbuilder-ipt package, which you need to compile the iptables rules. There also exists a backend to create a pf FW script, along with others.

iptables system structure

There exist three building blocks:

  1. tables
  2. chains
  3. rules

Each table contains a set of chains, where each chain is an assortment of rules. The chains are parsed rule by rule; if no rule matches, the default policy is applied. Whether all rules are parsed depends on the rule design.

The basic tables are filter, nat and mangle. raw and security also exist. Usually you can forget everything besides filter (which is the default table, used if you choose none) and maybe nat sometimes.

The mangle table is interesting for marking packets and rule-based routing, to implement traffic engineering for QoS. If you have no idea what this is about, leave that stuff alone. :)

default tables and chains, ordering

Here's a list of all tables with all default chains along with an explanation which chain will be active on which packets.

filter = default table
    INPUT - packets destined locally
    FORWARD - routed packets
    OUTPUT - packets with external destination

nat = looked up when packets initiate a new connection
    PREROUTING - alters packets ASAP at arrival
    OUTPUT - alter locally generated packets before routing
    POSTROUTING - alter packets just before they go out

mangle = packet alteration 
    INPUT - alter incoming packets
    PREROUTING - alter incoming packets before routing
    OUTPUT- alter locally generated packets before routing
    FORWARD - packets being routed through the box
    POSTROUTING - alter packet after routing applied

raw = add exemptions from connection tracking, table looked up prior to anything else
    PREROUTING - all packets arriving on all interfaces
    OUTPUT - packets generated by local addresses

security = MAC networking rules, selinux stuff, called after filter table
    INPUT - incoming packets
    OUTPUT - alter locally generated packets before routing
    FORWARD - alter packets routed through the box

If this is rocket science, you can try the wikipedia graph here.

default commands / flags

These are to be used in the order presented here.

select your table

# omitting means implicit '-t filter'
-t <table>
    specify table

day-to-day commands

-L [<chain>]
    LIST chains + rules for current table

-S [<chain>]
    SHOW rules' code being active for current table

-I <chain> [<rulenumber>] <rule>
    INSERT rule at rulenum, prepend if no rulenum given

-A <chain> <rule>
    APPEND rule to given table
    (most often -I is needed, as append rules often don't even get hit)

-D <chain> <rule>|<rulenumber>
    DELETE rule for current table and given chain
    (--line-numbers for lookup helps a lot here)

-Z [<chain> [<rulenumber>]]
    ZERO packet counts

Lesser used:

-R <chain> <rulenumber> <rule>
    REPLACE command at line <rulenumber> (remember --line-numbers?)
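
Put together, a short session with these could look like this (port and chain are just examples):

# list the filter table's INPUT chain with rule numbers
iptables -L INPUT -vnx --line-numbers
# insert a rule at position 1
iptables -I INPUT 1 -p tcp --dport 80 -j ACCEPT
# delete it again via its rule number
iptables -D INPUT 1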

cleanup commands

These are needed, in this order, to create a new, clean layout:

-F
    FLUSH all rules
-X
    delete all chains (flush previously!)
-P
    set default POLICY (DROP? REJECT? ACCEPT?)
-N
    create a NEW user-defined chain

After FLUSHING, deleting and setting INPUT and OUTPUT to the default POLICY ACCEPT, you have effectively deactivated iptables.
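
Spelled out, that reset boils down to this (FORWARD added for completeness):

iptables -F
iptables -X
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT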

parameters for rule creation

Here a lot could be written, but that is better left for googling. Be it on the -p, -s, -d flags, all you need is the internet.

However, there is not a lot to be found on the -m documentation, or on which modules are present on a system at all.

To get some sort of overview what can be done with the netfilter modules being present on your linux system:

for i in /lib/modules/$(uname -r)/kernel/net/netfilter/*; do echo "\e[33;1m$(basename "$i")\e[0m"; strings "$i" | \grep -e description -e depends| sed -e 's/Xtables: //g' -e 's/=/: /g' -e 's/depends=/depends on: /g'; echo; done

That is ugly, but worth a look.

Further, if you wonder if a specific module / match / -m flag is possible on your system, try this:

iptables -m <modulename> --help

I.e. limit is present, as can be seen at the end of the help output:

[sjas@nb ~]$ iptables -m limit --help
iptables v1.4.21

Usage: iptables -[ACD] chain rule-specification [options]
       iptables -I chain [rulenum] rule-specification [options]
       iptables -R chain rulenum rule-specification [options]
       iptables -D chain rulenum [options]
       iptables -[LS] [chain [rulenum]] [options]
       iptables -[FZ] [chain] [options]
       iptables -[NX] chain
       iptables -E old-chain-name new-chain-name
       iptables -P chain target [options]
       iptables -h (print this help information)

Commands:


...


[!] --version   -V              print package version.

limit match options:
--limit avg                     max average match rate: default 3/hour
                                [Packets per second unless followed by 
                                /sec /minute /hour /day postfixes]
--limit-burst number            number to match in a burst, default 5
[sjas@nb ~]$ 

Whereas iplimit is not:

[sjas@nb ~]$ iptables -m iplimit --help
iptables v1.4.21: Couldn't load match `iplimit':No such file or directory

Try `iptables -h' or 'iptables --help' for more information.
[sjas@nb ~]$ 

That way you also get an easy overview of how to use a module in question, since info on the -m flags is basically non-existent in the iptables man page.

actions on packets

What happens to a packet is chosen through these:

-j <target>
    move packet to chain which is specified as JUMP target
    or use ACCEPT, DROP or REJECT targets
    RETURN used in a built-in chain tells that the chain policy decides the packet fate
    RETURN used in a user-defined chain tells to proceed in the superior chain with the next rule
    (after the one which let us jump into this user-defined chain in the first place)

-g <chain>
    if a packet is RETURNed from the GOTO chain entered via -g, processing continues in the chain that was last entered via -j (not in the chain containing the -g rule)
    if you end up in a built-in chain, and no rule can be found, the default policy will hit

<nothing>
    if no action is specified, the rule is still nice to have for debugging (and for 'watch'-ing iptables output):
    although nothing happens to the packet, its packet counter is still active, showing you whether the rule matches or not

additional parameters

--line-numbers
    show rulenumbers in first column, helps when using -D
-v
    verbose mode
-n
    numeric mode: ip's/ports are shown without DNS or service resolution
-x
    exact numbers, means no kilo or mega sizes

These can also be specified i.e. -L -vnx.

Or -vnxL.

a working example

A sample configuration with some sane defaults can be found here now. I have also included colored/noncolored output and a watch shortcut for checking chains for activity easily.

Place the following into /etc/init.d/firewall, if you do not use systemd.

#!/bin/bash
### BEGIN INIT INFO
# Provides:          firewall
# Required-Start:    mountall
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: start firewall
### END INIT INFO
#
#### required packages: libnetfilter-conntrack3 libnfnetlink0
## /etc/sysctl.d/iptables.conntrack.accounting.conf
## -> net.netfilter.nf_conntrack_acct=1

# aliasing
IPTABLES=$(which iptables)
# set IF to work on
O=eth0
I=eth0


# load kernel modules
modprobe ip_conntrack
modprobe ip_conntrack_ftp

case "$1" in

    start)
        echo 60 > /proc/sys/net/ipv4/tcp_fin_timeout
        echo 0 > /proc/sys/net/ipv4/tcp_ecn

        echo -n "Starting stateful paket inspection firewall... "

        # delete/flush old/existing chains
        $IPTABLES -F
        # delete undefined chains
        $IPTABLES -X

        # built-in chains (INPUT, OUTPUT, FORWARD) always exist and cannot be
        # created with -N, only the custom LOGDROP chain below needs creating

        # create log-drop chain
        $IPTABLES -N LOGDROP

        # set default chain policies, accept all outgoing traffic per default
        # (-P only accepts built-in targets, so DROP is used here; the catch-all
        # rule at the end still routes unmatched packets through LOGDROP for logging)
        $IPTABLES -P INPUT DROP
        $IPTABLES -P OUTPUT ACCEPT
        $IPTABLES -P FORWARD ACCEPT

        # make NAT Pinning impossible
        $IPTABLES -A INPUT -p udp --dport 6667 -j LOGDROP
        $IPTABLES -A INPUT -p tcp --dport 6667 -j LOGDROP
        $IPTABLES -A INPUT -p tcp --sport 6667 -j LOGDROP
        $IPTABLES -A INPUT -p udp --sport 6667 -j LOGDROP
        $IPTABLES -A OUTPUT -p tcp --dport 6667 -j LOGDROP
        $IPTABLES -A OUTPUT -p udp --dport 6667 -j LOGDROP
        $IPTABLES -A OUTPUT -p tcp --sport 6667 -j LOGDROP
        $IPTABLES -A OUTPUT -p udp --sport 6667 -j LOGDROP

        # drop invalids
        $IPTABLES -A INPUT -m conntrack --ctstate INVALID -j LOGDROP

        # allow NTP and established connections
        $IPTABLES -A INPUT -p udp --dport 123 -j ACCEPT
        $IPTABLES -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        $IPTABLES -A INPUT -i lo -j ACCEPT

        # pings are allowed
        $IPTABLES -A INPUT -p icmp --icmp-type 8 -m conntrack --ctstate NEW -j ACCEPT

        # drop not routable networks
        $IPTABLES -A INPUT -i $I -s 169.254.0.0/16 -j LOGDROP
        $IPTABLES -A INPUT -i $I -s 172.16.0.0/12 -j LOGDROP
        $IPTABLES -A INPUT -i $I -s 192.0.2.0/24 -j LOGDROP
        #$IPTABLES -A INPUT -i $I -s 192.168.0.0/16 -j LOGDROP
        #$IPTABLES -A INPUT -i $I -s 10.0.0.0/8 -j LOGDROP
        $IPTABLES -A INPUT -s 127.0.0.0/8  ! -i lo -j LOGDROP




        # OPEN PORTS FOR USED SERVICES

        ## SSH
        $IPTABLES -A INPUT -i $I -p tcp -m conntrack --ctstate NEW --dport 22 -j ACCEPT

        ## HTTPD
        #$IPTABLES -A INPUT -i $I -p tcp -m conntrack --ctstate NEW --dport 80 -j ACCEPT
        #$IPTABLES -A INPUT -i $I -p tcp -m conntrack --ctstate NEW --dport 443 -j ACCEPT

        ## OVPN
        #$IPTABLES -A INPUT -i $I -p udp -m conntrack --ctstate NEW --dport 1194 -j ACCEPT

        ## MySQL
        #$IPTABLES -A INPUT -i $I -p tcp -m conntrack --ctstate NEW --dport 3306 -j ACCEPT






        # Portscanner will be blocked for 15 minutes
        $IPTABLES -A INPUT  -m recent --name psc --update --seconds 900 -j LOGDROP

        # only use when ports not available from the internet
        $IPTABLES -A INPUT ! -i lo -m tcp -p tcp --dport 1433  -m recent --name psc --set -j LOGDROP
        $IPTABLES -A INPUT ! -i lo -m tcp -p tcp --dport 3306  -m recent --name psc --set -j LOGDROP
        $IPTABLES -A INPUT ! -i lo -m tcp -p tcp --dport 8086  -m recent --name psc --set -j LOGDROP
        $IPTABLES -A INPUT ! -i lo -m tcp -p tcp --dport 10000 -m recent --name psc --set -j LOGDROP

        ### drop ms specific WITHOUT LOGGING - because: else too much logging
        $IPTABLES -A INPUT -p UDP -m conntrack --ctstate NEW --dport 137:139 -j DROP
        $IPTABLES -A INPUT -p UDP -m conntrack --ctstate NEW --dport 67:68 -j DROP

        # log packets to be dropped and drop them afterwards
        $IPTABLES -A INPUT -j LOGDROP
        $IPTABLES -A LOGDROP -j LOG --log-level 4 --log-prefix "dropped:"
        $IPTABLES -A LOGDROP -j DROP

        echo "Done."
    ;;

    stop)
        echo -n "Stopping stateful paket inspection firewall... "
        /etc/init.d/fail2ban stop
        # flush
        $IPTABLES -F
        # delete
        $IPTABLES -X
        # set default to accept all incoming and outgoing traffic
        $IPTABLES -P INPUT ACCEPT
        $IPTABLES -P OUTPUT ACCEPT
        echo "Done."
    ;;

    restart)
        echo -n "Restarting stateful paket inspection firewall... "
        echo -n
        /etc/init.d/firewall stop
        /etc/init.d/firewall start
        /etc/init.d/fail2ban start
    ;;

    status)
        $IPTABLES -L -vnx --line-numbers | \
        sed ''/Chain[[:space:]][[:graph:]]*/s//$(printf "\033[33;1m&\033[0m")/'' | \
        sed ''/^num.*/s//$(printf "\033[33m&\033[0m")/'' | \
        sed ''/[[:space:]]DROP/s//$(printf "\033[31m&\033[0m")/'' | \
        sed ''/REJECT/s//$(printf "\033[31m&\033[0m")/'' | \
        sed ''/ACCEPT/s//$(printf "\033[32m&\033[0m")/'' | \
        sed -r ''/\([ds]pt[s]\?:\)\([[:digit:]]\+\(:[[:digit:]]\+\)\?\)/s//$(printf "\\\1\033[33;1m\\\2\033[0m")/''| \
        sed -r ''/\([0-9]\{1,3\}\\.\)\{3\}[0-9]\{1,3\}\(\\/\([0-9]\)\{1,3\}\)\{0,1\}/s//$(printf "\033[37;1m&\033[0m")/g'' | \
        sed -r ''/\([^n][[:space:]]\)\(LOGDROP\)/s//$(printf "\\\1\033[1;33m\\\2\033[0m")/'' | \
        sed -r ''/[[:space:]]LOG[[:space:]]/s//$(printf "\033[36;1m&\033[0m")/''
    ;;

    monitor)
        if [ -n "$2" ]
            then $(which watch) -n1 -d $IPTABLES -vnxL "$2" --line-numbers
            else $(which watch) -n1 -d $IPTABLES -vnxL --line-numbers; fi
    ;;

    *)
        echo "Usage: $0 {start|stop|status|monitor [<chain>]|restart}"
        exit 1
    ;;

esac

exit 0

See the services section of the script for how to enable things like HTTP traffic; just uncomment the lines in question.

The colors only work for IPv4 currently.

nmap: examples
posted on 2015-03-05 11:08:48

Here is a list of nmap examples which I intend to have a much closer look at (with the manpage right beside me). It was stolen from here:

# Save output to a text file
nmap 192.168.1.1 > output.txt
nmap -oN output.txt 192.168.1.1

# Scan a single ip address or hostname
nmap <ip or hostname>

# Scan an IP range and exclude ips
nmap 192.168.1.0/24 --exclude 192.168.1.5,192.168.1.254

# OS and version detection scanning
nmap -v -A 192.168.1.1

# Discover if a host/network is protected by a firewall
nmap -sA 192.168.1.254

# Scan a host when protected by the firewall
nmap -PN 192.168.1.1

# Scan an IPv6 host/address
nmap -6 <IPv6 address>

# Scan a network and discover which servers and devices are up and running
nmap -sP 192.168.1.0/24

# Fast scan
nmap -F 192.168.1.1

# Display the reason a port is in a particular state
nmap --reason 192.168.1.1

# Only show open (or possibly open) ports
nmap --open 192.168.1.1

# Show all packets sent and received
nmap --packet-trace 192.168.1.1

# Show host interfaces and routes
nmap --iflist

# Scan TCP port 80
nmap -p T:80 192.168.1.1

# Scan UDP port 53
nmap -p U:53 192.168.1.1

# Scan top ports i.e. scan <number> of most common ports
nmap --top-ports 5 192.168.1.1

# Fastest method of scanning all your devices/computers for open ports
nmap -T5 192.168.1.0/24

# Identify a remote host apps and OS
nmap -O  --osscan-guess 192.168.1.1

# Detect remote services (server / daemon) version numbers
nmap -sV 192.168.1.1

# Scan a host using a TCP SYN (PS) ping
nmap -PS 192.168.1.1

# Scan a host using a TCP ACK (PA) ping
nmap -PA 192.168.1.1

# Scan a host using IP protocol ping
nmap -PO 192.168.1.1

# Scan a host using UDP ping, bypasses firewalls and filters that only screen TCP
nmap -PU 192.168.1.1

# Stealth scan
nmap -sS 192.168.1.1

# Discover the most commonly used TCP ports using a TCP connect scan (not a stealth scan)
nmap -sT 192.168.1.1

# Discover the most commonly used TCP ports using TCP ACK scan
nmap -sA 192.168.1.1

# Discover the most commonly used TCP ports using TCP Window scan
nmap -sW 192.168.1.1

# Discover the most commonly used TCP ports using TCP Maimon scan
nmap -sM 192.168.1.1

# Discover UDP services:
nmap -sU 192.168.1.1

# Scan for IP protocol
nmap -sO 192.168.1.1

# TCP Null Scan to fool a firewall to generate a response, Does not set any bits (TCP flag header is 0)
nmap -sN 192.168.1.254

# TCP Fin scan to check firewall, Sets just the TCP FIN bit
nmap -sF 192.168.1.254

# TCP Xmas scan to check firewall, Sets the FIN, PSH, and URG flags, lighting the packet up like a Christmas tree
nmap -sX 192.168.1.254

# Scan a firewall with packet fragments to make it harder for packet filters, intrusion detection systems to detect what you are doing
nmap -f 192.168.1.1
# Set your own offset size
nmap --mtu 32 192.168.1.1

# Cloak a scan with decoys
nmap -n -Ddecoy-ip1,decoy-ip2,your-own-ip,decoy-ip3,decoy-ip4 remote-host-ip

# Spoof your MAC address
nmap --spoof-mac MAC-ADDRESS-HERE 192.168.1.1
# Add other options
nmap -v -sT -PN --spoof-mac MAC-ADDRESS-HERE 192.168.1.1

# Use a random MAC address
nmap -v -sT -PN --spoof-mac 0 192.168.1.1
Linux: 'top' explained
posted on 2015-03-04 12:54:59

To get a fast overview of what is running on your linux box, use top. (If you want some fancy graphics, try htop, but it has less intuitive shortcuts and is not always installed.)

Sad thing is, at first you don't really know what you are doing. So some guidance:

start and sane defaults

After starting top, press: z, x, c. This will color top (z), show current sort column (x) and the full application path (c).

1 will show stats for all individual cpus.

If you have no idea, use h for getting the help shown.

If you have a newer version of top, V will also work: this gives you a nice process-tree view.

d changes the update delay, which is at three seconds per default.

cpu stats explained

Straight from the manpage, the CPU statistics show the times spent in:

us = user mode
sy = system mode
ni = low priority user mode (nice)
id = idle task
wa = I/O waiting
hi = servicing IRQs
si = servicing soft IRQs
st = steal (time given to other DomU instances)

If you have low cpu and ram usage but the system is unresponsive, have a look at the wait times.

sorting and searching

Changing the sort column can be done via < and >.

Also available: (not shown in help)

N sort by PID
P sort by CPU usage
M sort by memory usage
T sort by time

R will reverse the output.

u to choose user name, show only this user's processes.

S for cumulative time toggling.

columns

f will toggle a window in which you can choose the info fields to be shown. Pressing the character will toggle its state. (Shown or not shown.)

o also opens a window; in there you can reorder the columns. Press the character of the column you want to move; depending on it being upper- or lowercase, it gets moved up or down.

manipulate tasks

These should be self-explanatory:

k kill task

r renice task

colored iptables output
posted on 2015-02-27 00:32:21

To get colored iptables output, try this monster:

iptables -L -vnx --line-numbers | sed ''/Chain.*/s//$(printf "\033[33;1m&\033[0m")/'' | sed ''/[ds]pt:.*/s//$(printf "\033[31;1m&\033[0m")/'' | sed ''/[ds]pts:.*/s//$(printf "\033[31;1m&\033[0m")/'' | sed -r ''/\([0-9]\{1,3\}\\.\)\{3\}[0-9]\{1,3\}\(\\/\([0-9]\)\{1,3\}\)\{0,1\}/s//$(printf "\033[36;1m&\033[0m")/g''

Ugly as shit could ever be, but it's the only way I found to do this. Also a little buggy, as some colors are a bit off, but still better than vanilla.

UPDATE: some fixes and better coloring and way more regex madness

iptables -L -vnx --line-numbers | \
sed ''/Chain[[:space:]][[:graph:]]*/s//$(printf "\033[33;1m&\033[0m")/'' | \
sed ''/^num.*/s//$(printf "\033[33m&\033[0m")/'' | \
sed ''/[[:space:]]DROP/s//$(printf "\033[31m&\033[0m")/'' | \
sed ''/REJECT/s//$(printf "\033[31m&\033[0m")/'' | \
sed ''/ACCEPT/s//$(printf "\033[32m&\033[0m")/'' | \
sed -r ''/\([ds]pt[s]\?:\)\([[:digit:]]\+\(:[[:digit:]]\+\)\?\)/s//$(printf "\\\1\033[33;1m\\\2\033[0m")/''| \
sed -r ''/\([0-9]\{1,3\}\\.\)\{3\}[0-9]\{1,3\}\(\\/\([0-9]\)\{1,3\}\)\{0,1\}/s//$(printf "\033[37;1m&\033[0m")/g'' | \
sed -r ''/\([^n][[:space:]]\)\(LOGDROP\)/s//$(printf "\\\1\033[1;33m\\\2\033[0m")/'' | \
sed -r ''/[[:space:]]LOG[[:space:]]/s//$(printf "\033[36;1m&\033[0m")/''

And something to copy paste more easily, slightly modified again:

iptables -L -vnx --line-numbers | sed ''/Chain[[:space:]][[:graph:]]*/s//$(printf "\033[33;1m&\033[0m")/'' | sed ''/^num.*/s//$(printf "\033[33m&\033[0m")/'' | sed ''/[[:space:]]DROP/s//$(printf "\033[31m&\033[0m")/'' | sed ''/REJECT/s//$(printf "\033[31m&\033[0m")/'' | sed ''/ACCEPT/s//$(printf "\033[32m&\033[0m")/'' | sed -r ''/\([ds]pt[s]\?:\)\([[:digit:]]\+\(:[[:digit:]]\+\)\?\)/s//$(printf "\\\1\033[33;1m\\\2\033[0m")/''| sed -r ''/\([0-9]\{1,3\}\\.\)\{3\}[0-9]\{1,3\}\(\\/\([0-9]\)\{1,3\}\)\{0,1\}/s//$(printf "\033[36;1m&\033[0m")/g'' | sed -r ''/\([^n][[:space:]]\)\(LOGDROP\)/s//$(printf "\\\1\033[1;33m\\\2\033[0m")/'' | sed -r ''/[[:space:]]LOG[[:space:]]/s//$(printf "\033[36;1m&\033[0m")/''| sed ''/CATCH-DROP/s//$(printf "\033[31m&\033[0m")/''
fritzbox: install ssh server
posted on 2015-02-22 17:38:27

After having enabled the telnet access to your fritz box, which involves a phone connected to the device and dialing a number as described here, connect to its ip:

connect via telnet

[jl@jerrylee ~]% telnet 10.0.0.1                                               
Trying 10.0.0.1...
Connected to 10.0.0.1.
Escape character is '^]'.
password: 


BusyBox v1.20.2 (2014-09-26 13:25:19 CEST) built-in shell (ash)
Enter 'help' for a list of built-in commands.

ermittle die aktuelle TTY
tty is "/dev/pts/0"
Console Ausgaben auf dieses Terminal umgelenkt
disable start/stop characters and flowcontrol
#

check architecture

Depending on the architecture of the fritzbox cpu, you need a different binary. Older fritzboxes had mipsel cpus whereas newer ones have mips ones. You may find this here helpful. Later on, this check is integrated into the install script, so no real need to bother with it now.

install overview

Several steps are needed to achieve what is desired (automated in the next section):

  1. set a root password

  2. copy the hashed password

  3. check cpu architecture

  4. install appropriate dropbear ssh server, depending on the platform

actual installation

From there on, do these steps: (tried to make these foolproof by using absolute paths)

cd /var
/usr/bin/wget http://www.spblinux.de/fbox.new/cfg_dropbear
chmod 755 /var/cfg_dropbear

In case you wondered what this 'spblinux' distro is, this is what the sourceforge page tells you:

SPBLinux: 
    modular mini distribution running completely in RAM
    can be booted from USB
    based on Busybox and Midnight Commander
    optional with DirectFB and (since version 2.1) Mozilla
    it is possible to create/modify own modules inside SPB:Linux.
ASA: access console via serial port
posted on 2015-02-21 18:02:56

To connect to one of Cisco's ASA's (short for Adaptive Security Appliance), you have several options.

Either use the management ethernet port (labelled MGMT) or the serial interface (CONSOLE), which are both rj45 outlets. These methods of access are the same for most other hardware appliances.

If the ASA was not accessed in a while and the network config was lost (or if it's a leftover from an old customer), you are likely unable to access it through the management port, because you no longer know which subnet you have to be in to connect to it.

If you still happen to know your credentials, you might try the serial interface.

If your computer has a serial interface, too, you only need an rs232-to-rj45 cable for the asa. If you have a laptop, it's much more likely that you lack the serial port; then you also need a serial-to-usb adapter in addition to the serial-to-ethernet cable.

From here the steps differ, depending on your operating system.

windows

  1. plug in the adapter, which is connected to the devices CONSOLE port, too
  2. open the device manager
  3. look up which COM port just got added
  4. open putty
  5. connection destination is e.g. COM7, if that's the one you saw
  6. enter baud rate (9600 for cisco devices AFAIK)
  7. connect

You should be greeted by a prompt of the ASA. Hit space, in case putty does not update your console window.

linux

  1. plug in the adapter connected to the ASA
  2. ls -alh /dev/tty*
  3. You should see a device called something like /dev/ttyUSB0
  4. sudo screen /dev/ttyUSB0 9600, with baud rate of 9600 like mentioned in the windows manual above
  5. you should be connected, hit spacebar if nothing is shown.

If you happen to have problems to find out which device is added when you insert the adapter into your usb port, try:

watch --differences -n.2 ls /dev/tty*
bash: combined dns-reverse-dns-lookup
posted on 2015-02-20 12:27:15

On dns lookups at work

While working with domains, you often need to do a dns lookup to find out the ip of the machine in question (at least when working with several hundred web servers ;)), followed by a reverse dns lookup on the ip to find out the actual hostname. The regular hostname is just easier to remember than the IP. It's bad enough with IPv4, and will become worse with IPv6.

I.e. usually you do something like this:

[sjas@ctr-014 ~]% host ix.de
ix.de has address 193.99.144.80
ix.de has IPv6 address 2a02:2e0:3fe:1001:302::
ix.de mail is handled by 10 relay.heise.de.
ix.de mail is handled by 50 secondarymx.heise.de.
[sjas@ctr-014 ~]% host 193.99.144.80
80.144.99.193.in-addr.arpa domain name pointer redirector.heise.de.
[sjas@ctr-014 ~]%

or this:

[sjas@ctr-014 ~]% dig ix.de +short
193.99.144.80
[sjas@ctr-014 ~]% dig -x 193.99.144.80 +short
redirector.heise.de.
[sjas@ctr-014 ~]%

This can be 'shortened' into a single step with proper output:

[sjas@ctr-014 ~]% echo ${$(dig -x $(dig ix.de +short) +short)%?}
redirector.heise.de
[sjas@ctr-014 ~]%

proper solution

Since this is kind of unhandy (and let's be honest, bash sucks sometimes), just place it into a function definition in your .bashrc:

rdns() {
    # in plain bash the dig output has to go into a variable first (the ${$(...)%?} construct above is zsh syntax)
    local ptr; ptr=$(dig -x "$(dig "$1" +short)" +short)
    echo "${ptr%?}"
}

An in-depth explanation of this bash 'gem' will be added here, if I do not forget to add it in the near future. :)

Which lets you do:

[sjas@ctr-014 ~]% rdns ix.de
redirector.heise.de

script explanation

In short:

echo "${$(dig -x $(dig $1 +short) +short)%?}"

Echo a string...

echo "                                      "

... which stems out from a combination of parameter expansion...

      ${               $1                  }"

... wherein also a suffix is removed, in this case ? representing a single char.

                                         %?

Then a subshell is used to run the outer dig command...

        $(dig                     +short)

... which in turn runs another dig call in another subshell...

                 $(dig    +short)

Use man bash to get further info on this stuff.

Linux: Find out HDD serial number
posted on 2015-02-17 15:30:39

To find out the serial number of your harddisk, look up your /dev/sdX name via lsblk:

[sjas@ctr-014 ~]% lsblk -i
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 111.8G  0 disk 
|-sda1   8:1    0   108G  0 part /
`-sda2   8:2    0   3.8G  0 part [SWAP]
sr0     11:0    1  58.1M  0 rom

The device here is /dev/sda, as it contains my root partition as can be seen from the mountpoint information.

The -i flag is for ascii-output mode. That way copy pasting works better.

Then hdparm -i will give you the needed information:

[sjas@ctr-014 ~]% sudo hdparm -i /dev/sda

/dev/sda:

 Model=Samsung SSD 840 EVO 120GB, FwRev=EXT0BB6Q, SerialNo=S1BUNSAF306489A
 Config={ Fixed }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=unknown, BuffSize=unknown, MaxMultSect=1, MultSect=1
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=234441648
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio1 pio2 pio3 pio4 
 DMA modes:  mdma0 mdma1 mdma2 
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6 
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: unknown:  ATA/ATAPI-2,3,4,5,6,7

 * signifies the current active mode
Writing udev rules
posted on 2015-02-17 12:34:16

preface

The following applies to current ubuntu and fedora installations, i.e. 14.04 and 21.

Renaming hardware, especially network interfaces, can be done via udev. Why would you want that?

biosdevnames and its friends are the latest craze: Read up on predictable network interface names.

So there are two approaches:

  1. edit grub to disable net.ifnames and biosdevname and rename the basic eth's
  2. directly rename the interface names, after the biosdevname stuff was applied

I personally prefer the first approach, but this post will cover both approaches, as the udev syntax becomes clearer through the difference.

net.ifnames and biosdevname grub changes

Some example, what these and their combinations will cause:

No parameters: NIC identified as enp5s2.

only biosdevname=0: NIC identified as enp5s2.

only net.ifnames=0: NIC identified as em1.

Parameter net.ifnames=0 AND biosdevname=0: NIC identified as eth0.

approach 1

applying changes to grub

  1. edit /etc/default/grub
  2. insert GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0" (or append both vars, in case the string was not empty in your config before)
  3. save, quit
  4. grub2-mkconfig -o /boot/grub2/grub.cfg
  5. reboot the server

changes via udev

lookup the MAC address of your NIC:

Via ip a or ip l or directly in the address file in /sys/class/net/<if-name>/address, whatever you like best. ;)

[sjas@ctr-014 ~]% ip a

...

2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether AA:BB:CC:DD:EE:FF brd ff:ff:ff:ff:ff:ff
    inet6 fe80::20e:fff:ffff:4f6f/64 scope link 
       valid_lft forever preferred_lft forever

...

The adress AA:BB:CC:DD:EE:FF is of course not my real mac address. ;) But the mac is what we need here.

edit /etc/udev/rules.d/70-persistent-net.rules

Open the file in an editor of your choosing. If it does not exist, create it. There, either edit already existing entries for your NIC (look if the MAC is already used somewhere), or add a new entry.

Basically your entry looks like this:

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="AA:BB:CC:DD:EE:FF", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

Copy this line, adjust your MAC address and the interface name (if you do not want to call your IF (interface) 'eth0'). All the other entries remain untouched, if the IF was already called ethX or something alike before.

approach 2

This is almost the same as above: just edit the /etc/udev/rules.d/70-persistent-net.rules file to match your MAC address and set the name your IF should get at NAME. But depending on the name your IF had before, you have to change the KERNEL attribute.

If e.g. your NIC's IF was called p2p1 before, adjust the KERNEL flag to KERNEL=="p2p*"

It could be that the KERNEL flag can be omitted; I can't provide an answer on this for now, as this post is a rewrite from memory and some leftover links in my browser. If I get around to testing, this post will be updated.
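
Either way, the rules only apply once the device events are processed again. A reboot is the safe bet, but reloading the udev rules can be tried first (an interface that is already up may still need a reboot or re-plug to be renamed):

udevadm control --reload-rules
udevadm trigger --subsystem-match=net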

biosdevnames explained

Prefixes:

en -- ethernet
sl -- serial line IP (slip)
wl -- wlan
ww -- wwan

Name types:

o<index> -- on-board device index number
[P<domain>]p<bus>s<slot>[f<function>] -- PCI geographical location

If your IF's are named p2p1 or something, this means the NIC is plugged into slot 2 of your pci bus and the first rj45 slot is used. If it is a dual-port NIC, the second IF would of course be called p2p2.

Copy paste from the systemd source from here from where the information above was taken:

/*
 * Predictable network interface device names based on:
 *  - firmware/bios-provided index numbers for on-board devices
 *  - firmware-provided pci-express hotplug slot index number
 *  - physical/geographical location of the hardware
 *  - the interface's MAC address
 *
 * http://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames
 *
 * Two character prefixes based on the type of interface:
 *   en -- ethernet
 *   sl -- serial line IP (slip)
 *   wl -- wlan
 *   ww -- wwan
 *
 * Type of names:
 *   b<number>                             -- BCMA bus core number
 *   ccw<name>                             -- CCW bus group name
 *   o<index>                              -- on-board device index number
 *   s<slot>[f<function>][d<dev_port>]     -- hotplug slot index number
 *   x<MAC>                                -- MAC address
 *   [P<domain>]p<bus>s<slot>[f<function>][d<dev_port>]
 *                                         -- PCI geographical location
 *   [P<domain>]p<bus>s<slot>[f<function>][u<port>][..][c<config>][i<interface>]
 *                                         -- USB port number chain
 *
 * All multi-function PCI devices will carry the [f<function>] number in the
 * device name, including the function 0 device.
 *
 * When using PCI geography, The PCI domain is only prepended when it is not 0.
 *
 * For USB devices the full chain of port numbers of hubs is composed. If the
 * name gets longer than the maximum number of 15 characters, the name is not
 * exported.
 * The usual USB configuration == 1 and interface == 0 values are suppressed.
 *
 * PCI ethernet card with firmware index "1":
 *   ID_NET_NAME_ONBOARD=eno1
 *   ID_NET_NAME_ONBOARD_LABEL=Ethernet Port 1
 *
 * PCI ethernet card in hotplug slot with firmware index number:
 *   /sys/devices/pci0000:00/0000:00:1c.3/0000:05:00.0/net/ens1
 *   ID_NET_NAME_MAC=enx000000000466
 *   ID_NET_NAME_PATH=enp5s0
 *   ID_NET_NAME_SLOT=ens1
 *
 * PCI ethernet multi-function card with 2 ports:
 *   /sys/devices/pci0000:00/0000:00:1c.0/0000:02:00.0/net/enp2s0f0
 *   ID_NET_NAME_MAC=enx78e7d1ea46da
 *   ID_NET_NAME_PATH=enp2s0f0
 *   /sys/devices/pci0000:00/0000:00:1c.0/0000:02:00.1/net/enp2s0f1
 *   ID_NET_NAME_MAC=enx78e7d1ea46dc
 *   ID_NET_NAME_PATH=enp2s0f1
 *
 * PCI wlan card:
 *   /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0/net/wlp3s0
 *   ID_NET_NAME_MAC=wlx0024d7e31130
 *   ID_NET_NAME_PATH=wlp3s0
 *
 * USB built-in 3G modem:
 *   /sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.6/net/wwp0s29u1u4i6
 *   ID_NET_NAME_MAC=wwx028037ec0200
 *   ID_NET_NAME_PATH=wwp0s29u1u4i6
 *
 * USB Android phone:
 *   /sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.0/net/enp0s29u1u2
 *   ID_NET_NAME_MAC=enxd626b3450fb5
 *   ID_NET_NAME_PATH=enp0s29u1u2
 */
grub2: Windows boot entry fix
posted on 2015-02-10 17:57:13

If you have linux along with windows installed on different partitions, and you somehow manage to lose your windows boot entry, try the following.

setup

Either edit /etc/grub.d/40_custom, or if the file does not exist, create a new one. Prefix it with a number you like, it will let grub decide where the boot menu entry will appear. If put into the 40_custom, it will appear on the end of the boot menu.

There add this:

menuentry "Windows" {
set root=(hd0,3)
chainloader +1
}

Then issue the command update-grub in the shell (which should be aliased to update-grub2, in case you wondered), to update the /boot/grub/grub.cfg. Else your changes will not have any effect.

It is however EXTREMELY likely, that hd0,3 from above will not work in your case. More on this later on.

So reboot, and try booting the new entry.

explanation

As menuentry chose whatever you like, that is just the string which will appear in the menu.

set root=... decides which partition will be loaded.
chainloader +1 tells grub to chainload the next bootloader from there if one is present, starting on the first block of the partition, IIRC, no warranty on that. It is basically the same as chainloader 0+1; for more info on the block list syntax see here.

If it won't work, reboot again and press 'e' to edit the boot entry. Choose another harddisk or partition until you 'hit ground'. (hd0,1) is for example the first harddisk with its first partition, which will be tried. From there, the numbers are simply incremented. If this is information overflow, it is more condensed in the grub manual. If you use NTFS on the windows partition, you might also try the insmod chain and insmod ntfs commands from the last link.

troubleshooting and finding the correct harddisk and partition

boot a linux for setup inspection

Use a linux (either the installed one if it still boots, or a boot stick) and have a look at your existing partitions via either fdisk -l or parted, if you want to do further troubleshooting. An idea would be to search for the partition that was intended to be the windows boot partition (hint: it should be around 100MB in size) and remember its number, it might help you.

use grub to identify the partitions

Also you can use grub's shell to list all possible harddisk/partition combos. Just boot into grub2, hit 'c' to enter the console and do ls.

This will show you something like this:

grub> ls
(hd0,msdos6) (hd0,msdos5) (hd0,msdos4) (hd0,msdos3) (hd0,msdos2)
(hd0,msdos1)
grub>

These are all the partitions you can try, either by editing the grub configs in /etc/grub.d, or when editing the menu entries directly when in grub and hitting 'e' when having chosen your just created entry.

Date in filename
posted on 2015-02-10 13:22:40

For documentation (read: work) purposes it's often necessary to include a date in the filename.

In bash there exist several flags for the date command which come to help. The command itself is easiest used like this:

$ cp <filename>.<ext> <filename>$(date +<FLAGS>).<ext>

As <FLAGS> you usually need: (in Europe)

[sjas@ctr-014 ~]% date +%Y%m%d
20150210

[sjas@ctr-014 ~]% date +%Y%m%d%H%M
201502101337
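
Following that pattern, a dated copy of a (hypothetical) report.txt boils down to:

cp report.txt report$(date +%Y%m%d).txt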
There was an error during the CUPS operation: 'cups-authorization-canceled'.
posted on 2015-02-06 13:17:39

While trying to add a new printer, the above mentioned error popped up. Why adding a printer worked without problems in the past, I cannot say for sure. Maybe CUPS' security settings got adjusted during an update.

Well, solution was to go into /etc/cups/cups-files.conf and look up which user groups are listed somewhere here:

# Administrator user group
SystemGroup lpadmin

So, now either add your user's own group, or add your user to lpadmin or whatever group is already listed. Afterwards do a service cups restart and be glad.

Ubuntu 14.04 LAMP issues
posted on 2015-02-05 13:53:38

apache

Apache (as of version 2.4 now) has these issues.

  1. vhosts under sites-enabled have to end with .conf, unless you fix the global setting that decides which files are imported/loaded when restarting.
  2. *:80 in your virtualhost configuration should have the domain instead of the * wildcard.

mysql

Here two variable names got changed:

key_buffer => key_buffer_size
myisam-recover => myisam_recover_options

To fix the warnings, a workaround is to set the new variable names to the values of the current settings inside /etc/mysql/my.cnf.
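
A sketch of what the adjusted section could look like (the values are placeholders, keep whatever your old config had):

[mysqld]
key_buffer_size         = 16M
myisam_recover_options  = BACKUP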

Wine on CentOS 7
posted on 2015-02-01 00:37:41

If you happen to run into trouble while running wine, such as it telling you 'malformed EXE' or something, don't bother troubleshooting it. Just install playonlinux:

sudo wget -O /etc/yum.repos.d/playonlinux.repo http://rpm.playonlinux.com/playonlinux.repo
sudo yum install -y playonlinux

Afterwards run playonlinux, forget about the 32bit openGL error message. Set up your wine (32 or 64 bit, depending on what you need) under Tools >> Manage Wine Version, open a console and just run the .exe you need.

Pause bash shell
posted on 2015-01-23 13:08:32

Ever had a long-running command with a lot of output, where you just caught a glimpse of something and need a closer look, but the shell won't let you scroll (due to new printouts appearing all the time)?

Use Ctrl-s to pause (and you can scroll up all you want, in case your terminal emulator will let you).
Afterwards Ctrl-q will 'unpause' it again.

The shell is not really put on hold, just the visual updating of the standard output is paused. After the unpausing, everything that has happened in the meantime will become updated again.

Installing Linux on a Macbook
posted on 2015-01-23 13:00:33

During booting your (U)EFI capable USB stick, press ALT. That way you can boot your stick.

A simple CD however will work directly. Do as you like, this took me literally years to find out.

bash completion shortcuts
posted on 2015-01-23 11:23

The bash shell has more shortcuts than just the ones for emacs- or vi-style movement.

The other interesting completions are:

C-x /     filename completion
C-x $     bash variable completion
C-x @     hostname completion
C-x !     command completion

Meta-~ username completion
Meta-/ filename completion
Meta-$ bash variable completion
Meta-@ hostname completion
Meta-! command completion
Krypton Walkthrough
posted on 2015-01-22 03:24:30

http://overthewire.org/wargames/krypton/ is just as much fun as bandit or leviathan, which I covered in earlier posts here or here.

prerequisites

Just go and have a look at the bandit post mentioned above.

solutions

Here is what I have found by now.

level 0

[root@jerrylee /home/jl]# echo "S1JZUFRPTklTR1JFQVQ=" | base64 -d
KRYPTONISGREAT

This one is done locally only.

level 1

Here you have to login with 'krypton1'. In case you have already been on the server, you can see this here:

leviathan7@melinda:~$ grep krypton /etc/passwd
krypton1:x:8001:8001:krypton level 1:/home/krypton1:/bin/bash
krypton2:x:8002:8002:krypton level 2:/home/krypton2:/bin/bash
krypton3:x:8003:8003:krypton level 3:/home/krypton3:/bin/bash
krypton4:x:8004:8004:krypton level 4:/home/krypton4:/bin/bash
krypton5:x:8005:8005:krypton level 5:/home/krypton5:/bin/bash
krypton6:x:8006:8006:krypton level 6:/home/krypton6:/bin/bash
krypton7:x:8007:8007:krypton level 7:/home/krypton7:/bin/bash
leviathan7@melinda:~$

So, after connecting, first let's see where our file is:

krypton1@melinda:~$ find / -iname '*krypton2*' | less

In less, do again the &krypton2 + Enter trick:

/games/krypton/krypton1/krypton2
/games/krypton/krypton2
/home/krypton2
~
~
~
~
~
~
~
& (END)

krypton1@melinda:~$ cat /games/krypton/krypton1/krypton2 | tr 'A-Za-z' 'N-ZA-Mn-za-m'
LEVEL TWO PASSWORD ROTTEN

level 2

krypton2@melinda:~$ ls -lah
total 20K
drwxr-xr-x   2 root root 4.0K Nov 14 10:32 .
drwxr-xr-x 167 root root 4.0K Jan 12 17:44 ..
-rw-r--r--   1 root root  220 Apr  9  2014 .bash_logout
-rw-r--r--   1 root root 3.6K Apr  9  2014 .bashrc
-rw-r--r--   1 root root  675 Apr  9  2014 .profile
krypton2@melinda:~$ cd /games/krypton/
krypton2@melinda:/games/krypton$ ls
krypton1  krypton2  krypton3  krypton4  krypton5  krypton6
krypton2@melinda:/games/krypton$ cd krypton2
krypton2@melinda:/games/krypton/krypton2$ ls -lah
total 15K
drwxr-xr-x 2 root     root     1.0K Nov 14 10:32 .
drwxr-xr-x 8 root     root     1.0K Nov 14 10:32 ..
-rw-r----- 1 krypton2 krypton2 1.1K Nov 14 10:32 README
-rwsr-x--- 1 krypton3 krypton2 8.8K Nov 14 10:32 encrypt
-rw-r----- 1 krypton3 krypton3   27 Nov 14 10:32 keyfile.dat
-rw-r----- 1 krypton2 krypton2   13 Nov 14 10:32 krypton3

So far, so nice. But the encrypt file does not work due to file permissions, it seems.

Let's hack up a really, really wacky bash script:

#!/bin/bash

## basically this converts the chars to their ascii code and back
## this is likely not the best solution, but everything else would have been even worse

## first read the file contents into an array
a=0
while read -n1 j
do
    ((a++))
    current[$a]=$(LC_CTYPE=C printf '%d ' "'$j")
done < <( cat ./krypton3 ) ## HERE PROCESS SUBSTITUTION IS NEEDED!
echo

## now iterate over the array we created and increment each item by 1
for i in {1..25}
do
    echo "OFFSET BY "${i}
    for l in $(seq 1 $((a-1)))
    do
        ## here is the most important part:
        ## since 'A' is 65 in ascii, subtract 64
        ## such that 'A' becomes '1', and 'Z' becomes '26'
        ## then increment by one, take the modulo 26
        ## (else you have numbers bigger than 26)
        ## and afterwards add 64, so the ascii conversion can take place again
        ## the 'mod 26' trick works since we assume the pw is written in CAPSLOCK
        current[$l]=$(( $(( $((  $(( current[$l] - 64 )) + 1 )) % 26 )) + 64 ))
    done

    ## now print the current result by iterating again and converting to characters again
    for ((b=0; b<${#current[@]}; b++))
    do
        printf "\x$(printf %x ${current[$b]})"
    done
    echo
    echo
done

Uah, this was ugly. I did that just as a proof of concept; use a proper scripting language in case you want to do it yourself. But I digress.

Lets just use this monster as a one-liner:

krypton2@melinda:/games/krypton/krypton2$ a=0; while read -n1 j; do ((a++)); current[$a]=$(LC_CTYPE=C printf '%d ' "'$j"); done < <( cat ./krypton3 ); for i in {1..25}; do echo "OFFSET BY "${i}; for l in $(seq 1 $((a-1))); do current[$l]=$(( $(( $((  $(( current[$l] - 64 )) + 1 )) % 26 )) + 64 )); done; for ((b=0; b<${#current[@]}; b++)); do printf "\x$(printf %x ${current[$b]})"; done; echo; echo; done
OFFSET BY 1
PNRFNEVFRNFL

OFFSET BY 2
QOSGOFWGSOGM

OFFSET BY 3
RPTHPGXHTPHN

OFFSET BY 4
SQUIQHYIUQIO

OFFSET BY 5
TRVJRI@JVRJP

OFFSET BY 6
USWKSJAKWSKQ

OFFSET BY 7
VTXLTKBLXTLR

OFFSET BY 8
WUYMULCMYUMS

OFFSET BY 9
XV@NVMDN@VNT

OFFSET BY 10
YWAOWNEOAWOU

OFFSET BY 11
@XBPXOFPBXPV

OFFSET BY 12
AYCQYPGQCYQW

OFFSET BY 13
B@DR@QHRD@RX

OFFSET BY 14
CAESARISEASY

OFFSET BY 15
DBFTBSJTFBT@

OFFSET BY 16
ECGUCTKUGCUA

OFFSET BY 17
FDHVDULVHDVB

OFFSET BY 18
GEIWEVMWIEWC

OFFSET BY 19
HFJXFWNXJFXD

OFFSET BY 20
IGKYGXOYKGYE

OFFSET BY 21
JHL@HYP@LH@F

OFFSET BY 22
KIMAI@QAMIAG

OFFSET BY 23
LJNBJARBNJBH

OFFSET BY 24
MKOCKBSCOKCI

OFFSET BY 25
NLPDLCTDPLDJ

Looks like offset '14' is our winner:

CAESARISEASY

This would have been a lot easier if the encrypt binary had just worked...
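
For the record, the same brute force can be done much more compactly with tr and bash substring expansion. A minimal sketch, assuming the ciphertext in ./krypton3 is uppercase only:

ALPHA=ABCDEFGHIJKLMNOPQRSTUVWXYZ
for i in $(seq 1 25); do
    echo "OFFSET BY $i"
    tr "$ALPHA" "${ALPHA:$i}${ALPHA:0:$i}" < ./krypton3
    echo
done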

level 3

krypton3@melinda:~$ ls -alhF
total 20K
drwxr-xr-x   2 root root 4.0K Nov 14 10:32 ./
drwxr-xr-x 167 root root 4.0K Jan 12 17:44 ../
-rw-r--r--   1 root root  220 Apr  9  2014 .bash_logout
-rw-r--r--   1 root root 3.6K Apr  9  2014 .bashrc
-rw-r--r--   1 root root  675 Apr  9  2014 .profile
krypton3@melinda:~$ cd /games/krypton/krypton
krypton1/ krypton2/ krypton3/ krypton4/ krypton5/ krypton6/
krypton3@melinda:~$ cd /games/krypton/krypton3
krypton3@melinda:/games/krypton/krypton3$ ls -lah
total 12K
drwxr-xr-x 2 root     root     1.0K Nov 14 10:32 .
drwxr-xr-x 8 root     root     1.0K Nov 14 10:32 ..
-rw-r----- 1 krypton3 krypton3   56 Nov 14 10:32 HINT1
-rw-r----- 1 krypton3 krypton3   37 Nov 14 10:32 HINT2
-rw-r----- 1 krypton3 krypton3  785 Nov 14 10:32 README
-rw-r----- 1 krypton3 krypton3 1.6K Nov 14 10:32 found1
-rw-r----- 1 krypton3 krypton3 2.1K Nov 14 10:32 found2
-rw-r----- 1 krypton3 krypton3  560 Nov 14 10:32 found3
-rw-r----- 1 krypton3 krypton3   42 Nov 14 10:32 krypton4

Feeding the contents of 'found1' to 'found3' into frequency analysis tools found on the web, I get this (the last column is the letter frequency order of the English language, from most to least frequent; a similar count can also be reproduced with plain shell tools, see the sketch below the table):

 s : 155 s : 243 s : 58   |    e
 c : 107 q : 186 q : 48   |    t
 q : 106 j : 158 j : 41   |    a
 j : 102 n : 135 g : 35   |    o
 u : 100 u : 130 c : 34   |    i
 b : 87  b : 129 n : 31   |    n
 g : 81  d : 119 b : 30   |    s
 n : 74  g : 111 u : 27   |    h
 d : 69  c : 86  d : 22   |    r
 z : 57  w : 66  v : 21   |    d
 v : 56  z : 59  z : 16   |    l
 w : 47  v : 53  w : 16   |    c
 y : 42  m : 45  e : 13   |    u
 t : 32  t : 37  m : 12   |    m
 x : 29  e : 34  k : 12   |    w
 m : 29  y : 33  x : 9    |    f
 l : 27  x : 33  y : 9    |    g
 k : 25  k : 30  a : 9    |    y
 a : 20  l : 27  t : 6    |    p
 e : 17  a : 26  l : 6    |    b
 f : 11  i : 14  f : 5    |    v
 o : 7   f : 12  i : 3    |    k
 h : 2   o : 3   o : 2    |    j
 i : 2   h : 2   p : 1    |    x
 r : 1   r : 2   r : 1    |    q
 p : 0   p : 1   h : 0    |    z

 SCQJUBGNDZVWYTXMLKAEFOHIRP
 SQJNUBDGCWZVMTEYXKLAIFOHRP
 SQJGCNBUDVZWEMKXYATLFIOPRH

 ETAOINSHRDLCUMWFGYPBVKJXQZ
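
For reference, a similar letter count can be produced locally with plain shell tools. A minimal sketch, assuming the ciphertext in found1 is lowercase as here:

grep -o '[a-z]' found1 | sort | uniq -c | sort -rn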

Using this on the server:

krypton3@melinda:/games/krypton/krypton3$ cat krypton4 | tr [SCQJUBGNDZVWYTXMLKAEFOHIRP] [ETAOINSHRDLCUMWFGYPBVKJXQZ]
krypton3@melinda:/games/krypton/krypton3$ cat krypton4 | tr [SCQJUBGNDZVWYTXMLKAEFOHIRP] [ETAOINSHRDLCUMWFGYPBVKJXQZ]; echo
YELLC NSEOR ELEXE LWNFH UAIIY NHCTI PHFOE
krypton3@melinda:/games/krypton/krypton3$ cat krypton4 | tr [SQJNUBDGCWZVMTEYXKLAIFOHRP] [ETAOINSHRDLCUMWFGYPBVKJXQZ]; echo
YECCD NHEAS ECEVE CGNUO FTIIY NODRI BOUAE
krypton3@melinda:/games/krypton/krypton3$ cat krypton4 | tr [SQJGCNBUDVZWEMKXYATLFIOPRH] [ETAOINSHRDLCUMWFGYPBVKJXQZ]; echo
WEDDC SOEAR EDEKE DFSMN GTHHW SNCIH YNMAE

Well, this could be better. But by now I have lost my motivation, so this stops here. If I continue, the remaining steps will be added to this post.

Leviathan Walkthrough
posted on 2015-01-22 01:38:57

http://overthewire.org/wargames/leviathan/ is just as much fun as bandit, which I covered in an earlier post here.

prerequisites

Just go and have a look at the bandit post mentioned above.

solutions

Here is what I have found by now.

level 0

leviathan0@melinda:~$ ls -alh
total 24K
drwxr-xr-x   3 root       root       4.0K Nov 14 10:32 .
drwxr-xr-x 167 root       root       4.0K Jan 12 17:44 ..
drwxr-x---   2 leviathan1 leviathan0 4.0K Nov 14 10:32 .backup
-rw-r--r--   1 root       root        220 Apr  9  2014 .bash_logout
-rw-r--r--   1 root       root       3.6K Apr  9  2014 .bashrc
-rw-r--r--   1 root       root        675 Apr  9  2014 .profile
leviathan0@melinda:~$ cd .backup/
leviathan0@melinda:~/.backup$ ls -alh
total 140K
drwxr-x--- 2 leviathan1 leviathan0 4.0K Nov 14 10:32 .
drwxr-xr-x 3 root       root       4.0K Nov 14 10:32 ..
-rw-r----- 1 leviathan1 leviathan0 131K Nov 14 10:32 bookmarks.html
leviathan0@melinda:~/.backup$ grep leviathan1 *
<DT><A HREF="http://leviathan.labs.overthewire.org/passwordus.html | This will be fixed later, the password for leviathan1 is rioGegei8m" ADD_DATE="1155384634" LAST_CHARSET="ISO-8859-1" ID="rdf:#$2wIU71">password to leviathan1</A>

The password is rioGegei8m, as can be seen in the last line.

level 1

ltrace, which traces library calls, is the key here.

leviathan1@melinda:~$ ls -alhF
total 28K
drwxr-xr-x   2 root       root       4.0K Nov 14 10:32 ./
drwxr-xr-x 167 root       root       4.0K Jan 12 17:44 ../
-rw-r--r--   1 root       root        220 Apr  9  2014 .bash_logout
-rw-r--r--   1 root       root       3.6K Apr  9  2014 .bashrc
-rw-r--r--   1 root       root        675 Apr  9  2014 .profile
-r-sr-x---   1 leviathan2 leviathan1 7.4K Nov 14 10:32 check*
leviathan1@melinda:~$ ./check 
password: 


Wrong password, Good Bye ...
leviathan1@melinda:~$ ltrace ./check 
__libc_start_main(0x804852d, 1, 0xffffd784, 0x80485f0 <unfinished ...>
printf("password: ")                             = 10
getchar(0x8048680, 47, 0x804a000, 0x8048642password: 
)     = 10
getchar(0x8048680, 47, 0x804a000, 0x8048642
)     = 10
getchar(0x8048680, 47, 0x804a000, 0x8048642
)     = 10
strcmp("\n\n\n", "sex")                          = -1
puts("Wrong password, Good Bye ..."Wrong password, Good Bye ...
)             = 29
+++ exited (status 0) +++
leviathan1@melinda:~$ ./check
password: sex
$ id
uid=12001(leviathan1) gid=12001(leviathan1) euid=12002(leviathan2) groups=12002(leviathan2),12001(leviathan1)
$ cd /                  
$ pwd
/
$ find . -iname "*leviathan*2*" | less

Then in less, use & to show just lines matching your search content, and type leviathan2 and hit enter, which will give you this:

./etc/leviathan_pass/leviathan2
./home/leviathan2
~
~
~
~
~
~
~
~
~
& (END)

So:

$ cat ./etc/leviathan_pass/leviathan2
ougahZi8Ta

level 2

:(

leviathan2@melinda:~$ ls -alh
total 28K
drwxr-xr-x   2 root       root       4.0K Nov 14 10:32 .
drwxr-xr-x 167 root       root       4.0K Jan 12 17:44 ..
-rw-r--r--   1 root       root        220 Apr  9  2014 .bash_logout
-rw-r--r--   1 root       root       3.6K Apr  9  2014 .bashrc
-rw-r--r--   1 root       root        675 Apr  9  2014 .profile
-r-sr-x---   1 leviathan3 leviathan2 7.4K Nov 14 10:32 printfile
leviathan2@melinda:~$ ./printfile 
*** File Printer ***
Usage: ./printfile filename
leviathan2@melinda:~$ mkdir -p /tmp/sjas/
leviathan2@melinda:~$ ln -s /etc/leviathan_pass/leviathan3 /tmp/sjas/lvl2
leviathan2@melinda:~$ ls -alh /tmp/sjas/lvl2 
lrwxrwxrwx 1 leviathan2 leviathan2 30 Jan 22 01:15 /tmp/sjas/lvl2 -> /etc/leviathan_pass/leviathan3
leviathan2@melinda:~$ touch /tmp/sjas/asdf\ lvl2
leviathan2@melinda:~$ ./printfile /tmp/sjas/lvl2\ asdf 
You cant have that file...
leviathan2@melinda:~$ touch /tmp/sjas/lvl2\ asdf
leviathan2@melinda:~$ ./printfile /tmp/sjas/lvl2\ asdf
Ahdiemoo1j
/bin/cat: asdf: No such file or directory

And we get the password: Ahdiemoo1j

This is a security flaw. But neither strace nor this here...

leviathan2@melinda:~$ ltrace ./printfile /tmp/sjas/lvl2\ asdf
__libc_start_main(0x804852d, 2, 0xffffd754, 0x8048600 <unfinished ...>
access("/tmp/sjas/lvl2 asdf", 4)                 = 0
snprintf("/bin/cat /tmp/sjas/lvl2 asdf", 511, "/bin/cat %s", "/tmp/sjas/lvl2 asdf") = 28
system("/bin/cat /tmp/sjas/lvl2 asdf"/bin/cat: /tmp/sjas/lvl2: Permission denied
/bin/cat: asdf: No such file or directory
 <no return ...>
 --- SIGCHLD (Child exited) ---
 <... system resumed> )                           = 256
 +++ exited (status 0) +++

... helped my understanding much.

Using the space in the filename is what makes this work; with only the symlink it wouldn't. The reason: access() checks the whole string '/tmp/sjas/lvl2 asdf' as one path with the real user id (and that file is ours), but the unquoted string handed to system() makes /bin/cat see two separate arguments, one of which is the symlink to the password file, which cat then reads with the elevated effective user id. I had to google this, as I wasn't smart enough to figure it out by myself.

See https://www.gnu.org/software/libc/manual/html_node/Testing-File-Access.html for more info, if you happen to program C.

level 3

 1  leviathan3@melinda:~$ ls -alh
 2  total 28K
 3  drwxr-xr-x   2 root       root       4.0K Nov 14 10:32 .
 4  drwxr-xr-x 167 root       root       4.0K Jan 12 17:44 ..
 5  -rw-r--r--   1 root       root        220 Apr  9  2014 .bash_logout
 6  -rw-r--r--   1 root       root       3.6K Apr  9  2014 .bashrc
 7  -rw-r--r--   1 root       root        675 Apr  9  2014 .profile
 8  -r-sr-x---   1 leviathan4 leviathan3 7.4K Nov 14 10:32 level3
 9  leviathan3@melinda:~$ ./level3 
10  Enter the password> 
11  bzzzzzzzzap. WRONG
12  leviathan3@melinda:~$ ltrace ./level3 
13  __libc_start_main(0x8048450, 1, 0xffffd784, 0x8048600 <unfinished ...>
14  __printf_chk(1, 0x80486ca, 0x804860b, 0xf7fca000) = 20
15  fgets(Enter the password>                
16  "\n", 256, 0xf7fcac20)                     = 0xffffd5bc
17  puts("bzzzzzzzzap. WRONG"bzzzzzzzzap. WRONG
18  )                       = 19
19  +++ exited (status 0) +++
20  leviathan3@melinda:~$ strings ./level3 
21  /lib/ld-linux.so.2
22  libc.so.6
23  _IO_stdin_used
24  __printf_chk
25  puts
26  __stack_chk_fail
27  stdin
28  fgets
29  system
30  __libc_start_main
31  __gmon_start__
32  GLIBC_2.3.4
33  GLIBC_2.4
34  GLIBC_2.0
35  PTRhp
36  QVhP
37  [^_]
38  snlprintf
39  [You've got shell]!
40  /bin/sh
41  bzzzzzzzzap. WRONG
42  Enter the password> 
43  ;*2$",
44  secret
45  leviathan3@melinda:~$ ./level3 
46  Enter the password> snlprintf
47  [You've got shell]!
48  $ id
49  uid=12003(leviathan3) gid=12003(leviathan3) euid=12004(leviathan4) groups=12004(leviathan4),12003(leviathan3)
50  $ cat /etc/leviathan_pass/leviathan4
51  vuH0coox6m

Line 37 should be the if-clause or something, 38 the string to test against. Lines 39 and 40 are the branch for true, whereas 41 is the branch for false?

So much for some wild guesswork.

level 4

leviathan4@melinda:~$ ls -lahF
total 24K
drwxr-xr-x   3 root root       4.0K Nov 14 10:32 ./
drwxr-xr-x 167 root root       4.0K Jan 12 17:44 ../
-rw-r--r--   1 root root        220 Apr  9  2014 .bash_logout
-rw-r--r--   1 root root       3.6K Apr  9  2014 .bashrc
-rw-r--r--   1 root root        675 Apr  9  2014 .profile
dr-xr-x---   2 root leviathan4 4.0K Nov 14 10:32 .trash/
leviathan4@melinda:~$ cd .trash/
leviathan4@melinda:~/.trash$ ls -lahF
total 16K
dr-xr-x--- 2 root       leviathan4 4.0K Nov 14 10:32 ./
drwxr-xr-x 3 root       root       4.0K Nov 14 10:32 ../
-r-sr-x--- 1 leviathan5 leviathan4 7.3K Nov 14 10:32 bin*
leviathan4@melinda:~/.trash$ ./bin 
01010100 01101001 01110100 01101000 00110100 01100011 01101111 01101011 01100101 01101001 00001010 
leviathan4@melinda:~/.trash$ ltrace ./bin 
__libc_start_main(0x80484cd, 1, 0xffffd754, 0x80485c0 <unfinished ...>
fopen("/etc/leviathan_pass/leviathan5", "r")      = 0
+++ exited (status 255) +++
leviathan4@melinda:~/.trash$ for i in `./bin`; do echo "ibase=2;$i" | bc; done
84
105
116
104
52
99
111
107
101
105
10
leviathan4@melinda:~/.trash$ for i in `./bin`; do j=$(echo "ibase=2;$i" | bc); printf "\x$(printf %x $j)"; done
Tith4cokei

This was some ugly stuff at the end. Once you see the binary values and convert them to decimals, the numbers look like ASCII character codes. The decoding printf statement is from stackoverflow.com.
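
An alternative sketch for the same conversion, using bash's base-2 arithmetic expansion instead of bc (it assumes the same ./bin output as above):

for byte in $(./bin); do printf "\\$(printf '%03o' "$((2#$byte))")"; done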

level 5

leviathan5@melinda:~$ ls -lahF
total 28K
drwxr-xr-x   2 root       root       4.0K Nov 14 10:32 ./
drwxr-xr-x 167 root       root       4.0K Jan 12 17:44 ../
-rw-r--r--   1 root       root        220 Apr  9  2014 .bash_logout
-rw-r--r--   1 root       root       3.6K Apr  9  2014 .bashrc
-rw-r--r--   1 root       root        675 Apr  9  2014 .profile
-r-sr-x---   1 leviathan6 leviathan5 7.5K Nov 14 10:32 leviathan5*
leviathan5@melinda:~$ ./leviathan5 
Cannot find /tmp/file.log
leviathan5@melinda:~$ ltrace ./leviathan5 
__libc_start_main(0x80485ed, 1, 0xffffd774, 0x8048690 <unfinished ...>
fopen("/tmp/file.log", "r")                      = 0
puts("Cannot find /tmp/file.log"Cannot find /tmp/file.log
)                = 26
exit(-1 <no return ...>
+++ exited (status 255) +++
leviathan5@melinda:~$ ln -s /etc/leviathan_pass/leviathan6 /tmp/file.log
leviathan5@melinda:~$ ./leviathan5 
UgaoFee4li

No explanation here, as this one was rather easy.

level 6

leviathan6@melinda:~$ ls -lahF
total 28K
drwxr-xr-x   2 root       root       4.0K Nov 14 10:32 ./
drwxr-xr-x 167 root       root       4.0K Jan 12 17:44 ../
-rw-r--r--   1 root       root        220 Apr  9  2014 .bash_logout
-rw-r--r--   1 root       root       3.6K Apr  9  2014 .bashrc
-rw-r--r--   1 root       root        675 Apr  9  2014 .profile
-r-sr-x---   1 leviathan7 leviathan6 7.4K Nov 14 10:32 leviathan6*
leviathan6@melinda:~$ ./leviathan6 
usage: ./leviathan6 <4 digit code>
leviathan6@melinda:~$ ltrace ./leviathan6 
__libc_start_main(0x804850d, 1, 0xffffd774, 0x8048590 <unfinished ...>
printf("usage: %s <4 digit code>\n", "./leviathan6"usage: ./leviathan6 <4 digit code>
) = 35
exit(-1 <no return ...>
+++ exited (status 255) +++
leviathan6@melinda:~$ for i in `seq 0000 9999`; do echo $i; ./leviathan6 $i; done
Wrong
0
Wrong
1
Wrong
2
Wrong
3
Wrong
4


... this takes a while.


Wrong
7120
Wrong
7121
Wrong
7122
Wrong
7123
$ cat /etc/leviathan_pass/leviathan7
ahy7MaeBo9

Brute-forcing this with a bash one-liner is the easiest way to find '7123'. Cat the password file once you have the leviathan7 shell and you are done.
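
One small note: plain seq does not zero-pad, so the codes below 1000 are sent as one to three digit numbers. In case the binary compared the argument as a string of exactly four characters, seq -w would be the safer variant; a hedged sketch:

for i in `seq -w 0000 9999`; do echo $i; ./leviathan6 $i; done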

level 7

leviathan7@melinda:~$ ls -lahF
total 24K
drwxr-xr-x   2 root       root       4.0K Nov 14 10:32 ./
drwxr-xr-x 167 root       root       4.0K Jan 12 17:44 ../
-rw-r--r--   1 root       root        220 Apr  9  2014 .bash_logout
-rw-r--r--   1 root       root       3.6K Apr  9  2014 .bashrc
-rw-r--r--   1 root       root        675 Apr  9  2014 .profile
-r--r-----   1 leviathan7 leviathan7  178 Nov 14 10:32 CONGRATULATIONS
leviathan7@melinda:~$ cat CONGRATULATIONS 
Well Done, you seem to have used a *nix system before, now try something more serious.
(Please don't post writeups, solutions or spoilers about the games on the web. Thank you!)
leviathan7@melinda:~$ 

Ooooops.

Linux performance observability tools
posted on 2015-01-17 18:50:42

This is an alphabetical list which will serve as a reminder, what programs are there to be looked up for me. :)

All this started when I stumbled across a picture on the web, which, as I later found out, was from a presentation by Brendan Gregg at LinuxCon14. It was called Linux Performance Tools and it's worth its weight in gold, platinum and whatever material you consider highly valuable. The slides are here, get your copy and study them. If you want some serious linux sysadmin skills, there is no possible excuse for not doing it.

Seriously.

DO. IT. NOW.

Another two incentives can be found here and here. These may only use a small portion of the programs mentioned below, but either walk the extra mile or raise your hands in defeat once things get tough; everybody gets to choose their own path.

Alphabetically sorted:

blktrace (8)         - generate traces of the i/o traffic on block devices
dstat (1)            - versatile tool for generating system resource statistics
dtrace (1)           - Dtrace compatibile user application static probe generation tool.
ebpf: nothing appropriate.
ethtool (8)          - query or control network driver and hardware settings
free (1)             - Display amount of free and used memory in the system
ftrace: nothing appropriate.
iostat (1)           - Report Central Processing Unit (CPU) statistics and input/output statistics for devices and partitions.
iotop (8)            - simple top-like I/O monitor
ip (8)               - show / manipulate routing, devices, policy routing and tunnels
iptraf (8)           - Interactive Colorful IP LAN Monitor
ktap: nothing appropriate.
lldptool (8)         - manage the LDP settings and status of lldpad
lsof (8)             - list open files
ltrace (1)           - A library call tracer
lttng: nothing appropriate.
mpstat (1)           - Report processors related statistics.
netstat (8)          - Print network connections, routing tables, interface statistics, masquerade connections, and multicast memberships.
nicstat: nothing appropriate.
pcstat: nothing appropriate.
perf (1)             - Performance analysis tools for Linux
pidstat (1)          - Report statistics for Linux tasks.
/proc: nothing appropriate.
ps (1)               - report a snapshot of the current processes.
rdmsr: nothing appropriate.
sar (1)              - Collect, report, or save system activity information.
slabtop (1)          - display kernel slab cache information in real time
snmpget (1)          - communicates with a network entity using SNMP GET requests
ss (8)               - another utility to investigate sockets
stap (1)             - systemtap script translator/driver
strace (1)           - trace system calls and signals
swapon (8)           - enable/disable devices and files for paging and swapping
sysdig ()            - the definitive system and process troubleshooting tool
tcpdump (8)          - dump traffic on a network
tiptop (1)           - display hardware performance counters for Linux tasks
top (1)              - display Linux processes
uptime (1)           - Tell how long the system has been running.
vmstat (8)           - Report virtual memory statistics

First some more explanations on the ones listed above with "nothing appropriate":

ebpf, ftrace, ktap, lttng, nicstat, pcstat, /proc and rdmsr are usually all too new: either bleeding edge, or at least not available in CentOS 7 or Debian 7. If you grab the sources, you might get along. The manpage headlines are actually from a CentOS 7. (The only exception is sysdig, which I installed via the one-liner its github page provides.) /proc is of course not a command, but refers to the /proc folder linux uses, where a lot of useful information can be found.

Here is another sorting, by 'type' this time. (Maybe this improves readability or makes it easier to remember, who knows. It's still worth trying.)

'stat', 'top', 'trace', 'tap':

dstat      iotop      blktrace     ktap
iostat     slabtop    dtrace       stap
mpstat     tiptop     ftrace
netstat    top        ltrace
nicstat               strace
pcstat
pidstat
vmstat

the rest:

ebpf
ethtool
free
ip
iptraf
lldptool
lsof
lttng
perf
/proc
ps
rdmsr
sar
snmpget
ss
swapon
sysdig
tcpdump
uptime

These were only the 'observability' tools from the presentation. There are also more listed under 'benchmarking' and 'tuning', and maybe 'tracing'.

Just go and read up on them. NOW.

Running bash scripts
posted on 2015-01-16 23:47:45

There are several ways in which bash scripts can be invoked.

Here are the basic ones along with some lesser known ones:

  1. If your script has a proper shebang and is executable:

    ./SCRIPTNAME.sh

  2. If it's missing the x bit:

    bash SCRIPTNAME.sh

  3. Echo commands as they are executed (after expansion):

    bash -x SCRIPTNAME.sh

  4. Syntax checking only, nothing is executed:

    bash -n SCRIPTNAME.sh
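
For completeness, a minimal example of case 1, with the script name being just a placeholder:

cat > SCRIPTNAME.sh << 'EOF'
#!/bin/bash
echo "hello from $0"
EOF
chmod +x SCRIPTNAME.sh
./SCRIPTNAME.sh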

systemd cheat sheet
posted on 2015-01-16 22:54:15
SYSVINIT COMMAND                    SYSTEMD COMMAND


Used to start a service (not reboot persistent)
service <daemon> start               systemctl start <daemon>


Used to stop a service (not reboot persistent)
service <daemon> stop                systemctl stop <daemon>


Used to stop and then start a service
service <daemon> restart             systemctl restart <daemon>


When supported, reloads the config file without interrupting pending operations.
service <daemon> reload              systemctl reload <daemon>


Restarts if the service is already running.
service <daemon> condrestart         systemctl condrestart <daemon>


Tells whether a service is currently running.
service <daemon> status              systemctl status <daemon>


Used to list the services that can be started or stopped
Used to list all the services and other units
ls /etc/rc.d/init.d/                systemctl 
                                    systemctl list-unit-files --type=service
                                    ls /lib/systemd/system/*.service /etc/systemd/system/*.service


Turn the service on, for start at next boot, or other trigger.
chkconfig <daemon> on                systemctl enable <daemon>


Turn the service off for the next reboot, or any other trigger.
chkconfig <daemon> off               systemctl disable <daemon>


Used to check whether a service is configured to start or not in the current environment.
chkconfig <daemon>                   systemctl is-enabled <daemon>


Print a table of services that lists which runlevels each is configured on or off
chkconfig --list                    systemctl list-unit-files --type=service 
                                    ls /etc/systemd/system/*.wants/


Used to list what levels this service is configured on or off
chkconfig <daemon> --list            ls /etc/systemd/system/*.wants/<daemon>.service


Used when you create a new service file or modify any configuration
chkconfig <daemon> --add             systemctl daemon-reload

To be fair, this is just ripped from the fedora manual and I reformatted it a bit.

Another gem might be:

systemd-analyze blame

This will tell you how much time each unit needed during booting.
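
As a quick usage illustration (the daemon name is just an example):

systemctl enable nginx.service    ## start at boot from now on
systemctl start nginx.service     ## start it right now
systemctl status nginx.service    ## check whether it is running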

Linux: kill all processes of a user immediately
posted on 2014-12-29 17:04:21

To kill all processes belonging to a particular user, combine kill with lsof:

kill -9 `lsof -t -u <username>`
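
A hedged alternative that does the same in one step, in case lsof is not at hand:

pkill -9 -u <username>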
Pidgin log location in Linux
posted on 2014-12-04 07:59:15

Pidgin keeps its logs in ~/.purple/logs, in case you should ever feel the need to grep them.

grep: extract different IP addresses from log
posted on 2014-12-03 14:22:16

To easily grep different IP addresses out of a given log file, pipe its contents to this:

| grep -oE '([0-9]{1,3}[\.]){3}[0-9]{1,3}' | sort | uniq

Or, if you also want counts of how often each IP was found, use uniq -c:

| grep -oE '([0-9]{1,3}[\.]){3}[0-9]{1,3}' | sort | uniq -c

For example:

cat /var/log/messages | grep -oE '([0-9]{1,3}[\.]){3}[0-9]{1,3}' | sort | uniq -c

I will not show output here, because I am not in the mood to create a test file which contains no actually used IP's.

Defining shortcuts in xfce
posted on 2014-12-02 00:26:46

Having a terminal pop open on a keypress is not just nice, it's essential.

XFCE:

Applications Menu
Settings
Settings Editor
Channel: xfce4-keyboard-shortcuts

... create a new custom shortcut:
Property : /commands/custom/<Super>r
Type     : String
Value    : xterm

Save, Exit

This defines [Left Win Key] + r to open a new xterm window.

Linux 'less', advantages, disadvantages, keys, options
posted on 2014-12-01 07:46:36

Being the default pager on linux, and thus the tool you usually use to view man pages, less is worth some more attention.

key points

Unlike editors or IDE's (vi, emacs, nano, eclipse), pagers (at least less) do not have to load a file completely into memory and thus are faster when displaying huge files. If you happen to think you will never have to open files bigger than a few KB, what about some error logs? (Once I saw a machine write about one additional GB per minute. In such a case, you should maybe refrain from less and just use something like tail -n1000.)

Also, compared to more, less can also scroll backwards. (!!!)

disadvantages

Pagers cannot edit text. That's what editors are for.

keys

system

q                       quit
h                       show help
= or ctrl-g             show current file name
r                       redraw screen
s                       save file (if input comes from a pipe, not a file)

v                       edit file with $VISUAL or $EDITOR

!<command>              execute <command> in $SHELL
|<mark><command>        pipe text contents between cursor and <mark> to <command>

movement

f or ctrl-f or space    move forward one page
b or ctrl-b             move backward one page 

g                       top of first page
G                       bottom of last page

<count>p                go to <count> percent line in text

d                       forward half a page
u                       backward half a page

m<char>                 mark line with <char>
'<char>                 jump to mark <char>
''                      goto previous position 

search

/<pattern>              search forward for <pattern>
?<pattern>              search backward for <pattern>

n                       next match
N                       previous match

! or ^N                 prior to <pattern>, will search for non-matching lines
^K                      prior to <pattern>, just mark lines but don't move cursor
^R                      don't use regexes for searching

&<pattern>              SHOW ONLY MATCHES (about the best less command ever)

Especially the | hotkey might be interesting.

To pipe the complete buffer content into a file, do this:

1. g (go to top of file)
2. | (start pipe)
3. $ (pipe until the end of buffer)
4. tee [name of logfile].log

Afterwards you should have a new file. This works both with piped input as well as opened files.

options

startup options

All options with dashes can be used while running less, or as startup commands.

I.e.

+F                      same as 'tail -f', but with less
+/<pattern>             open file at <pattern>

+ is needed during startup; from within less it's not needed, except when you want to reset an option to its default value.

search options

-A                      search starts after target line
-g                      highlight only the last search match
-G                      don't highlight search matches at all
-I                      completely case insensitive searching
-i                      smartcase: case-insensitive if search string contains no upper case
-J                      show status column (to mark lines with search results)
                        left of the text, lines with matches are marked.
F                       'Waiting for data... (interrupt to abort)' (means ^C)
                        This is basically a 'tail -f' on steroids!

system options

preface:

`-` prior sets / changes the option
`_` just shows its current state

-e                      quit at EOF
-M                      toggle long prompt (filename, lines, line %)
-m                      toggle medium prompt (line %)
-N                      show line numbers
-Q                      quiet all terminal bells (!!!)
-R                      output raw control chars = SHOW COLORS
-s                      squeeze multiple blank lines into one
-S                      don't wrap long lines

-P                      define custom prompts
                        See last section here about further information.

custom prompts

   %bX      Replaced by the byte offset into the current input file.   
            The  b  is followed by a single character (shown as X above) 
            which specifies the line whose byte offset is to be used.  
            If the character is a "t", the byte  offset of the top line in 
            the display is used, an "m" means use the middle line, a "b" 
            means use the bottom line, a "B" means use the line  just  after  
            the  bottom line, and a "j" means use the "target" line, 
            as specified by the -j option.

   %B       Replaced by the size of the current input file.

   %c       Replaced by the column number of the text appearing in the 
            first column of the screen.

   %dX      Replaced by the page number of a line in the input file.  
            The line to be used is determined by the X, as with the %b option.

   %D       Replaced by the number of pages in the input file, or equivalently, 
            the page number of the last line in the input file.

   %E       Replaced by the name of the editor (from the VISUAL 
            environment variable, or the EDITOR environment variable 
            if VISUAL is  not  defined).
            See the discussion of the LESSEDIT feature below.

   %f       Replaced by the name of the current input file.

   %F       Replaced by the last component of the name of the current input file.

   %i       Replaced by the index of the current file in the list of input files.

   %lX      Replaced by the line number of a line in the input file.  
            The line to be used is determined by the X, as with the %b option.

   %L       Replaced by the line number of the last line in the input file.

   %m       Replaced by the total number of input files.

   %pX      Replaced by the percent into the current input file,  
            based on byte offsets.  
            The line used is determined by the X as with the %b option.

   %PX      Replaced  by  the  percent into the current input file, 
            based on line numbers.   
            The line used is determined by the X as with the %b option.

   %s       Same as %B.

   %t       Causes any trailing spaces to be removed.  
            Usually used at the end of the string, but may appear anywhere.

   %x       Replaced by the name of the next input file in the list.
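
To tie this together, a hedged example of a custom long prompt showing file name, top line and total lines (the log file path is just a placeholder), combined with -M so the long prompt is actually displayed:

less -M -PM"%f  line %lt of %L" /var/log/syslog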
XFCE: scroll background windows
posted on 2014-11-30 01:15:48

To change XFCE mouse scrolling behaviour so that the window under the cursor is always scrolled, even if it is not in the foreground, do this:

Applications Menu
Settings
Window Manager Tweaks
Tab: Accessibility
Uncheck: "Raise windows when any mouse button is pressed"

Done.

linux logrotate
posted on 2014-11-28 17:42:24

To avoid overflowing harddisks, use logrotate. It consists of two parts.

config file

First, a config entry either in /etc/logrotate.conf, or in a dedicated file in /etc/logrotate.d/<filename>. (This works since logrotate.d is referenced from logrotate.conf.)

Here's an example from mysql:

/var/log/mysql.log /var/log/mysql/mysql.log /var/log/mysql/mysql-slow.log /var/log/mysql/error.log {
    daily
    rotate 7
    missingok
    create 640 mysql adm
    compress
    sharedscripts
    postrotate
        test -x /usr/bin/mysqladmin || exit 0
        # If this fails, check debian.conf! 
        MYADMIN="/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf"
        if [ -z "`$MYADMIN ping 2>/dev/null`" ]; then
            # Really no mysqld or rather a missing debian-sys-maint user?
            # If this occurs and is not a error please report a bug.
            #if ps cax | grep -q mysqld; then
            if killall -q -s0 -umysql mysqld; then
                exit 1
            fi 
        else
            $MYADMIN flush-logs
        fi
    endscript
}

In the first line the files to be rotated are specified; in the body all options are stated. This is also an example of a 'script' run after the rotation (that is what the postrotate section is for).

Usually these are fine:

rotate 7
daily
missingok
notifempty
delaycompress
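
Wrapped into a complete stanza, this might look like the following sketch (path, mode and owner are just placeholders):

/var/log/myapp/*.log {
    rotate 7
    daily
    missingok
    notifempty
    delaycompress
    compress
    create 640 myuser adm
}

A dry run with logrotate -d /etc/logrotate.d/myapp shows what would happen without actually rotating anything.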

In case you get an error along the lines of '... has insecure permissions. ... . Set the "su" directive...' simply specify the user/group like this:

# like this
su mysql adm

# this is just here for illustration
rotate 7
daily
missingok
notifempty
delaycompress

cronjob

To run logrotate regularly, a cron job has to be installed.

I.e. in /etc/cron.d/my_cronjob_for_logrotate:

1 23 * * * root /usr/sbin/logrotate -f /etc/logrotate.conf > /dev/null 2>&1

apache: logrotate or rotatelogs?

The apache web server can take care of its logs itself, too. This is most easily done through an option in the vhost config. See here. That way you do not need to set up external log rotation afterwards.

Show /proc contents
posted on 2014-11-27 08:52:22

To easily look and search the contents of the /proc folder in linux, try this one-liner:

temp=`mktemp`; \
for i in \
partitions diskstats crypto key-users keys softirqs version uptime stat meminfo loadavg \
interrupts devices cpuinfo consoles cmdline locks filesystems swaps slabinfo zoneinfo \
vmstat pagetypeinfo buddyinfo modules dma timer_stats timer_list sched_debug iomem ioports \
execdomains mdstat misc fb mtrr cgroups; \
do echo -e "\n\n\n\n\n"$"\e[1;33m/proc/"$i$"\e[0m""\n\n" >> $temp; \
cat /proc/$i >> $temp; \
done; \
less -RNS $temp && rm -rf $temp

  • pasteable into shell and will work
  • it will leave no file on disk
  • will color the name of each file its contents will print
  • result is searchable as it is displayed via less
  • memory stuff files were left out, since you really should not need them usually
  • process files (all numbers) were also omitted

Running as root helps, if you are not allowed to see something.

linux hardware specs via dmidecode
posted on 2014-11-26 10:40:35

dmidecode is easiest to use via the keywords, e.g. dmidecode -t memory. Alternatively use the numbers: dmidecode -t 3,4.

Keyword     Types
──────────────────────────────
bios        0, 13
system      1, 12, 15, 23, 32
baseboard   2, 10, 41
chassis     3
processor   4
memory      5, 6, 16, 17
cache       7
connector   8
slot        9

Further info from the man page:

The SMBIOS specification defines the following DMI types:


Type   Information
────────────────────────────────────────
0   BIOS
1   System
2   Base Board
3   Chassis
4   Processor
5   Memory Controller
6   Memory Module
7   Cache
8   Port Connector
9   System Slots
10   On Board Devices
11   OEM Strings
12   System Configuration Options
13   BIOS Language
14   Group Associations
15   System Event Log
16   Physical Memory Array
17   Memory Device
18   32-bit Memory Error
19   Memory Array Mapped Address
20   Memory Device Mapped Address
21   Built-in Pointing Device
22   Portable Battery
23   System Reset
24   Hardware Security
25   System Power Controls
26   Voltage Probe
27   Cooling Device
28   Temperature Probe
29   Electrical Current Probe
30   Out-of-band Remote Access
31   Boot Integrity Services
32   System Boot
33   64-bit Memory Error
34   Management Device
35   Management Device Component
36   Management Device Threshold Data
37   Memory Channel
38   IPMI Device
39   Power Supply
40   Additional Information
41   Onboard Device

Additionally,  type  126  is  used for disabled entries and type 127 is an end-of-table marker. Types 128 to 255 are for
OEM-specific data.  dmidecode will display these entries by default, but it can only decode them when the  vendors  have
contributed documentation or code for them.
create SSH session via a proxy server
posted on 2014-11-24 00:23:16

If I want to connect from my computer via my workstation at work to another computer, this is how it is done:

ssh -t work_station ssh another_computer

work_station and another_computer in the above example are either IP's or aliases in the ~/.ssh/config file.

If there are more hops between your local computer and the destination, just chain further ssh -t calls into the line above.
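
For example, with a hypothetical second hop called hop2 in between:

ssh -t work_station ssh -t hop2 ssh another_computer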

SQLite shell
posted on 2014-11-23 17:43:19

While fixing a broken Proxmox install (where a sqlite file is used for housekeeping of the configuration), I came across sqlite, which is basically an SQL database contained within a single file.

Clients can be either console based (i.e. apt-get install sqlite3) or graphical (see here).

While the GUI stuff is nice, for working on the server... CLI.

Here's the usually needed stuff from the official documentation (commands for the sqlite CLI):

Often used commands:

sqlite3                               ## open cli
sqlite3 /PATH/TO/DB                   ## open cli and load db
sqlite3 -line /PATH/TO/DB "COMMAND"   ## run COMMAND on DATABASE, 'line'-d output

# create dumps
sqlite3 DB '.dump' > dumpfile.sql
# replay dump
sqlite3 DB < dumpfile.sql

COMMANDs in particular:

.quit                                 ## exit cli
.help                                 ## show help
.databases                            ## show available db's
.schema [TABLE]                       ## table create statement, think `describe` in mysql
.open PATH                            ## open database at filesystem PATH

.show                                 ## show settings
.mode                                 ## show available output modes
.mode line                            ## should be mostly needed when working with the cli, default is 'list'
.mode csv                             ## guess what, excel is your friend now
.stats on                             ## useful for debugging, 'off' to turn off again
.width col1 col2 col3 ... colN        ## set column size when using `.mode column`

pragma integrity_check                ## db consistency check
reindex                               ## might help when db is inconsistent
analyze                               ## update statistics for the indexes
pragma foreign_key_check              ## validate foreign key constraints
pragma encoding                       ## show encoding of db
pragma locking_mode                   ## show locking mode (normal or exclusive)
pragma DB.locking_mode=exclusive      ## set locking mode to exclusive, needed for simultaneous db accesses
pragma query_only                     ## is db readonly?
pragma query_only=BOOL                ## set db to RW or RO
pragma compile_options                ## show options with which sqlite was compiled


.mode insert                          ## show insert statements for data, useful when creating dumps
.output /PATH/TO/FILE                 ## define dumpfile
.dump                                 ## create dump
.read /PATH/TO/FILE                   ## replay dump into current db

A lot of these commands can also be given as command line options; see the man page for those.

IPMI cli
posted on 2014-11-21 21:29:36

Usually linux commands have a LOT of flags and options, the IPMI CLI tool being no exception.

Mostly you need these:

## hard reset, in case ipmi module hangs
./ipmicfg-linux.x86_64 -r

## show currently configured ipmi ip
./ipmicfg-linux.x86_64 -m

## clear chassis intrusion
./ipmicfg-linux.x86_64 -clrint
Linux: proper tempfiles
posted on 2014-11-20 10:24:43

mktemp creates randomly named files, a recurring need.

create a tempfile and save name to a variable

VARIABLE_NAME=`mktemp`

(That was rather easy, wasn't it?)
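
A slightly longer sketch that also cleans up after itself in case the script exits early (file name and contents are just examples):

#!/bin/bash
TMPFILE=$(mktemp)
trap 'rm -f "$TMPFILE"' EXIT   ## remove the tempfile when the script exits, for whatever reason
echo "some intermediate data" > "$TMPFILE"
sort "$TMPFILE"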

Create load on a website with wget
posted on 2014-11-18 13:46:33

To create some load (i.e. to test your webserver / database settings), try wget:

wget -r --spider -l3 http://your.domain.name.here

To save the results, use the -o flag:

wget -r --spider -l3 http://your.domain.name.here -o linkliste.txt

Optionally, you can also get a list of the links on the site in question, after some cleanup.

A script, to copy paste:

echo -e "\nEnter URL to crawl, without http:\n"; read MYURL; echo -e "\n$MYURL is being crawled.\n"; MYTEMPFILE=$(mktemp); wget -r --spider -l3 $MYURL -o $MYTEMPFILE; egrep "^--" $MYTEMPFILE | cut -d' ' -f4 | sort | uniq

This will prompt you for a website / domain, and its console output is the list of links of the domain in question up to the third level. This is due to the -l flag in the wget part being set to 3. The default depth is five levels; -l0 or -l inf means unlimited recursion.

If you want a file, just pipe it into one. :)

create / delete raids with Adaptec's arcconf CLI
posted on 2014-11-17 18:11:41

When working with the CLI for the sole purpose of handling RAID's, usually these commands are needed:

  1. task
  2. create
  3. delete

This will be sort of a lazy posting; no screen pastes will find their way in here, I beg your pardon.

preparation

First make your life a lot easier:

alias asdf=/usr/StorMan/arcconf  ## or where your executable is located

Get an overview on what hardware is available:

asdf getconfig 1 pd | less

This is important, so you can locate the channels / slots of the drives you want to handle. The command is piped through less, since usually the output is too big to fit on a screen. (At least on a 19" 8-bay server, where all slots are filled.)

Similarly, you can see the already created RAID's via

asdf getconfig 1 ld

initialize drives

Once you got your information and you decided your layout, initialize the drives.

If you have nothing you need, and want to prepare all drives at once, do:

asdf task start 1 device all initialize

Else specify the channel and drive id, instead of using 'all':

asdf task start 1 device 0 0 initialize

This will erase the metadata from the drive in slot 0, if your setup is correctly assembled. (Else you are in for trouble, sooner or later, but if you do not know this, you might want to consider a different career path anyway...)

create a logical device

Let's have two examples: a raid1 spanning drives 0 0 and 0 1, and a raid10 on drives 0 4 to 0 7:

asdf create 1 logicaldrive name my_raid1 method quick max 1 0 0 0 1
asdf create 1 logicaldrive name my_raid10 method quick max 10 0 4 0 5 0 6 0 7

While the syntax is cryptic at first, this should become pretty clear once you have done it several times.

create is self-explanatory; the first 1 means the controller, and in 99% of all cases you only have a setup with a single controller. logicaldrive is always present here (unless you want to create a jbod), and a name always helps. method quick initialization is usually also the best way to go. max specifies the maximum size of the raid (that is, as big as the disks let it be).

The numbers afterwards are then:

  1. the raid level
  2. all the channel and slot number tuples

deleting a logical device

asdf delete 1 logicaldrive all

deletes all raids which were created prior.

asdf delete 1 logicaldrive 2

deletes the logical volume with the id 2. (Remember asdf getconfig 1 ld!)

That should be about it in short. modify for raid migration or online capacity expansion is reserved for another post for the time being.

Querying dd progress
posted on 2014-11-16 17:44:33

UPDATE: use pkill instead of kill: pkill -usr1 dd is all you need.


Usually dd will only show information about the transfer it did, AFTER its completion.

To get a progress report while it is still running, open a second shell and send a USR1 signal to the dd process.

First, let's start a demo dd process:

[sjas@mb ~]$ dd if=/dev/random of=/dev/null

Then we need to find out the process id of this dd process. For this you can use pgrep, but I prefer grepping ps auxf:

[sjas@mb ~]$ ps auxf | grep dd
2:root         2  0.0  0.0      0     0 ?        S    06:25   0:00 [kthreadd]
91:sjas      3351  0.0  0.0  30588  1704 ?        Ss   06:25   0:01 /usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
134:sjas      6501  0.0  0.0  31980  3496 pts/1    S+   17:43   0:00          \_ vim 180-querying-dd-for-progress.post
143:sjas      3580  0.0  0.0   9228  1248 ?        S    06:26   0:00  \_ ksysguardd
169:sjas      6560  0.0  0.0   9868   636 pts/2    S+   17:46   0:00  |   \_ dd if=/dev/random of=/dev/null
172:sjas      6660  0.0  0.0   7836   892 pts/3    S+   17:49   0:00      \_ grep -i -n --color dd
[sjas@mb ~]$ 

So in this example, the PID is 6560.

From the second shell:

kill -usr1 6560

will then show additionally this in the first shell:

0+99 records in
1+0 records out
512 bytes (512 B) copied, 250.022 s, 0.0 kB/s

Of course, you could also pipe the data through pv or bar to have a continuous status bar. But maybe you don't want that (it will slow things down a bit), or you just forgot, and so you can still query the process for the current progress.
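
For the record, newer GNU coreutils versions (8.24 and later, if I am not mistaken) ship a built-in progress output, which avoids the signal dance entirely, assuming your dd is recent enough:

dd if=/dev/random of=/dev/null status=progress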

bash ranging
posted on 2014-11-13 12:05:02

Using ranges in bash, you can avoid more complicated for loop constructs (which aren't needed 99% of the time anyway...):

[sjas@mb ~]$ for i in {1..5}; do echo $i; done
1
2
3
4
5
[sjas@mb ~]$ 

This also works with characters:

[sjas@mb ~]$ for i in {z..q}; do echo $i; done
z
y
x
w
v
u
t
s
r
q
[sjas@mb ~]$ 

Even backwards!
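
Bash 4 also accepts a step size as a third field inside the braces; a small example:

for i in {0..10..2}; do echo $i; done
## prints 0 2 4 6 8 10, one per line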

macbook fan control
posted on 2014-11-12 01:43:17

After a while I noticed that the fans kept pretty quiet on my macbook, which seemed rather odd considering the temperature on its underside.

After some googling it seemed that this is a bug, due to nobody ever having cared enough to release proper drivers.

Thanks to the wonderful conveniences of life on github there was help to be found: https://github.com/dgraziotin/Fan-Control-Daemon

Prerequisites:

sudo aptitude install g++ make lm-sensors

git clone https://github.com/dgraziotin/Fan-Control-Daemon
cd Fan-Control-Daemon
sudo make
sudo make install
sudo make test

After this, the program is installed halfway. To have it properly set up so it starts during the boot process, have a look at the readme, where it is very well explained for your distro of choice.

To see what is currently up, try: (Ctrl-C to escape)

watch -n.2 sensors

Which will give you your current temps and fan speed:

coretemp-isa-0000
Adapter: ISA adapter
Core 0:       +55.0°C  (high = +105.0°C, crit = +105.0°C)
Core 1:       +54.0°C  (high = +105.0°C, crit = +105.0°C)

nouveau-pci-0200
Adapter: PCI adapter
temp1:        +60.0°C  (high = +100.0°C, crit = +95.0°C)

applesmc-isa-0300
Adapter: ISA adapter
Exhaust  :   4164 RPM  (min = 2000 RPM, max = 6200 RPM)
TB0T:         +35.5°C  
TB1T:         +35.5°C  
TB2T:         +33.2°C  
TB3T:         +35.8°C  
TC0D:         +59.2°C  
TC0P:         +53.0°C  
TN0D:         +59.2°C  
TN0P:         +53.8°C  
TTF0:         +65.0°C  
Th0H:         +57.8°C  
Th1H:         +50.5°C  
ThFH:         +50.8°C  
Ts0P:         +32.8°C  
Ts0S:         +41.5°C  

To change the settings, have a look at /etc/mbpfan.conf, here is my current one:

[general]
min_fan_speed = 3000    # default is 2000
max_fan_speed = 6200    # default is 6200
low_temp = 50           # try ranges 55-63, default is 63
high_temp = 55          # try ranges 58-66, default is 66
max_temp = 65           # do not set it > 90, default is 86
polling_interval = 7    # default is 7

Maybe the fan speeds are a little too high still, but let's see how battery life changes. Temperature is down by 15 degrees Celsius, which I consider an improvement.

Apple Fn Keys Switching
posted on 2014-11-10 23:23:25

Usually the F1 to F12 keys are reachable on a mac only via the Fn modifier key. Instead of these the brightness control keys and the like are preferred by default.

Of course I want alt+f4 (where alt is already on left cmd, of course) to work without Fn...

sudo vim /etc/modprobe.d/hid_apple.conf:

options hid_apple fnmode=2

After saving, type this into your shell:

sudo update-initramfs -u

Reboot and revel in awe. :)

Apache redirecting
posted on 2014-11-09 17:35:27

A sample for apache redirecting, a very usable snippet for use from within a .htaccess:

RewriteEngine On
RewriteCond %{REQUEST_URI} ^/$
RewriteRule (.*) http://blog.sjas.de [L,R=301]

You might want to change the URL. ;)

udev in ubuntu 14.04
posted on 2014-11-08 16:19:45

If you want to rename your NIC's in linux, especially in ubuntu 14.04 (important!), you got to know that as of 11/2014 the official documentation is plain wrong. (See here.)

Edit /etc/udev/rules.d/75-persistent-net-generator.rules like:

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="0c:c4:7a:0b:67:b6", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="0c:c4:7a:0b:67:b7", NAME="eth1"

Of course, you have to use the proper MAC address from the interface in question, get it via ip a.

Unban IP from Fail2Ban
posted on 2014-11-07 12:26:50

If you want to remove an IP from the fail2ban ban list, i.e. the second one in this excerpt:

(Output of iptables -L -n)

...

Chain fail2ban-ssh (1 references)
target     prot opt source               destination         
DROP       all  --  10.0.0.33            0.0.0.0/0           
DROP       all  --  10.0.3.234           0.0.0.0/0           
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           

First do a fail2ban-client status to determine the jailname fail2ban uses:

Status
|- Number of jail:  1
`- Jail list:   ssh-iptables

It's ssh-iptables here.

Now simply unban the ip:

fail2ban-client set ssh-iptables unbanip 10.0.3.234
Linux: show all block devices with lsblk
posted on 2014-11-05 00:01:23

To see all currently connected devices like HDD's, SSD's, CD-Rom's and USB sticks, try lsblk.

Usually it looks like this:

sjas@mb:~/ISO/UBCD$ lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0  29.8G  0 disk
|─sda1                         8:1    0   487M  0 part /boot/efi
|─sda2                         8:2    0  25.6G  0 part /
`─sda3                         8:3    0   3.8G  0 part [SWAP]
sr0                           11:0    1 589.2M  0 rom
sdb                            8:16   0 596.2G  0 disk
|─sdb1                         8:17   0   200M  0 part
|─sdb2                         8:18   0   500M  0 part
`─sdb3                         8:19   0 595.5G  0 part
  |─fedora_debra-root (dm-0) 254:0    0    50G  0 lvm
  |─fedora_debra-home (dm-1) 254:1    0   542G  0 lvm
  `─fedora_debra-swap (dm-2) 254:2    0   3.5G  0 lvm
sjas@mb:~/ISO/UBCD$

For a better overview, try a better selection of -o flags. Here's an overview of the possible options on an arbitrary system:

[jl@jerrylee ~]% \lsblk --help | \grep Available -A999 | sed -e '1d' -e '$d' | sed '$d'
        NAME  device name
       KNAME  internal kernel device name
     MAJ:MIN  major:minor device number
      FSTYPE  filesystem type
  MOUNTPOINT  where the device is mounted
       LABEL  filesystem LABEL
        UUID  filesystem UUID
   PARTLABEL  partition LABEL
    PARTUUID  partition UUID
          RA  read-ahead of the device
          RO  read-only device
          RM  removable device
       MODEL  device identifier
      SERIAL  disk serial number
        SIZE  size of the device
       STATE  state of the device
       OWNER  user name
       GROUP  group name
        MODE  device node permissions
   ALIGNMENT  alignment offset
      MIN-IO  minimum I/O size
      OPT-IO  optimal I/O size
     PHY-SEC  physical sector size
     LOG-SEC  logical sector size
        ROTA  rotational device
       SCHED  I/O scheduler name
     RQ-SIZE  request queue size
        TYPE  device type
    DISC-ALN  discard alignment offset
   DISC-GRAN  discard granularity
    DISC-MAX  discard max bytes
   DISC-ZERO  discard zeroes data
       WSAME  write same max bytes
         WWN  unique storage identifier
        RAND  adds randomness
      PKNAME  internal parent kernel device name
        HCTL  Host:Channel:Target:Lun for SCSI
        TRAN  device transport type
         REV  device revision
      VENDOR  device vendor

You can of course take this listing and try it directly:

\lsblk -o$(\lsblk --help | \grep Available -A999 | sed -e '1d' -e '$d' | sed '$d' | awk '{print $1}' | tr '\n' ',' | sed 's/,$//')

But if you do not have two widescreen monitors and a PTY shell drawn across them, you won't be able to make out much. Just for the record, on a Kali LIVE system the above command won't even show all the output, but just dies gracefully without even showing an error.

So you might try this:

root@mb:/home/sjas/ISO/UBCD# lsblk -i -o name,label,mountpoint,fstype,model,size,type,state,uuid
NAME                         LABEL    MOUNTPOINT FSTYPE      MODEL              SIZE TYPE STATE   UUID
sda                                                          SSDSA2SH032G1GN   29.8G disk running
|─sda1                                /boot/efi  vfat                           487M part         5604-FDAF
|─sda2                                /          ext4                          25.6G part         ded0e7a8-23af-4deb-9b9d-9d63a26904aa
`─sda3                                [SWAP]     swap                           3.8G part         b022846d-e4b5-475b-b087-c4d5b486601f
sr0                          UBCD532             iso9660     DVDRW  GS21N     589.2M rom  running
sdb                                                          Name             596.2G disk running
|─sdb1                       untitled            hfsplus                        200M part         0fdc7456-f171-3490-9d41-671b43d70db3
|─sdb2                                           ext4                           500M part         66a8ed1c-e56e-4707-bbd5-15bcde2fa5a0
`─sdb3                                           LVM2_member                  595.5G part         BIC2hD-zS3w-yvtC-oNEG-yec1-4Q7h-qb4gwN
  |─fedora_debra-root (dm-0)                     ext4                            50G lvm  running e35f5406-0cc0-4646-86f8-c4031005580a
  |─fedora_debra-home (dm-1)                     ext4                           542G lvm  running 8bc9c1a4-a5c1-4e1b-9dfc-7aeb6437c708
  `─fedora_debra-swap (dm-2)                     swap                           3.5G lvm  running 2ddddb46-8b6d-4fb4-ae19-6390e1015b76
root@mb:/home/sjas/ISO/UBCD#

Of course, I have lsblk -o name,label,mountpoint,fstype,model,size,type,state,uuid aliased in my .bashrc:

alias lsblk='lsblk -o name,label,mountpoint,fstype,model,size,type,state,uuid'

UPDATE:

lsblk -i -o kname,mountpoint,fstype,size,maj:min,rm,name,state,rota,ro,type,label,model,serial

is what I stick with.

Enable wireless module in debian on a Aluminium Macbook (non-pro)
posted on 2014-11-04 21:14:18

Hardware being used:

[sjas@mb ~]$ lspci -nn -d 14e4:
03:00.0 Network controller [0280]: Broadcom Corporation BCM4322 802.11a/b/g/n Wireless LAN Controller [14e4:432b] (rev 01)
[sjas@mb ~]$

After some googling I found out which drivers work for this version of the hardware (mind the bracketed IDs and the revision!), and all there was to do was:

  sudo apt-cache search b43
  sudo apt-get update
  lspci -nn -d 14e4:
  sudo apt-get install firmware-b43-installer
  sudo reboot
MacBook synaptics touchpad linux settings
posted on 2014-11-04 08:21:03

To get some proper settings for your macbook's trackpad after having installed a proper OS on it, try the following synaptics settings.

#!/bin/bash

## ENABLING FUNCTIONS
# all set to '0' previously
# tapping, two fingers = mouse2, three = mouse3 ... 
synclient TapButton1=1
synclient TapButton2=3
synclient TapButton3=2
# ... and palm detection...
synclient PalmDetect=1
# ... and horizontal scrolling
synclient HorizTwoFingerScroll=1

## SETTING SENSITIVITIES AND SPEEDS
# '30' prior, less touch sensitivity
synclient FingerHigh=50
# '1.75' prior, faster
synclient MaxSpeed=2.25
# '235' prior, faster
synclient VertScrollDelta=110

To make this work, place this into a script and make sure it will be run during your login process (.bashrc, .profile, whatever). The other option is to put the settings directly into your synaptics config file.

My file in my system is located here:

[sjas@nb ~]$ find /usr/share/X11 -iname '*synaptic*'
/usr/share/X11/xorg.conf.d/50-synaptics.conf

And the changes are put in there, accordingly like the other options, i.e.

    Option "TapButton1" "1"
    Option "TapButton2" "3"
    Option "TapButton3" "2"

All this in the config section that actually references your Trackpad.

bash for loops like in C
posted on 2014-11-03 13:50:31

To have 'counting' bash loops, try the following.

Directly in a shell:

[sjas@ctr-014 ~]% for (( i=0; i<5; i++ )); do echo $i; done
0
1
2
3
4
[sjas@ctr-014 ~]%

As a script:

#!/bin/bash

for (( i=0; i<5; i++ ))
do
    echo $i
done
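
Brace expansion and seq work as well; a small sketch (note that {0..4} only takes literal bounds, while seq also accepts variables):

for i in {0..4}; do echo $i; done

n=5
for i in $(seq 0 $((n-1))); do echo $i; done
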
dd progress bar
posted on 2014-11-03 13:48:27

To get a proper progress bar when using dd, try pv. Maybe apt-get install'ing it is needed first; if so, just go ahead.

Usage shown with the example of copying an .iso onto a USB stick:

[sjas@ctr-014 ~/Downloads]% pv -tpreb CentOS-6.6-x86_64-minimal.iso | dd of=/dev/sdc
 383MB 0:04:09 [1.53MB/s] [========================================>] 100%
 784384+0 records in
 784384+0 records out
 401604608 bytes (402 MB) copied, 265.133 s, 1.5 MB/s
[sjas@ctr-014 ~/Downloads]%

Without pv you only see the last three lines, and only once dd has finished, so you would have to wait four minutes to learn whether the copy was successful.

For small devices this is fine, but when copying whole disks this behaviour becomes VERY annoying.

Another utility would be bar:

bar -if=CentOS-6.6-x86_64-minimal.iso | dd of=/dev/sdc

Same principle as pv, handing it an inputfile and piping it to dd.
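
Newer GNU dd versions (coreutils 8.24 and later) can also report progress on their own, in case neither pv nor bar is at hand:

dd if=CentOS-6.6-x86_64-minimal.iso of=/dev/sdc bs=4M status=progress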

Convert .rpm's to .deb's
posted on 2014-10-30 19:53:49

To use .rpm files in debian (if god likes you, they might even work), you have to convert them to .deb formatted files. alien is the weapon of choice.

# apt-get install alien

To convert and install:

alien <packagename>.rpm
dpkg -i <packagename_converted>.deb

To convert and install in one step:

alien -i <packagename>.rpm

By default, the version number of the generated package will be incremented by one compared to the .rpm. If you do not want this behaviour, try the -k flag.
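
The other direction works as well, should you ever need a .deb turned into an .rpm:

alien -r <packagename>.deb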

Adaptec arcconf manual
posted on 2014-10-29 18:33:01

To use Adaptec's 'uniform command line interface' on linux easily, here is a list of the most used commands.

After having connected to the server, cd /usr/StorMan. There you use the arcconf executable.

First a pro tip:

alias asdf=/usr/StorMan/arcconf

This lets you use asdf for calling the executable, instead of either ./arcconf or (god forbid) /usr/StorMan/arcconf. In the following text I refrained from using the asdf alias, but usually it's the first thing I do when I connect to a box and want to work with the RAID controller's CLI. Analogously, for LSI the alias is usually alias asdf=/path/where/your/executable/is/as/MegaCli64. ;)

Usually you need getconfig, getstatus, identify and rescan when exchanging disks; RAIDs are usually built prior to OS installation anyway.

The only tricky part is not messing up the numbers with which the commands have to be used. (Usually it's the off-by-one errors.) But that's easy once you get used to it. Worse are physical problems like broken backplanes, missing wiring or malfunctioning LEDs...

Hostnames are changed in the following to protect the partly innocent. ;)

overview

root@some-server:/usr/StorMan# ./arcconf 

  | UCLI |  Adaptec uniform command line interface
  | UCLI |  Version 6.50 (B18579)
  | UCLI |  (C) Adaptec 2003-2010
  | UCLI |  All Rights Reserved

 ATAPASSWORD             | Setting password on a physical drive
 COPYBACK                | toggles controller copy back mode
 CREATE                  | creates a logical device
 DATASCRUB               | toggles the controller background consistency check mode
 DELETE                  | deletes one or more logical devices
 FAILOVER                | toggles the controller automatic failover mode
 GETCONFIG               | prints controller information
 GETLOGS                 | gets controller log information
 GETSMARTSTATS           | gets the SMART statistics
 GETSTATUS               | displays the status of running tasks
 GETVERSION              | prints version information for all controllers
 IDENTIFY                | blinks LEDS on device(s) connected to a controller
 IMAGEUPDATE             | update physical device firmware
 KEY                     | installs a Feature Key onto a controller
 MODIFY                  | performs RAID Level Migration or Online Capacity Expansion
 RESCAN                  | checks for new or removed drives
 RESETSTATISTICSCOUNTERS | resets the controller statistics counters
 ROMUPDATE               | updates controller firmware
 SAVESUPPORTARCHIVE      | saves the support archive 
 SETALARM                | controls the controller alarm, if present
 SETCACHE                | adjusts physical or logical device cache mode
 SETCONFIG               | restores the default configuration
 SETMAXIQCACHE           | adjusts MaxIQ Cache settings for physical or logical device
 SETNAME                 | renames a logical device given its logical device number
 SETNCQ                  | toggles the controller NCQ status
 SETPERFORM              | changes adapter settings based on application
 SETPOWER                | power settings for controller or logical device
 SETPRIORITY             | changes specific or global task priority
 SETSTATE                | manually sets the state of a physical or logical device
 SETSTATSDATACOLLECTION  | toggles the controller statistics data collection modes
 TASK                    | performs a task such as build/verify on a physical or logical device

root@some-server:/usr/StorMan# 

./arcconf GETSTATUS

Just the controller status.

root@some-server:/usr/StorMan# ./arcconf getstatus 1
Controllers found: 1
Logical device Task:
   Logical device                 : 0
   Task ID                        : 100
   Current operation              : Rebuild
   Status                         : In Progress
   Priority                       : High
   Percentage complete            : 42


Command completed successfully.
root@some-server:/usr/StorMan# 

The 1 is the ID of the first (and in this system the only) Adaptec RAID controller. Unlike the disks, controller counting starts at 1. Controller 1 is usually all you need.

Further, the status shows a rebuild in progress (since a new hard disk got inserted), currently at 42%.

./arcconf IDENTIFY

This helps to find the drive in question, usually by making the front panel LED of its disk bay blink. Not only a single drive can be made to blink; all drives of a logical drive / RAID array can be 'highlighted' as well.

The first number is the controller ID (as mentioned above, usually 1), the second is either the logical drive ID, or, if it is a number pair, the channel ID plus the drive ID.

highlight single drive

root@some-server:/usr/StorMan# ./arcconf identify 1 device 0 0
Controllers found: 1
The specified device is blinking.
Press any key to stop the blinking.

Command completed successfully.
root@some-server:/usr/StorMan# 

The first zero is the channel (usually zero), the second is the drive ID.

highlight whole array

root@some-server:/usr/StorMan# ./arcconf identify 1 logicaldrive 0
Controllers found: 1
The specified device is blinking.
Press any key to stop the blinking.

Command completed successfully.
root@some-server:/usr/StorMan# ./arcconf identify 1 logicaldrive 1
Controllers found: 1
The specified device is blinking.
Press any key to stop the blinking.

Command completed successfully.
root@some-server:/usr/StorMan# 

This makes the first array blink, then the second one. Counting starts at '0' here, unlike the controller, where counting starts at '1'.

./arcconf GETCONFIG

Business as usual: the first number is the controller ID, so 1. By omitting it you get the available parameters:

root@some-server:/usr/StorMan# ./arcconf getconfig 

 Usage: GETCONFIG <Controller#> [AD | LD [LD#] | PD | [AL]]
 ======================================================

 Prints controller configuration information.

    Option  AD  : Adapter information only
            LD  : Logical device information only
            LD# : Optionally display information about the specified logical device
            PD  : Physical device information only
            AL  : All information (optional)
root@some-server:/usr/StorMan#

If no parameter is given, AL is used as the default.

If checking the logical devices via LD, you can also pass the id of the 'drive' in question.

adapter used - AD

root@some-server:/usr/StorMan# ./arcconf getconfig 1 AD
Controllers found: 1
----------------------------------------------------------------------
Controller information
----------------------------------------------------------------------
   Controller Status                        : Optimal
   Channel description                      : SAS/SATA
   Controller Model                         : Adaptec 5805
   Controller Serial Number                 : 1D2211A7A42
   Physical Slot                            : 5
   Temperature                              : 75 C/ 167 F (Normal)
   Installed memory                         : 512 MB
   Copyback                                 : Disabled
   Background consistency check             : Disabled
   Automatic Failover                       : Enabled
   Global task priority                     : High
   Performance Mode                         : Default/Dynamic
   Stayawake period                         : Disabled
   Spinup limit internal drives             : 0
   Spinup limit external drives             : 0
   Defunct disk drive count                 : 0
   Logical devices/Failed/Degraded          : 2/0/1
   SSDs assigned to MaxIQ Cache pool        : 0
   Maximum SSDs allowed in MaxIQ Cache pool : 8
   MaxIQ Read Cache Pool Size               : 0.000 GB
   MaxIQ cache fetch rate                   : 0
   MaxIQ Cache Read, Write Balance Factor   : 3,1
   NCQ status                               : Enabled
   Statistics data collection mode          : Enabled
   --------------------------------------------------------
   Controller Version Information
   --------------------------------------------------------
   BIOS                                     : 5.2-0 (18252)
   Firmware                                 : 5.2-0 (18252)
   Driver                                   : 1.2-1 (40700)
   Boot Flash                               : 5.2-0 (18252)
   --------------------------------------------------------
   Controller Battery Information
   --------------------------------------------------------
   Status                                   : Not Installed


Command completed successfully.
root@some-server:/usr/StorMan#

The controller used here can be seen to be a 5805 without a BBU (Battery Backup Unit). No read or write caching is enabled.

logical drives - LD or LD#

root@some-server:/usr/StorMan# ./arcconf getconfig 1 ld
Controllers found: 1
----------------------------------------------------------------------
Logical device information
----------------------------------------------------------------------
Logical device number 0
   Logical device name                      : 
   RAID level                               : 10
   Status of logical device                 : Degraded
   Size                                     : 1906678 MB
   Stripe-unit size                         : 256 KB
   Read-cache mode                          : Enabled
   MaxIQ preferred cache setting            : Disabled
   MaxIQ cache setting                      : Disabled
   Write-cache mode                         : Disabled (write-through)
   Write-cache setting                      : Disabled (write-through)
   Partitioned                              : Yes
   Protected by Hot-Spare                   : No
   Bootable                                 : Yes
   Failed stripes                           : No
   Power settings                           : Disabled
   --------------------------------------------------------
   Logical device segment information
   --------------------------------------------------------
   Group 0, Segment 0                       : Present (0,4)       JPW9K0N1224P3L
   Group 0, Segment 1                       : Present (0,5)       JPW9K0N20BRBEE
   Group 1, Segment 0                       : Present (0,6) Z1W0GP8R0000C404211V
   Group 1, Segment 1                       : Rebuilding (0,7)       JPW9K0N208AKHE

Logical device number 1
   Logical device name                      : data
   RAID level                               : 10
   Status of logical device                 : Optimal
   Size                                     : 3809270 MB
   Stripe-unit size                         : 256 KB
   Read-cache mode                          : Enabled
   MaxIQ preferred cache setting            : Disabled
   MaxIQ cache setting                      : Disabled
   Write-cache mode                         : Disabled (write-through)
   Write-cache setting                      : Disabled (write-through)
   Partitioned                              : Unknown
   Protected by Hot-Spare                   : No
   Bootable                                 : No
   Failed stripes                           : No
   Power settings                           : Disabled
   --------------------------------------------------------
   Logical device segment information
   --------------------------------------------------------
   Group 0, Segment 0                       : Present (0,0)       JK11E1B9KGVDKT
   Group 0, Segment 1                       : Present (0,1)       JK11A8B9KMHYWF
   Group 1, Segment 0                       : Present (0,2)             YGKAZYUK
   Group 1, Segment 1                       : Present (0,3)             YGKAZMNK



Command completed successfully.
root@some-server:/usr/StorMan# 

This example is still the same server, where the 8th HDD, identified by (0,7), is rebuilding. Both logical devices are RAID 10s, with the first RAID being the last four disks and being degraded due to one disk missing / being rebuilt.

physical device - PD

This is usually your best shot for fast information.

root@some-server:/usr/StorMan# ./arcconf getconfig 1 pd
Controllers found: 1
----------------------------------------------------------------------
Physical Device information
----------------------------------------------------------------------
      Device #0
         Device is a Hard drive
         State                              : Online
         Supported                          : Yes
         Transfer Speed                     : SATA 3.0 Gb/s
         Reported Channel,Device(T:L)       : 0,0(0:0)
         Reported Location                  : Enclosure 0, Slot 0
         Reported ESD(T:L)                  : 2,0(0:0)
         Vendor                             : Hitachi
         Model                              : HUA722020ALA330
         Firmware                           : JKAOA3EA
         Serial number                      : JK11E1B9KGVDKT
         Size                               : 1907729 MB
         Write Cache                        : Disabled (write-through)
         FRU                                : None
         S.M.A.R.T.                         : No
         S.M.A.R.T. warnings                : 0
         Power State                        : Full rpm
         Supported Power States             : Full rpm,Powered off,Reduced rpm
         SSD                                : No
         MaxIQ Cache Capable                : No
         MaxIQ Cache Assigned               : No
         NCQ status                         : Enabled
      Device #1
         Device is a Hard drive
         State                              : Online
         Supported                          : Yes
         Transfer Speed                     : SATA 3.0 Gb/s
         Reported Channel,Device(T:L)       : 0,1(1:0)
         Reported Location                  : Enclosure 0, Slot 1
         Reported ESD(T:L)                  : 2,0(0:0)
         Vendor                             : Hitachi
         Model                              : HUA722020ALA330
         Firmware                           : JKAOA3EA
         Serial number                      : JK11A8B9KMHYWF
         Size                               : 1907729 MB
         Write Cache                        : Disabled (write-through)
         FRU                                : None
         S.M.A.R.T.                         : No
         S.M.A.R.T. warnings                : 0
         Power State                        : Full rpm
         Supported Power States             : Full rpm,Powered off,Reduced rpm
         SSD                                : No
         MaxIQ Cache Capable                : No
         MaxIQ Cache Assigned               : No
         NCQ status                         : Enabled
      Device #2
         Device is a Hard drive
         State                              : Online
         Supported                          : Yes
         Transfer Speed                     : SAS 3.0 Gb/s
         Reported Channel,Device(T:L)       : 0,2(2:0)
         Reported Location                  : Enclosure 0, Slot 2
         Reported ESD(T:L)                  : 2,0(0:0)
         Vendor                             : HITACHI
         Model                              : HUS723020ALS640
         Firmware                           : A440
         Serial number                      : YGKAZYUK
         World-wide name                    : 5000CCA01CBD19CF
         Size                               : 1907729 MB
         Write Cache                        : Disabled (write-through)
         FRU                                : None
         S.M.A.R.T.                         : No
         S.M.A.R.T. warnings                : 0
         Power State                        : Full rpm
         Supported Power States             : Full rpm,Powered off
         SSD                                : No
         MaxIQ Cache Capable                : No
         MaxIQ Cache Assigned               : No
      Device #3
         Device is a Hard drive
         State                              : Online
         Supported                          : Yes
         Transfer Speed                     : SAS 3.0 Gb/s
         Reported Channel,Device(T:L)       : 0,3(3:0)
         Reported Location                  : Enclosure 0, Slot 3
         Reported ESD(T:L)                  : 2,0(0:0)
         Vendor                             : HITACHI
         Model                              : HUS723020ALS640
         Firmware                           : A440
         Serial number                      : YGKAZMNK
         World-wide name                    : 5000CCA01CBD14E3
         Size                               : 1907729 MB
         Write Cache                        : Disabled (write-through)
         FRU                                : None
         S.M.A.R.T.                         : No
         S.M.A.R.T. warnings                : 0
         Power State                        : Full rpm
         Supported Power States             : Full rpm,Powered off
         SSD                                : No
         MaxIQ Cache Capable                : No
         MaxIQ Cache Assigned               : No
      Device #4
         Device is a Hard drive
         State                              : Online
         Supported                          : Yes
         Transfer Speed                     : SATA 3.0 Gb/s
         Reported Channel,Device(T:L)       : 0,4(4:0)
         Reported Location                  : Connector 1, Device 0
         Vendor                             : Hitachi
         Model                              : HUA722010CLA330
         Firmware                           : JP4OA3EA
         Serial number                      : JPW9K0N1224P3L
         Size                               : 953869 MB
         Write Cache                        : Disabled (write-through)
         FRU                                : None
         S.M.A.R.T.                         : No
         S.M.A.R.T. warnings                : 0
         Power State                        : Full rpm
         Supported Power States             : Full rpm,Powered off,Reduced rpm
         SSD                                : No
         MaxIQ Cache Capable                : No
         MaxIQ Cache Assigned               : No
         NCQ status                         : Enabled
      Device #5
         Device is a Hard drive
         State                              : Online
         Supported                          : Yes
         Transfer Speed                     : SATA 3.0 Gb/s
         Reported Channel,Device(T:L)       : 0,5(5:0)
         Reported Location                  : Connector 1, Device 1
         Vendor                             : Hitachi
         Model                              : HUA722010CLA330
         Firmware                           : JP4OA3EA
         Serial number                      : JPW9K0N20BRBEE
         Size                               : 953869 MB
         Write Cache                        : Disabled (write-through)
         FRU                                : None
         S.M.A.R.T.                         : No
         S.M.A.R.T. warnings                : 0
         Power State                        : Full rpm
         Supported Power States             : Full rpm,Powered off,Reduced rpm
         SSD                                : No
         MaxIQ Cache Capable                : No
         MaxIQ Cache Assigned               : No
         NCQ status                         : Enabled
      Device #6
         Device is a Hard drive
         State                              : Online
         Supported                          : Yes
         Transfer Speed                     : SAS 3.0 Gb/s
         Reported Channel,Device(T:L)       : 0,6(6:0)
         Reported Location                  : Connector 1, Device 2
         Vendor                             : SEAGATE
         Model                              : ST1000NM0023
         Firmware                           : 0003
         Serial number                      : Z1W0GP8R0000C404211V
         World-wide name                    : 5000C500571785AC
         Size                               : 953869 MB
         Write Cache                        : Disabled (write-through)
         FRU                                : None
         S.M.A.R.T.                         : No
         S.M.A.R.T. warnings                : 0
         Power State                        : Full rpm
         Supported Power States             : Full rpm,Powered off
         SSD                                : No
         MaxIQ Cache Capable                : No
         MaxIQ Cache Assigned               : No
      Device #7
         Device is a Hard drive
         State                              : Rebuilding
         Supported                          : Yes
         Transfer Speed                     : SATA 3.0 Gb/s
         Reported Channel,Device(T:L)       : 0,7(7:0)
         Reported Location                  : Connector 1, Device 3
         Vendor                             : Hitachi
         Model                              : HUA722010CLA330
         Firmware                           : JP4OA3EA
         Serial number                      : JPW9K0N208AKHE
         Size                               : 953869 MB
         Write Cache                        : Disabled (write-through)
         FRU                                : None
         S.M.A.R.T.                         : No
         S.M.A.R.T. warnings                : 0
         Power State                        : Full rpm
         Supported Power States             : Full rpm,Powered off,Reduced rpm
         SSD                                : No
         MaxIQ Cache Capable                : No
         MaxIQ Cache Assigned               : No
         NCQ status                         : Enabled
      Device #8
         Device is an Enclosure services device
         Reported Channel,Device(T:L)       : 2,0(0:0)
         Enclosure ID                       : 0
         Type                               : SES2
         Vendor                             : ADAPTEC
         Model                              : Virtual SGPIO
         Firmware                           : 0001
         Status of Enclosure services device


Command completed successfully.

./arcconf RESCAN

Rescans all drives, to find new drives which were not automatically found.

./arcconf SETALARM

Test, silence, or toggle a controller's sound alarm.
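
Putting the pieces together, a typical disk swap roughly looks like this; controller 1 and device (0,7) are just the values from the example above, adjust to your box:

./arcconf getconfig 1 pd            ## find the failed disk, note its (channel,device) pair
./arcconf identify 1 device 0 7     ## blink the bay LED so the right disk gets pulled
## ... physically swap the disk ...
./arcconf rescan 1                  ## let the controller notice the new drive
./arcconf getstatus 1               ## watch the rebuild percentage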

outlook

Further the Storage Manager can be used for live RAID level migration or online capacity expansion (./arcconf MODIFY) and other smut, but for now that's enough.

parted
posted on 2014-10-11 15:27:56

Sometimes you have to partition disks by hand. For some people, parted is the weapon of choice.

If so, keep in mind you should only use it for partitioning. Filesystem creation is mostly not implemented, and parted may tell you so, too, so you won't get usable filesystems out of it. Just use the mkfs.xxx tools at hand. It's just sad that half-finished programs get distributed... banana software: it ripens at the customer's site. :(

Back to the post's intention: there are two ways to use parted, either via the interactive parted shell, or directly from the command line.

Most needed commands in the shell might be:

a   = align-check = check alignment to sectors (min or opt as params)
p   = print       = show info for chosen disk
sel = select      = choose disk (i.e. /dev/sdb)
u   = unit        = measuring of sizes (i.e. %,MB,GB,...)
mkl = mklabel     = create disk label (i.e. gpt) (mktable = mkt = the same)
mkp = mkpart      = create partitions
rm  = remove      = delete a partition
q   = quit        = exit parted

An overview on the possible sizes you'd most likely use:

MiB  Mebibyte (1048576 bytes)
GiB  Gibibyte (1073741824 bytes)
MB   Megabyte (1000000 bytes)
GB   Gigabyte (1000000000 bytes)
%    percentage (between 0% and 100%)
s    sectors (logical sector size)

So usually you would want MiB and GiB, I guess.

Sadly, I haven't found out yet how to optimally align the partition on the hard disk (so it matches its sectors best) from within the interactive shell. For this I actually used to use parted from the regular bash shell:

parted /dev/sdb mklabel gpt
parted -a optimal /dev/sdb unit gb mkpart primary ext4 2 100%

Nowadays I mostly use parted interactively with percentage parameters.

But for when I don't, here are two example workflows:

# create a single MBR partition with an ext4 on the USB stick /dev/sdb
lsblk -f                ## make sure the stick is really on /dev/sdb
parted /dev/sdb p           ## check what's already on the stick
parted /dev/sdb mkl ms y        ## create MBR not GPT
parted /dev/sdb mkp p 0% 100%   ## single primary partition, percentages are used for proper alignment
mkfs.ext4 /dev/sdb1         ## create filesystem at last
parted /dev/sdb a opt 1     ## check alignment to be optimal for partition 1

# create the same, but with gpt
lsblk -f
parted /dev/sdb p           
parted /dev/sdb mkl g y     ## create GPT
parted /dev/sdb mkp asdf 0% 100%    ## creates partition with name 'asdf', a name must be specified, ext4 already on it

It seems that currently creating the filesystem only works when using GPT, and only ext4 can be created. But somewhere I heard it officially said that you should use mkfs anyway, and not let parted create filesystems.
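
Whichever route you take, a quick sanity check afterwards does not hurt:

parted /dev/sdb u MiB p     ## print the partition table with MiB units
lsblk -f /dev/sdb           ## filesystems, labels and UUIDs as the kernel sees them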

Static routing in linux
posted on 2014-10-07 10:46:49

All you need to know for creating static routes:

ip route add {NETWORK} via {IP} dev {DEVICE}

Example when actually used:

ip route add 192.168.55.0/24 via 192.168.1.254 dev eth1

or the old way via the deprecated route command:

route add -net 192.168.55.0 netmask 255.255.255.0 gw 192.168.1.254 dev eth1
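
If the route should survive a reboot on a Debian system, one common way is a post-up line in /etc/network/interfaces; interface and addresses here are just placeholders:

auto eth1
iface eth1 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    post-up ip route add 192.168.55.0/24 via 192.168.1.254 dev eth1
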
RedHat Networking Docs (Oracle Linux)
posted on 2014-10-01 12:22:53

Here is a short link list, because Oracle's documentation is the best I have seen so far.

Oracle Linux Administrator's Guide for Release 6

Part II Networking and Network Services

Chapter 11 Network Configuration

Why is this fine for RedHat stuff?

RHEL / RedHat Enterprise Linux is the 'original' distribution from Red Hat. Fedora is the 'testing' distribution from the same company. The differences between Fedora and RHEL are the lifetimes (support, EOL, update frequencies, how up-to-date the packages are); RHEL is focused on stability. Red Hat's sources for its distributions are open to the public. CentOS, Oracle Linux and Scientific Linux are built from those sources, basically just without all the Red Hat logos.

Thus, the documentation for one of them is usually sufficient for the others.

list of all shell shortcuts (bash / zsh)
posted on 2014-09-30 14:09:12

zsh:

bindkey -L

bash:

bind -P

## alternative: (improved readability!)
bind -P | grep -v "is not" | sed 's/can be found on/:/' | column -s: -t
linux: configure networking temporarily from shell without ifup/ifdown
posted on 2014-09-09 13:00:21

In Debian-based distros, usually you change /etc/network/interfaces accordingly, then use ifdown and ifup to bring the changes into action. (Do not think of service networking restart or similar, these approaches will most likely NOT WORK PROPERLY!)

In general ip a is short for ip addr, ip l is short for ip link, ip r is short for ip route.

NOTE:
Everything done here will only be temporary!
All settings will be gone after the next reboot.
These steps are presented here in case you have to debug a failing install process (You actually can get to a console during a Debian install!) or when troubleshooting network problems.

The only exception is the DNS settings; these should stick.

If you need the settings to stick, just edit the config files.

overview up front

  1. add IP to interface
  2. take interface down
  3. take interface up
  4. add gateway route
  5. fix dns if needed

setup

Use these commands as you need them. Maybe you need to flush the interface first, maybe just remove a single IP, I do not know. Without further ado, these are the ones you most likely will need:

Check current settings:

ip a

To just test IPs on single NICs, add them to the interface:

ip a a 10.0.0.2/24 dev eth0

To remove a single address:

ip a d 10.0.0.2/24 dev eth0

To remove all addresses:

ip a f dev eth0

deactivating / activating the interface

This means basically deactivating and reactivating the interface in question.

ip l s eth0 down
ip l s eth0 up

Check what happened:

ip a

# also maybe helpful: ('no carrier'!)
ip l

If it says 'UP', the interface is in its desired state. If however it says 'NO-CARRIER', there is no network cable attached.

routing

Also a gateway is needed. Usually like this:

ip r add default via 10.0.0.1 dev eth0
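
Check the routing table afterwards; a line like default via 10.0.0.1 dev eth0 should now show up:

ip r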

dns

Now you should be set, but maybe you are missing DNS resolution, if no DNS servers were set prior to this. Verify this: ping 8.8.8.8 will work in that case, whereas ping google.com will not.

echo 'nameserver 8.8.8.8' >> /etc/resolv.conf

lessons learned

Instead of using /etc/network/interfaces, where the configuration from above would look like this:

auto eth0
iface eth0 inet static
    address 10.0.0.2
    netmask 255.255.255.0
    network 10.0.0.0
    broadcast 10.0.0.255
    gateway 10.0.0.1

and where you'd have to follow up the editing with an ifdown eth0, wait, ifup eth0, or use the already-or-about-to-be-deprecated ifconfig, you can do all of this purely on the shell:

ip a f eth0
ip l s eth0 down
ip l s eth0 up
ip a a 10.0.0.2/24 dev eth0
ip r a default via 10.0.0.1 dev eth0
echo 'nameserver 8.8.8.8' >> /etc/resolv.conf

which is in long form: (if you prefer to type more and need detailed explanations)

# delete all addresses from interface eth0
ip addr flush eth0

# deactivate eth0
ip link set eth0 down

# activate eth0
ip link set eth0 up

# set ip address and netmask in interface eth0
# netmask is done by specifying the /24 subnet (...)
ip addr add 10.0.0.2/24 dev eth0

# add the gateway
ip route add default via 10.0.0.1 dev eth0

# fix dns, if needed, as root
echo 'nameserver 8.8.8.8' >> /etc/resolv.conf

A list for all these abbreviations is due, it seems. And thou shalt reread this here again and again. Seriously.

vim fast execution bind
posted on 2014-09-07 12:32:37

To test quick throwaway scripts in vim try this: (linux-only, for windows this will work only via cygwin)

nnoremap <Leader>fr :w<CR>:!clear && ./%<CR>

Upon pressing the <leader>fr key combination (vim's leaderkey is \ by default), this happens:

  • the file you are currently editing will be saved
  • vim drops to a shell and clears the screen
  • the script gets executed

This works as long as the script has a shebang line and is executable (don't forget to chmod), since % is the placeholder for the currently opened file.
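
If you do not want to bother with the executable bit for throwaway scripts, a variant that hands the file to the interpreter directly works as well (the <Leader>fb combination is just an arbitrary pick):

nnoremap <Leader>fb :w<CR>:!clear && bash %<CR>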

The best things in life are for free. :)

Fix git mergeconflicts (with vim)
posted on 2014-09-06 13:11:07

If, during a pull (or rebase), git fails to finish its currently run command, this one-liner might be helpful:

vim `git diff --diff-filter=U --name-only`

It will open all files that git could not merge in vim.

Fix the first one, save, :n (next file), fix the next one, and so forth.

Once you are done, git add . them all, git commit, and start again where git left off. (The rebase or push or whatever.)
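
If this happens often, the one-liner can be wrapped into a git alias (the name fixconflicts is arbitrary):

git config --global alias.fixconflicts '!vim $(git diff --diff-filter=U --name-only)'

After that, a plain git fixconflicts opens all unmerged files.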

Bandit Walkthrough
posted on 2014-08-30 11:39:32

http://overthewire.org/wargames/bandit/ is quite some fun, in case you are a linux user. You might even learn a trick or two along the way.

prerequisites

It may be helpful to create an SSH host shortcut in ~/.ssh/config, or a DNS shortcut in /etc/hosts, for the bandit labs URL:

Host asdf
    Hostname bandit.labs.overthewire.org

That way you can access the server via ssh <username>@asdf.

solutions

These are the solutions I found so far:

level 0

Connect via ssh bandit0@asdf if you made the shortcut like above. Else use ssh bandit0@bandit.labs.overthewire.org.

bandit0@melinda:~$ ls
readme
bandit0@melinda:~$ cat readme 
boJ9jbbUNNfktd78OOpsqOltutMc3MY1

level 1

Now connect via ssh bandit1@asdf if you made the shortcut like above. Else use ssh bandit1@bandit.labs.overthewire.org.

bandit1@melinda:~$ ls
-
bandit1@melinda:~$ cat ./- 
CV1DtqXWVFXTvM2F0k09SHz0YwRINYA9

level 2

You should by now know which username to use to connect to the server for the next level... ;)

bandit2@melinda:~$ ls
spaces in this filename
bandit2@melinda:~$ cat spaces\ in\ this\ filename 
UmHadQclWmgdLOKQ3YNgjWxGoRMb5luK

Just use Tab for auto-completion in the shell and avoid typing...

level 3

bandit3@melinda:~$ cd inhere/
bandit3@melinda:~/inhere$ ls -a
.  ..  .hidden
bandit3@melinda:~/inhere$ cat .hidden 
pIwrPrtPN36QITSp3EQaw936yaFoFgAB

level 4

bandit4@melinda:~$ cd inhere/
bandit4@melinda:~/inhere$ ls
-file00  -file02  -file04  -file06  -file08
-file01  -file03  -file05  -file07  -file09
bandit4@melinda:~/inhere$ file ./*
./-file00: data
./-file01: data
./-file02: Non-ISO extended-ASCII text, with no line terminators
./-file03: data
./-file04: data
./-file05: data
./-file06: data
./-file07: ASCII text
./-file08: data
./-file09: Non-ISO extended-ASCII text
bandit4@melinda:~/inhere$ cat ./-file07
koReBOKuIDDepwhWk7jZC0RTdopnAYKh

level 5

bandit5@melinda:~$ find inhere/ -size 1033c \! -executable
inhere/maybehere07/.file2
bandit5@melinda:~$ cat inhere/maybehere07/.file2
DXjZPULLxYr17uwoI01bNLQbtFemEgo7
bandit5@melinda:~$ 

level 6

bandit6@melinda:~$ find / -user bandit7 -group bandit6 -size 33c 2>/dev/null
/var/lib/dpkg/info/bandit7.password
bandit6@melinda:~$ cat /var/lib/dpkg/info/bandit7.password 
HKBPTKQnIay4Fw76bEy8PVxKEDQRKTzs

level 7

bandit7@melinda:~$ grep millionth data.txt 
millionth   cvX2JJa4CFALtqS87jk27qwqGhBM9plV

level 8

bandit8@melinda:~$ cat data.txt | sort | uniq -u
UsvVyFSfZZWbi6wgC7dAFyFuR6jQQUhR

level 9

bandit9@melinda:~$ strings data.txt | grep ^=
========== the
=qy9g
========== is
=9-5
========== truKLdjsbJ5g7yyJ2X2R0o3a5HQJFuLk
bandit9@melinda:~$ strings data.txt | grep ==
========== the
,========== passwordc
========== is
========== truKLdjsbJ5g7yyJ2X2R0o3a5HQJFuLk

level 10

bandit10@melinda:~$ base64 -d data.txt 
The password is IFukwKGsFW8MOq3IRFqrxE1hxTNEbUPR

level 11

bandit11@melinda:~$ cat data.txt | tr 'A-Za-z' 'N-ZA-Mn-za-m'          
The password is 5Te8Y4drgCRfCx8ugdwuEX8KFC6k2EUu

level 12

This is a longer one... I inserted extra newlines for better readability this time.

bandit12@melinda:~$ l
data.txt

bandit12@melinda:~$ mkdir /tmp/sjas/ && cp data.txt /tmp/sjas

bandit12@melinda:~$ cd /tmp/sjas

bandit12@melinda:/tmp/sjas$ l
data.txt

bandit12@melinda:/tmp/sjas$ cat data.txt 
0000000: 1f8b 0808 d095 b051 0203 6461 7461 322e  .......Q..data2.
0000010: 6269 6e00 013a 02c5 fd42 5a68 3931 4159  bin..:...BZh91AY
0000020: 2653 5915 d9db 2800 0017 7fff ff5d f6ea  &SY...(......]..
0000030: e98b bff6 ff7f ffbf fce3 f7fa a3fb badb  ................
0000040: f3e9 f873 b7ff fcff cffb 7bff b001 3b35  ...s......{...;5
0000050: b080 d000 0000 0000 1ea0 f534 3400 0d00  ...........44...
0000060: d1a1 a1a1 a006 8680 0006 9ea0 6868 68f4  ............hhh.
0000070: 81b5 0d34 d0c2 0d0d 3d13 47a4 cd44 01a1  ...4....=.G..D..
0000080: a007 a801 a000 d1a0 d00d 0034 0640 1ea3  ...........4.@..
0000090: 4c99 0000 d034 d1b5 3201 a0d1 a06d 4003  L....4..2....m@.
00000a0: d403 351a 00f4 2347 a801 9348 1a7a 8034  ..5...#G...H.z.4
00000b0: d340 0000 0006 690d 0000 0340 0d3d 46d1  .@....i....@.=F.
00000c0: 341a 7a86 8190 1a1a 1a34 347a 8d00 001a  4.z......44z....
00000d0: 6468 d006 8001 0403 0081 e752 1ca1 324a  dh.........R..2J
00000e0: 2d8d 2082 b927 606a 8dc4 4407 d0eb 1428  -. ..'`j..D....(
00000f0: 8782 7c75 29f4 19d4 3b6a 1f7e 147f 5636  ..|u)...;j.~..V6
0000100: 0183 4dbf 9a5d 968c 7340 d299 dd22 3024  ..M..]..s@..."0$
0000110: 8ecc 1ffe 92b3 101b ca86 20bd 47f2 7958  .......... .G.yX
0000120: 7d40 d62a 1dc8 8697 d109 66ae 1549 39df  }@.*......f..I9.
0000130: 95e2 2dad 4990 b250 9a0b f842 0ade e4fb  ..-.I..P...B....
0000140: 2717 ba73 0a60 9048 c4db 851b db3c 0e4d  '..s.`.H.....<.M
0000150: 9d04 a542 3d98 a411 65b8 116f 0710 19e3  ...B=...e..o....
0000160: 210a 11d4 b9bc 5227 c02e f8ac fab6 f541  !.....R'.......A
0000170: f934 9619 a951 6654 8482 4fd2 9ce7 af09  .4...QfT..O.....
0000180: 0ed5 e29c 3482 e515 3882 07b5 8a2b 02e7  ....4...8....+..
0000190: 5357 2cd5 c071 3d10 546c d9e2 aa49 a75c  SW,..q=.Tl...I.\
00001a0: 2ada f467 469d 4464 c20e f8f0 17d3 271d  *..gF.Dd......'.
00001b0: e3c6 ac3a 9f96 d17f 897c 04bf c445 d6bc  ...:.....|...E..
00001c0: a706 16b0 34bf 2f1b 3419 9eea 5d5a f7c0  ....4./.4...]Z..
00001d0: 1ce4 5477 832b 2258 6b29 55ec 2155 2e66  ..Tw.+"Xk)U.!U.f
00001e0: 2ad1 81d1 edd0 22fe 0f6c 9172 b0d2 3b93  *....."..l.r..;.
00001f0: 42b3 079e 8013 c6ef 1425 82fe a53b 1898  B........%...;..
0000200: c9b5 2111 5c53 eb19 6142 a8b6 480a a8eb  ..!.\S..aB..H...
0000210: 439e b18f 9269 890e dcec da54 614c 4eba  C....i.....TaLN.
0000220: fe8c 5c10 6586 1321 680b 9896 fdee b1d5  ..\.e..!h.......
0000230: 8e68 d49a 11d4 868d 7e82 3238 4e13 dd44  .h......~.28N..D
0000240: 9ad4 0081 b138 f17f e2ee 48a7 0a12 02bb  .....8....H.....
0000250: 3b65 0018 d921 743a 0200 00              ;e...!t:...

bandit12@melinda:/tmp/sjas$ file data.txt 
data.txt: ASCII text

bandit12@melinda:/tmp/sjas$ xxd -r data.txt data1

bandit12@melinda:/tmp/sjas$ file data1
data1: gzip compressed data, was "data2.bin", from Unix, last modified: Thu Jun  6 13:59:44 2013, max compression

bandit12@melinda:/tmp/sjas$ mv data1 data1.gz

bandit12@melinda:/tmp/sjas$ gzip -d data1.gz 

bandit12@melinda:/tmp/sjas$ l
data.txt  data1

bandit12@melinda:/tmp/sjas$ mv data1 data2.bin

bandit12@melinda:/tmp/sjas$ file data2.bin 
data2.bin: bzip2 compressed data, block size = 900k

bandit12@melinda:/tmp/sjas$ bzip2 -d data2.bin
bzip2: Can't guess original name for data2.bin -- using data2.bin.out

bandit12@melinda:/tmp/sjas$ l
data.txt  data2.bin.out

bandit12@melinda:/tmp/sjas$ file data2.bin.out 
data2.bin.out: gzip compressed data, was "data4.bin", from Unix, last modified: Thu Jun  6 13:59:43 2013, max compression

bandit12@melinda:/tmp/sjas$ gzip -d -S .out data2.bin.out 

bandit12@melinda:/tmp/sjas$ l
data.txt  data2.bin

bandit12@melinda:/tmp/sjas$ file data2.bin. 
data2.bin.: POSIX tar archive (GNU)

bandit12@melinda:/tmp/sjas$ tar -xvf data2.bin 
data5.bin

bandit12@melinda:/tmp/sjas$ l
data.txt  data2.bin  data5.bin

bandit12@melinda:/tmp/sjas$ file data5.bin 
data5.bin: POSIX tar archive (GNU)

bandit12@melinda:/tmp/sjas$ tar -xvf data5.bin 
data6.bin

bandit12@melinda:/tmp/sjas$ file data6.bin 
data6.bin: bzip2 compressed data, block size = 900k

bandit12@melinda:/tmp/sjas$ bzip2 -d data6.bin
bzip2: Can't guess original name for data6.bin -- using data6.bin.out

bandit12@melinda:/tmp/sjas$ file data6.bin.out     
data6.bin.out: POSIX tar archive (GNU)

bandit12@melinda:/tmp/sjas$ tar xf data6.bin.out 

bandit12@melinda:/tmp/sjas$ l
data.txt  data2.bin  data5.bin  data6.bin.out  data8.bin

bandit12@melinda:/tmp/sjas$ file data8.bin 
data8.bin: gzip compressed data, was "data9.bin", from Unix, last modified: Thu Jun  6 13:59:43 2013, max compression

bandit12@melinda:/tmp/sjas$ gzip -d -S .bin data8.bin

bandit12@melinda:/tmp/sjas$ l
data.txt  data2.bin  data5.bin  data6.bin.out  data8

bandit12@melinda:/tmp/sjas$ file data8 
data8: ASCII text

bandit12@melinda:/tmp/sjas$ cat data8 
The password is 8ZjyCRiBWFYkneahHwxCv3wb2a1ORpYL

Finally it's over...

level 13

This time all the console output is shown:

I connect to host asdf as I made the aforementioned shortcut in ~/.ssh/config.

[sjas@beckett /tmp]% ssh bandit13@asdf                                         

This is the OverTheWire game server. More information on http://www.overthewire.org/wargames

Please note that wargame usernames are no longer level<X>, but wargamename<X>
e.g. vortex4, semtex2, ...

Note: at this moment, blacksun and drifter are not available.

bandit13@bandit.labs.overthewire.org's password: 
Welcome to Ubuntu 12.04.5 LTS (GNU/Linux 3.15.4-x86_64-linode45 x86_64)

 * Documentation:  https://help.ubuntu.com/

Welcome to the OverTheWire games machine !

Please read /README.txt for more information on how to play the levels
on this gameserver.

  System information disabled due to load higher than 8.0

11 packages can be updated.
8 updates are security updates.

New release '14.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.



*** System restart required ***

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

bandit13@melinda:~$ l
sshkey.private
bandit13@melinda:~$ logout
Connection to bandit.labs.overthewire.org closed.
[sjas@beckett /tmp]% scp bandit13@asdf:sshkey.private .                        

This is the OverTheWire game server. More information on http://www.overthewire.org/wargames

Please note that wargame usernames are no longer level<X>, but wargamename<X>
e.g. vortex4, semtex2, ...

Note: at this moment, blacksun and drifter are not available.

bandit13@bandit.labs.overthewire.org's password: 
sshkey.private                                100% 1679     1.6KB/s   00:00    
[sjas@beckett /tmp]%

All there is to do here is download the SSH private key to your local machine. I moved it to /tmp since I will not need it anymore after the levels. If this were different, I'd have placed it in my ~/.ssh folder.

level 14

Now the private key from level 13 is to be put to use. The next levels are to be solved on the server, connected as bandit14, anyway.

To pass the keyfile to ssh, use -i, so it does not use ~/.ssh/id_rsa or ~/.ssh/id_dsa.

Also the complete output is shown here.

[sjas@beckett /tmp]% ssh -i /tmp/sshkey.private bandit14@asdf                  

This is the OverTheWire game server. More information on http://www.overthewire.org/wargames

Please note that wargame usernames are no longer level<X>, but wargamename<X>
e.g. vortex4, semtex2, ...

Note: at this moment, blacksun and drifter are not available.

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0640 for '/tmp/sshkey.private' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: /tmp/sshkey.private
bandit14@bandit.labs.overthewire.org's password: 

[sjas@beckett /tmp]% chmod 600 /tmp/sshkey.private                             
[sjas@beckett /tmp]% ssh -i /tmp/sshkey.private bandit14@asdf                  

This is the OverTheWire game server. More information on http://www.overthewire.org/wargames

Please note that wargame usernames are no longer level<X>, but wargamename<X>
e.g. vortex4, semtex2, ...

Note: at this moment, blacksun and drifter are not available.

Welcome to Ubuntu 12.04.5 LTS (GNU/Linux 3.15.4-x86_64-linode45 x86_64)

 * Documentation:  https://help.ubuntu.com/

Welcome to the OverTheWire games machine !

Please read /README.txt for more information on how to play the levels
on this gameserver.

System information disabled due to load higher than 8.0

11 packages can be updated.
8 updates are security updates.

New release '14.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.



*** System restart required ***

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

bandit14@melinda:~$ cat /etc/bandit_pass/bandit14
4wcYUJFw0k0XLShlDzztnTBHiqxU3b3e

chmod is used to fix the permissions, which were off. ssh expects read/write access to the key to be possible for the owner only, with no rights for anyone else.

level 15

localhost? On my own computer?

[sjas@beckett /tmp]% telnet localhost 30000                                    
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused

Not so. What about the remote server?

[sjas@beckett /tmp]% telnet bandit.labs.overthewire.org 9000                   
Trying 178.79.134.250...
^C

Won't work towards the remote server from outside either, most likely due to firewall rules. So Ctrl-C to end the connection attempt and retry directly on the server:

[sjas@beckett /tmp]% ssh -i /tmp/sshkey.private bandit14@asdf                   
.
.
.

bandit14@melinda:~$ telnet localhost 30000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
4wcYUJFw0k0XLShlDzztnTBHiqxU3b3e
Correct!
BfMYroe26WYalil77FoDi9qh59eK5xNr

Connection closed by foreign host.
bandit14@melinda:~$ 

nc (netcat) works, too:

bandit14@melinda:~$ nc localhost 30000
4wcYUJFw0k0XLShlDzztnTBHiqxU3b3e
Correct!
BfMYroe26WYalil77FoDi9qh59eK5xNr

bandit14@melinda:~$

level 16

First try:

bandit14@melinda:~$ openssl s_client -connect localhost:30001
CONNECTED(00000003)
depth=0 CN = localhost
verify error:num=18:self signed certificate
verify return:1
depth=0 CN = localhost
verify return:1
---
Certificate chain
 0 s:/CN=localhost
   i:/CN=localhost
---
Server certificate
-----BEGIN CERTIFICATE-----
MIICpDCCAYwCCQDDVcPYicjxQjANBgkqhkiG9w0BAQUFADAUMRIwEAYDVQQDEwls
b2NhbGhvc3QwHhcNMTMwNjA2MTM1ODM3WhcNMjMwNjA0MTM1ODM3WjAUMRIwEAYD
VQQDEwlsb2NhbGhvc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDF
Id/8XRfPayS8u+XfqkXQq3QawTEmxcxQw4TiLKlnqLqsl02U3dUIBqSJu1LQE8lf
/eez2KF9Nse9cXqty6M6J9/515b+xaLfkAV1q97JwO1BfciJDiWqjPerMcikUb0d
zTnDFSpWpnrTjLqlquSWrgRxMffmGi+6x7lsWp8EOAIrFhxadYgTskVUslsgtwBP
dVtDG3OZSa8uyD6LJMCgMpaIs0nI1AS8yWWRnBPP6tujJV9x5JI8GYuC1OB9hsYT
+J7ZMkQ5soOXi9TIuyQSfavH27z44uMF3g4fB6i9l2KNFeKkuq1JVM+HbrXfSlf3
+Gtm/+hO/jlKzGw+pmobAgMBAAEwDQYJKoZIhvcNAQEFBQADggEBAF0ah9QUPxRM
cCNaZ2Bb+IkBXbj1esk92Hv7H4uYIionJl2f+8M6/YgGNhBI4C1r82Hwbi9DwVYs
kztth6DQIBw3KnNLyoKfYmEz6+Azko5rIefeJoHEBdD41tMqmBd8jWi5+hUbkeLr
8A0wzwi/mVQP4xBZ5cELDEaC2MFCi20X4APFFYeXEMjEYwTwADdADiQ52ezle0zr
iSZ10PUmxqjE4NCYo8feZ7emRcAozZElyF9JjoHfXSSiEViELPU2Wb9ygljYz2Hy
AGDcpE2FIGZRiHk+PT2tHD92bp4BeyZsh52pYvItJie3owdIQoQASfperclRvcyn
mNrjHPUubAc=
-----END CERTIFICATE-----
subject=/CN=localhost
issuer=/CN=localhost
---
No client certificate CA names sent
---
SSL handshake has read 1272 bytes and written 363 bytes
---
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : SSLv3
    Cipher    : DHE-RSA-AES256-SHA
    Session-ID: 162F69EE481BEE8FF1AC7CCBA304284F7C7A6AF9C35743D0272D285514D8226D
    Session-ID-ctx: 
    Master-Key: 6CDD3BC45858C00FF59DD8E0C872AC96769EAC33574BD56590AA0B0A32C1DA309F32FF8C38B10AB6AFD58D44AF3CB767
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1409415184
    Timeout   : 300 (sec)
    Verify return code: 18 (self signed certificate)
---
BfMYroe26WYalil77FoDi9qh59eK5xNr
HEARTBEATING
read R BLOCK
read:errno=0
bandit14@melinda:~$

OK, let's give the manpage a try? (man s_client)

CONNECTED COMMANDS
       If a connection is established with an SSL server then any data
       received from the server is displayed and any key presses will be sent
       to the server. When used interactively (which means neither -quiet nor
       -ign_eof have been given), the session will be renegotiated if the line
       begins with an R, and if the line begins with a Q or if end of file is
       reached, the connection will be closed down.

Awww, let's just try -quiet...

bandit14@melinda:~$ openssl s_client -connect localhost:30001 -quiet
depth=0 CN = localhost
verify error:num=18:self signed certificate
verify return:1
depth=0 CN = localhost
verify return:1
BfMYroe26WYalil77FoDi9qh59eK5xNr
Correct!
cluFn7wTiGryunymYOu4RcffSxQluehd

read:errno=0
bandit14@melinda:~$

:D

level 17

First let's do a simple portscan:

bandit14@melinda:~$ nmap -PN localhost

Starting Nmap 5.21 ( http://nmap.org ) at 2014-08-30 16:28 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.0011s latency).
Not shown: 994 closed ports
PORT      STATE SERVICE
22/tcp    open  ssh
80/tcp    open  http
113/tcp   open  auth
443/tcp   open  https
3306/tcp  open  mysql
30000/tcp open  unknown

Nmap done: 1 IP address (1 host up) scanned in 0.30 seconds

Bummer. But that's because nmap by default only scans the 1000 most common ports, and the one we are looking for is not among them. See:

bandit14@melinda:~$ nmap -p 30000-32000 localhost

Starting Nmap 5.21 ( http://nmap.org ) at 2014-08-30 16:41 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00086s latency).
Not shown: 1994 closed ports
PORT      STATE SERVICE
30000/tcp open  unknown
30001/tcp open  unknown
31046/tcp open  unknown
31518/tcp open  unknown
31691/tcp open  unknown
31790/tcp open  unknown
31960/tcp open  unknown

Nmap done: 1 IP address (1 host up) scanned in 0.54 seconds
bandit14@melinda:~$

30001 wasn't shown in the first scan.

Since we know the port is between 31000 and 32000, it's one of these:

  • 31046
  • 31518
  • 31691
  • 31790
  • 31960

This could be tried programmatically by nmap, too, but I am an amateur, so I will try them by hand. It's just these five ports.

bandit14@melinda:~$ openssl s_client -quiet -connect localhost:31046
140737354065568:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:s23_clnt.c:749:


bandit14@melinda:~$ openssl s_client -quiet -connect localhost:31518
depth=0 CN = localhost
verify error:num=18:self signed certificate
verify return:1
depth=0 CN = localhost
verify return:1
cluFn7wTiGryunymYOu4RcffSxQluehd
cluFn7wTiGryunymYOu4RcffSxQluehd
^C


bandit14@melinda:~$ openssl s_client -quiet -connect localhost:31691 
140737354065568:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:s23_clnt.c:749:
bandit14@melinda:~$ openssl s_client -quiet -connect localhost:31790
depth=0 CN = localhost
verify error:num=18:self signed certificate
verify return:1
depth=0 CN = localhost
verify return:1
cluFn7wTiGryunymYOu4RcffSxQluehd
Correct!
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAvmOkuifmMg6HL2YPIOjon6iWfbp7c3jx34YkYWqUH57SUdyJ
imZzeyGC0gtZPGujUSxiJSWI/oTqexh+cAMTSMlOJf7+BrJObArnxd9Y7YT2bRPQ
Ja6Lzb558YW3FZl87ORiO+rW4LCDCNd2lUvLE/GL2GWyuKN0K5iCd5TbtJzEkQTu
DSt2mcNn4rhAL+JFr56o4T6z8WWAW18BR6yGrMq7Q/kALHYW3OekePQAzL0VUYbW
JGTi65CxbCnzc/w4+mqQyvmzpWtMAzJTzAzQxNbkR2MBGySxDLrjg0LWN6sK7wNX
x0YVztz/zbIkPjfkU1jHS+9EbVNj+D1XFOJuaQIDAQABAoIBABagpxpM1aoLWfvD
KHcj10nqcoBc4oE11aFYQwik7xfW+24pRNuDE6SFthOar69jp5RlLwD1NhPx3iBl
J9nOM8OJ0VToum43UOS8YxF8WwhXriYGnc1sskbwpXOUDc9uX4+UESzH22P29ovd
d8WErY0gPxun8pbJLmxkAtWNhpMvfe0050vk9TL5wqbu9AlbssgTcCXkMQnPw9nC
YNN6DDP2lbcBrvgT9YCNL6C+ZKufD52yOQ9qOkwFTEQpjtF4uNtJom+asvlpmS8A
vLY9r60wYSvmZhNqBUrj7lyCtXMIu1kkd4w7F77k+DjHoAXyxcUp1DGL51sOmama
+TOWWgECgYEA8JtPxP0GRJ+IQkX262jM3dEIkza8ky5moIwUqYdsx0NxHgRRhORT
8c8hAuRBb2G82so8vUHk/fur85OEfc9TncnCY2crpoqsghifKLxrLgtT+qDpfZnx
SatLdt8GfQ85yA7hnWWJ2MxF3NaeSDm75Lsm+tBbAiyc9P2jGRNtMSkCgYEAypHd
HCctNi/FwjulhttFx/rHYKhLidZDFYeiE/v45bN4yFm8x7R/b0iE7KaszX+Exdvt
SghaTdcG0Knyw1bpJVyusavPzpaJMjdJ6tcFhVAbAjm7enCIvGCSx+X3l5SiWg0A
R57hJglezIiVjv3aGwHwvlZvtszK6zV6oXFAu0ECgYAbjo46T4hyP5tJi93V5HDi
Ttiek7xRVxUl+iU7rWkGAXFpMLFteQEsRr7PJ/lemmEY5eTDAFMLy9FL2m9oQWCg
R8VdwSk8r9FGLS+9aKcV5PI/WEKlwgXinB3OhYimtiG2Cg5JCqIZFHxD6MjEGOiu
L8ktHMPvodBwNsSBULpG0QKBgBAplTfC1HOnWiMGOU3KPwYWt0O6CdTkmJOmL8Ni
blh9elyZ9FsGxsgtRBXRsqXuz7wtsQAgLHxbdLq/ZJQ7YfzOKU4ZxEnabvXnvWkU
YOdjHdSOoKvDQNWu6ucyLRAWFuISeXw9a/9p7ftpxm0TSgyvmfLF2MIAEwyzRqaM
77pBAoGAMmjmIJdjp+Ez8duyn3ieo36yrttF5NSsJLAbxFpdlc1gvtGCWW+9Cq0b
dxviW8+TFVEBl1O4f7HVm6EpTscdDxU+bCXWkfjuRb7Dy9GOtt9JPsX8MBTakzh3
vBgsyi/sN3RqRBcGU40fOoZyfAMT8s1m/uYv52O6IgeuZ/ujbjY=
-----END RSA PRIVATE KEY-----

read:errno=0
bandit14@melinda:~$

Well, this looks like an SSH private key? :)

For fun the last port:

bandit14@melinda:~$ openssl s_client -quiet -connect localhost:31960
140737354065568:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:s23_clnt.c:749:
bandit14@melinda:~$

By the way:
the -PN flag used in the first scan skips the host discovery stage. Use it if you know for sure that the host is up.

level 18

The description was a bit off, since there was no password. 'Just' a private key.

Anyway, copy the content of the private key and put it into a new key file.

I created a new file in /tmp/newkey. Opened it in my editor of choice (vim), pasted everything between the delimiters into it:

-----BEGIN RSA PRIVATE KEY----- 

    and the garbage between 
         included, too

-----END RSA PRIVATE KEY-----

... and save it.

If you have a microsoft-based operating system and fuck up the line endings due to copy-paste, you're to blame (CRLF instead of just LF).

If all was done accordingly, it will work as can be seen here. Of course, I forgot chmod 600 on the keyfile once again.

[sjas@beckett /tmp]% ssh -i /tmp/newkey bandit17@asdf                          

This is the OverTheWire game server. More information on http://www.overthewire.org/wargames

Please note that wargame usernames are no longer level<X>, but wargamename<X>
e.g. vortex4, semtex2, ...

Note: at this moment, blacksun and drifter are not available.

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0664 for '/tmp/newkey' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: /tmp/newkey
bandit17@bandit.labs.overthewire.org's password: 

[sjas@beckett /tmp]% chmod 600 newkey                                          
[sjas@beckett /tmp]% ssh -i /tmp/newkey bandit17@asdf

.
. (this time I omitted the servers welcome message........)
.

bandit17@melinda:~$ 

After having this out of the way, there are options to solve this:

diff wasn't mentioned, but is the easiest by far:

bandit17@melinda:~$ diff passwords.old passwords.new 
42c42
< PRjrhDcANrVM6em57fPnFp4Tcq8gvwzK
---
> kfBf3eYk5BPBRzwjqutbbfE887SVc5Yd

It shows the differences between two files.

42c42 means that line 42 in the first file changed into line 42 in the second file, which thus is the new password.

Another solution:

bandit17@melinda:~$ TEMP=`cat passwords.new`; for i in $TEMP; do grep $i passwords.old > /dev/null; if [ $? -ne 0 ]; then echo $i; fi; done
kfBf3eYk5BPBRzwjqutbbfE887SVc5Yd

Explaining this, because there's helpful stuff in there:

 1      TEMP=`cat passwords.new`
 2      for i in $TEMP
 3      do 
 4          grep $i passwords.old > /dev/null
 5          if [ $? -ne 0 ]
 6          then 
 7              echo $i
 8          fi
 9      done
  1. assign the new variable TEMP the contents of 'passwords.new'
  2. for loop, index i, running through the contents of TEMP
  3. start of the for loop body
  4. grep for finding matches: exit code 0 if a match was found, exit code 1 if not. The grep output is streamed to /dev/null, because the output itself is of no importance to us.
  5. if clause, checking whether no match was found. $? returns the return code of the last command that was run. -ne is 'not equal'.
  6. start of the if body
  7. echo the currently tested string, which is our winner
  8. end of the if body
  9. end of the for loop body

level 19

Execute cat from your local machine so it runs on the server and returns the result. To do so, simply append the command you want to run to the ssh call.

[sjas@beckett /tmp]% ssh bandit18@asdf cat readme                              

This is the OverTheWire game server. More information on http://www.overthewire.org/wargames

Please note that wargame usernames are no longer level<X>, but wargamename<X>
e.g. vortex4, semtex2, ...

Note: at this moment, blacksun and drifter are not available.

bandit18@bandit.labs.overthewire.org's password: 
IueksS7Ubh8G3DCwVzrTd8rAVOwq3M5x

level 20

bandit19@melinda:~$ ll /etc/bandit_pass/
total 108
drwxr-xr-x   2 root     root     4096 Jun 27  2013 ./
drwxr-xr-x 109 root     root     4096 Aug 30 10:57 ../
-r--------   1 bandit0  bandit0     8 Jun  6  2013 bandit0
-r--------   1 bandit1  bandit1    33 Jun  6  2013 bandit1
-r--------   1 bandit10 bandit10   33 Jun  6  2013 bandit10
-r--------   1 bandit11 bandit11   33 Jun  6  2013 bandit11
-r--------   1 bandit12 bandit12   33 Jun  6  2013 bandit12
-r--------   1 bandit13 bandit13   33 Jun  6  2013 bandit13
-r--------   1 bandit14 bandit14   33 Jun  6  2013 bandit14
-r--------   1 bandit15 bandit15   33 Jun 27  2013 bandit15
-r--------   1 bandit16 bandit16   33 Jun  6  2013 bandit16
-r--------   1 bandit17 bandit17   33 Jun  6  2013 bandit17
-r--------   1 bandit18 bandit18   33 Jun  6  2013 bandit18
-r--------   1 bandit19 bandit19   33 Jun  6  2013 bandit19
-r--------   1 bandit2  bandit2    33 Jun  6  2013 bandit2
-r--------   1 bandit20 bandit20   33 Jun  6  2013 bandit20
-r--------   1 bandit21 bandit21   33 Jun  6  2013 bandit21
-r--------   1 bandit22 bandit22   33 Jun  6  2013 bandit22
-r--------   1 bandit23 bandit23   33 Jun  6  2013 bandit23
-r--------   1 bandit24 bandit24   33 Jun  6  2013 bandit24
-r--------   1 bandit3  bandit3    33 Jun  6  2013 bandit3
-r--------   1 bandit4  bandit4    33 Jun  6  2013 bandit4
-r--------   1 bandit5  bandit5    33 Jun  6  2013 bandit5
-r--------   1 bandit6  bandit6    33 Jun  6  2013 bandit6
-r--------   1 bandit7  bandit7    33 Jun  6  2013 bandit7
-r--------   1 bandit8  bandit8    33 Jun  6  2013 bandit8
-r--------   1 bandit9  bandit9    33 Jun  6  2013 bandit9
bandit19@melinda:~$

Since we need the pass for bandit20...

bandit19@melinda:~$ ls -ln /etc/bandit_pass/bandit20 
-r-------- 1 11020 11020 33 Jun  6  2013 /etc/bandit_pass/bandit20

but really only bandit20 can read it.

So what's up with the binary?

bandit19@melinda:~$ whoami
bandit19
bandit19@melinda:~$ id
uid=11019(bandit19) gid=11019(bandit19) groups=11019(bandit19)
bandit19@melinda:~$ ./bandit20-do whoami
bandit20
bandit19@melinda:~$ ./bandit20-do id    
uid=11019(bandit19) gid=11019(bandit19) euid=11020(bandit20) groups=11020(bandit20),11019(bandit19)

Niiice. And so...

bandit19@melinda:~$ ./bandit20-do cat /etc/bandit_pass/bandit20 
GbKksEFF4yrVs6il55v6gwY5aVje5f0j

level 21

For this one, you have to open two shells, with which you connect to the server:

On the first one, open a netcat server on a free port. (ss -a, to look up which ports are in use, nc -l <port> to run it.)

On the second shell, connect with the SUID binary to the netcatserver. (./suconnect <port used before with netcat>)

Once connected, send the password from the last level from the server (via first shell).

FIRST SHELL: (there arrives the new pw!)

bandit20@melinda:~$ nc -l 54545
GbKksEFF4yrVs6il55v6gwY5aVje5f0j
gE269g2h3mw3pwgrj0Ha9Uoqen1c9DGr
bandit20@melinda:~$

SECOND SHELL:

bandit20@melinda:~$ ./suconnect 54545
Read: GbKksEFF4yrVs6il55v6gwY5aVje5f0j
Password matches, sending next password
bandit20@melinda:~$ 

Et voila.

level 22

First, let's see what cronjobs are defined in /etc/cron.d. For better readability, I print each filename in yellow.

bandit21@melinda:/etc/cron.d$ for i in *; do echo $'\e[1;33m'$i$'\e[0m'; cat $i; done
boobiesbot-check
@reboot root /vulnbot/launchbot.sh start boobiesbot
cron-apt
#
# Regular cron jobs for the cron-apt package
#
# Every night at 4 o'clock.
0 4 * * *   root    test -x /usr/sbin/cron-apt && /usr/sbin/cron-apt
# Every hour.
# 0 *   * * *   root    test -x /usr/sbin/cron-apt && /usr/sbin/cron-apt /etc/cron-apt/config2
# Every five minutes.
# */5 * * * *   root    test -x /usr/sbin/cron-apt && /usr/sbin/cron-apt /etc/cron-apt/config2
cronjob_bandit22
* * * * * bandit22 /usr/bin/cronjob_bandit22.sh &> /dev/null
cronjob_bandit23
* * * * * bandit23 /usr/bin/cronjob_bandit23.sh  &> /dev/null
cronjob_bandit24
* * * * * bandit24 /usr/bin/cronjob_bandit24.sh &> /dev/null
eloi0
@reboot eloi0 /eloi/eloi0/eloi0.sh
eloi1
@reboot eloi1 /eloi/eloi1/eloi1.sh
hintbot-check
@reboot root /vulnbot/launchbot.sh start hintbot
manpage3_resetpw_job
cat: manpage3_resetpw_job: Permission denied
melinda-stats
*/30 * * * * root /root/scripts/melinda-cronjob.sh
natas-session-toucher
* * * * * root /root/scripts/natas-session-toucher.sh
natas-stats
*/30 * * * * root /root/scripts/natas-cronjob.sh
natas25_cleanup
cat: natas25_cleanup: Permission denied
natas26_cleanup
cat: natas26_cleanup: Permission denied
php5
# /etc/cron.d/php5: crontab fragment for php5
#  This purges session files older than X, where X is defined in seconds
#  as the largest value of session.gc_maxlifetime from all your php.ini
#  files, or 24 minutes if not defined.  See /usr/lib/php5/maxlifetime

# Look for and purge old sessions every 30 minutes
09,39 *     * * *     root   [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete
semtex0-32
@reboot root /semtex/semtex0 24000 /semtex/semtex0.data32
semtex0-64
@reboot root /semtex/semtex0 24001 /semtex/semtex0.data64
semtex0-ppc
@reboot root /semtex/semtex0 24002 /semtex/semtex0.datappc
semtex10
@reboot root /semtex/semtex10 24019
semtex12
@reboot root /semtex/semtex12.authd 24012 /semtex/semtex12.data/password
@reboot root /semtex/semtex12.reader 24013 /semtex/semtex12.data/dir/
semtex5
@reboot root /semtex/semtex5 24027
semtex6
@reboot root /semtex/semtex6
semtex8
@reboot root /semtex/semtex8 /semtex/semtex8.data/semtex8.jpg /semtex/semtex8.data/semtex8.sock
semtex9
@reboot root /semtex/semtex9.fshell /semtex/semtex9.data/fakeshell
@reboot root /semtex/semtex9.i2t -f /semtex/semtex9.data/fakeshell
sysstat
# The first element of the path is a directory where the debian-sa1
# script is located
PATH=/usr/lib/sysstat:/usr/sbin:/usr/sbin:/usr/bin:/sbin:/bin

# Activity reports every 10 minutes everyday
5-55/10 * * * * root command -v debian-sa1 > /dev/null && debian-sa1 1 1

# Additional run at 23:59 to rotate the statistics file
59 23 * * * root command -v debian-sa1 > /dev/null && debian-sa1 60 2
vortex0
@reboot root /vortex/vortex0
vortex20
@reboot root /vortex/vortex20
vulnbot0-check
# @reboot root /vulnbot/launchbot.sh start vulnbot0
vulnbot1-check
# @reboot root /vulnbot/launchbot.sh start vulnbot1
bandit21@melinda:/etc/cron.d$ 

Looks like cronjob_bandit22 is the way to go.

bandit21@melinda:/etc/cron.d$ cat cronjob_bandit22 
* * * * * bandit22 /usr/bin/cronjob_bandit22.sh &> /dev/null

Now we know which script gets executed every minute. (* * * * *)

bandit21@melinda:/etc/cron.d$ cat /usr/bin/cronjob_bandit22.sh 
#!/bin/bash
chmod 644 /tmp/t7O6lds9S0RqQh9aMcz6ShpAoZKF7fgv
cat /etc/bandit_pass/bandit22 > /tmp/t7O6lds9S0RqQh9aMcz6ShpAoZKF7fgv

Well, let's check the permissions and the contents of the files referenced in the script...

bandit21@melinda:/etc/cron.d$ ll /etc/bandit_pass/bandit22
-r-------- 1 bandit22 bandit22 33 Jun  6  2013 /etc/bandit_pass/bandit22

No luck, no read access for us.

bandit21@melinda:/etc/cron.d$ ll /tmp/t7O6lds9S0RqQh9aMcz6ShpAoZKF7fgv
-rw-r--r-- 1 bandit22 bandit22 33 Aug 30 19:32 /tmp/t7O6lds9S0RqQh9aMcz6ShpAoZKF7fgv

But here...

bandit21@melinda:/etc/cron.d$ cat /tmp/t7O6lds9S0RqQh9aMcz6ShpAoZKF7fgv
Yk7owGAcWjwMVRwrTesJEwB7WVOiILLI

level 23

bandit22@melinda:~$ cd /etc/cron.d
bandit22@melinda:/etc/cron.d$ ll
total 128
drwxr-xr-x   2 root root 4096 Jul 22 13:40 ./
drwxr-xr-x 109 root root 4096 Aug 30 10:57 ../
-rw-r--r--   1 root root  102 Apr  2  2012 .placeholder
-rw-r--r--   1 root root   52 Oct 22  2013 boobiesbot-check
-rw-r--r--   1 root root  355 Nov 18  2011 cron-apt
-rw-r--r--   1 root root   61 Jun  6  2013 cronjob_bandit22
-rw-r--r--   1 root root   62 Jun  6  2013 cronjob_bandit23
-rw-r--r--   1 root root   61 Jun  6  2013 cronjob_bandit24
-rw-r--r--   1 root root   35 Jun  6  2013 eloi0
-rw-r--r--   1 root root   35 Jun  6  2013 eloi1
-rw-r--r--   1 root root   49 Jul  3 14:13 hintbot-check
-rw-------   1 root root  233 Jun  6  2013 manpage3_resetpw_job
-rw-r--r--   1 root root   51 Jul 12 15:57 melinda-stats
-rw-r--r--   1 root root   54 Sep 30  2013 natas-session-toucher
-rw-r--r--   1 root root   49 Sep 30  2013 natas-stats
-r--r-----   1 root root   47 Sep 30  2013 natas25_cleanup
-r--r-----   1 root root   45 Jul 22 13:40 natas26_cleanup
-rw-r--r--   1 root root  544 Mar 11  2013 php5
-rw-r--r--   1 root root   58 Jun  6  2013 semtex0-32
-rw-r--r--   1 root root   58 Jun  6  2013 semtex0-64
-rw-r--r--   1 root root   59 Jun  6  2013 semtex0-ppc
-rw-r--r--   1 root root   36 Jun  6  2013 semtex10
-rw-r--r--   1 root root  143 Jun  6  2013 semtex12
-rw-r--r--   1 root root   35 Jun  6  2013 semtex5
-rw-r--r--   1 root root   29 Jun  6  2013 semtex6
-rw-r--r--   1 root root   96 Jun  6  2013 semtex8
-rw-r--r--   1 root root  134 Jun  6  2013 semtex9
-rw-r--r--   1 root root  396 Dec 16  2011 sysstat
-rw-r--r--   1 root root   29 Jun  6  2013 vortex0
-rw-r--r--   1 root root   30 Jul  2  2013 vortex20
-rw-r--r--   1 root root   52 Jul  3 13:41 vulnbot0-check
-rw-r--r--   1 root root   52 Jul  3 13:41 vulnbot1-check
bandit22@melinda:/etc/cron.d$ cat cronjob_bandit23
* * * * * bandit23 /usr/bin/cronjob_bandit23.sh  &> /dev/null

Oh, another cronjob running every minute.

bandit22@melinda:/etc/cron.d$ ll /usr/bin/cronjob_bandit23.sh 
-rwxr-x--- 1 bandit23 bandit22 211 Jun  6  2013 /usr/bin/cronjob_bandit23.sh*

It runs a script for which our group happens to have read and execute permission, too.

bandit22@melinda:/etc/cron.d$ cat /usr/bin/cronjob_bandit23.sh 
#!/bin/bash

myname=$(whoami)
mytarget=$(echo I am user $myname | md5sum | cut -d ' ' -f 1)

echo "Copying passwordfile /etc/bandit_pass/$myname to /tmp/$mytarget"

cat /etc/bandit_pass/$myname > /tmp/$mytarget

And the script again copies something into /tmp, granting read permission to everyone in the process.

This tells us different things.

  • We can run the script ourselves. But this won't help us.
  • The path where the file is stored lies under /tmp and the filename is generated.
  • If we knew the filename, we'd have the pw.

So let's do the line with the filename creation by hand:

bandit22@melinda:/etc/cron.d$ echo I am user bandit23 | md5sum | cut -d' ' -f1
8ca319486bfbbc3663ea0fbe81326349

Which is the filename. Since I am not overly into copy-typing:

bandit22@melinda:/etc/cron.d$ cat /tmp/`echo I am user bandit23 | md5sum | cut -d' ' -f1`
jc1udXuA1tiHqjIsL8yaapX5XIAI6i0n

level 24

level 25

This one is not prepared yet. It's over for now.

\O/
 |
/ \
Running tomcat on port 80
posted on 2014-08-13 12:46:39

To run a tomcat on port 80 (binding to ports below 1024 normally requires root privileges):

vim /etc/default/tomcat7:

AUTHBIND=yes

In case you need tomcat on both port 80 and 443, this does not seem to work.

My approach was to let it run on the base ports (8080 and 8443) and to redirect these via iptables.

vim /etc/init.d/firewall:

# first open all the ports needed
$bin -A INPUT -i eth0 -p tcp -m conntrack --ctstate NEW,ESTABLISHED,RELATED --dport 80 -j ACCEPT
$bin -A INPUT -i eth0 -p tcp -m conntrack --ctstate NEW,ESTABLISHED,RELATED --dport 443 -j ACCEPT
$bin -A INPUT -i eth0 -p tcp -m conntrack --ctstate NEW,ESTABLISHED,RELATED --dport 8080 -j ACCEPT
$bin -A INPUT -i eth0 -p tcp -m conntrack --ctstate NEW,ESTABLISHED,RELATED --dport 8443 -j ACCEPT
# flush nat table... beware if you already use that chain elsewhere!
$bin -F -t nat
$bin -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
$bin -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8443

In case you want to run tomcat and apache in parallel, assign a second IP to your NIC.
Then add the IP of tomcat to the redirect statements above via -d <this.is.the.ip>.
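
For example (a hedged sketch; 192.0.2.10 stands in for whatever IP you assigned to tomcat, and $bin is the iptables variable from the firewall script above):

$bin -t nat -A PREROUTING -i eth0 -d 192.0.2.10 -p tcp --dport 80 -j REDIRECT --to-port 8080
$bin -t nat -A PREROUTING -i eth0 -d 192.0.2.10 -p tcp --dport 443 -j REDIRECT --to-port 8443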

Change tomcat's used java version
posted on 2014-08-12 10:20:19

To change the java version used by tomcat on a debian install, edit /etc/default/tomcat7.

There you have to change the JAVA_HOME setting accordingly.
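
Something along these lines, where the path is just an example and has to point at the JDK you actually have installed:

# in /etc/default/tomcat7
JAVA_HOME=/usr/lib/jvm/java-7-oracle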

Exporting JAVA_HOME by hand is no use, and changing the init script in /etc/init.d/tomcat7 is not just ugly and bad style, but might also be overwritten by future updates.

less syntax highlighting
posted on 2014-08-08 23:44:09

This works on debian:

$ apt-get install source-highlight -y
$ export LESSOPEN="| /usr/share/source-highlight/src-hilite-lesspipe.sh %s"
$ export LESS=' -R '

Put the export lines into your .bashrc to make them stick.

bash heredoc to file
posted on 2014-08-08 09:36:34

To append a heredoc to an existing file:

$ cat << EOHD >> myfile.txt
> line 1
> line 2
> line 3
> line 4
> EOHD

This works on a prompt, and will add this:

line 1
line 2
line 3
line 4

to file 'myfile.txt'.

This was shown on a prompt. To have the same effect from within a script, insert this into your file:

#!/bin/bash
cat << EOHD >> myfile.txt
line 1
line 2
line 3
line 4
EOHD

Make it executable (chmod 755 <name-of-script.sh>), and run it (./<name-of-script.sh>).

Create password and copy it into clipboard
posted on 2014-08-07 09:51:29

To create a password and put it into your clipboard immediately, use this line in your window manager's global shortcuts:

echo `\pwgen -cn 18 -1` | cut -d' ' -f1 | tr -d "\n" | xclip   
List file contents of all files in a folder
posted on 2014-08-06 18:52:29

To show all file contents of all files contained within a folder, you can of course use a loop:

for i in *; do cat $i; done

However, it might be helpful to know from which file which content originated:

for i in *; do echo $i; cat $i; done

This is better, but maybe still a bit crowded on the screen, so how about coloring the file names?

for i in *; do echo $'\e[1;33m'$i$'\e[0m'; cat $i; done

This will show the same output as previously, but the filenames will show up in yellow.

This is due to the ANSI escape codes $'\e[1;33m' and $'\e[0m'. The first one tells the terminal to use a yellow font, the second one resets all extra formatting. Otherwise everything following would show up yellow.

Besides 1;33 for yellow, these exist: (and maybe even more, I do not know)

Black         0;30
Dark Gray     1;30
Red           0;31
Light Red     1;31
Green         0;32
Light Green   1;32
Brown         0;33
Yellow        1;33
Blue          0;34
Light Blue    1;34
Purple        0;35
Light Purple  1;35
Cyan          0;36
Light Cyan    1;36
Light Gray    0;37
White         1;37

\e stands for a literal escape character, and the [ starts the control sequence, IIRC.

More info on the shell control sequences can be found here and here.

Run wireshark as non-root user
posted on 2014-08-05 10:23:00

To run wireshark as a non-root user, do this (as root, or use sudo):

apt-get install wireshark   
dpkg-reconfigure wireshark-common
usermod -a -G wireshark USER
# USER must log out and back in (or get a fresh login session) for the new group to apply; to test:
su - USER
groups  ## should now include group wireshark, so capturing works
wireshark
linux file attributes
posted on 2014-08-03 11:32:52

These should be present for ext2 / ext3 file systems. No idea for ext4 or if these are present across all unices.

a : append only
c : compressed
d : no dump
e : extent format
i : immutable
j : data journalling
s : secure deletion
t : no tail-merging
u : undeletable
A : no atime updates
C : no copy on write
D : synchronous directory updates
S : synchronous updates
T : top of directory hierarchy
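
A minimal sketch of how these attributes are set and inspected, via chattr and lsattr from e2fsprogs (the exact lsattr output varies):

chattr +i /etc/resolv.conf   ## set the immutable flag, file can neither be changed nor deleted
lsattr /etc/resolv.conf      ## list the attributes currently set on the file
chattr -i /etc/resolv.conf   ## remove the flag again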
ssh tricks links
posted on 2014-07-31 11:30:59

Really nice articles and comments:

  1. https://news.ycombinator.com/item?id=1624010
  2. http://www.symkat.com/ssh-tips-and-tricks-you-need
  3. https://news.ycombinator.com/item?id=1536126
  4. https://pthree.org/2011/07/22/openssh-best-practice/
Bash dollar sign shell variables
posted on 2014-07-25 11:51:54

Cheatsheet shamelessly stolen from here.

VARIABLE    MEANING
$0          Filename of script
$1          Positional parameter #1
$2 - $9     Positional parameters #2 - #9
${10}       Positional parameter #10
$#          Number of positional parameters
"$*"        All the positional parameters (as a single word) ***
"$@"        All the positional parameters (as separate strings)
${#*}       Number of positional parameters
${#@}       Number of positional parameters
$?          Return value
$$          Process ID (PID) of script
$-          Flags passed to script (using set)
$_          Last argument of previous command
$!          Process ID (PID) of last job run in background

*** Must be quoted, otherwise it defaults to $@.
useradd cheatsheet
posted on 2014-07-23 11:38:55

This topic is already covered in more depth in an earlier post here, but now I figured a cheatsheet would help.

SYSTEM USER
    system user privileges? (UID below 1000)
    -r

HOME FOLDER
    create home folder in /home/<username>?
    -m
    no home folder creation:
    -M
    add existing folder as home:
    -d <folder>

    add contents to created home?
    -k <'skeleton' folder containing data>

GROUPING
    create new user group?
    -U
    add to existing group?
    -g <id or groupname>
    add several groups?
    -G <groups separated by comma's>
    don't create group? (user will be added to group with id 100 usually, see manpage)
    -N 

SHELL
    shell access? (use appropriate shell, /bin/sh for system users if login is needed)
    -s /bin/bash
    no shell access?
    -s /bin/false
    no shell access, with notification?
    -s /sbin/nologin

COMMENT
    -c 'comment explaining user usage'
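
A hedged example combining several of the flags above (user name and paths are made up):

useradd -r -m -d /var/lib/myservice -U -s /bin/false -c 'system user for the myservice daemon' myservice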
linux: cat to clipboard
posted on 2014-07-21 11:13:46

To put the contents of a file directly into the clipboard, there exist several different ways. One possibility is to mark the text and use CTRL-C, SHIFT-DEL, or whatever is used in your application for copying.

Applications like Klipper, besides providing the functionality of having a memory, also enable the system to copy every selection you make (with your mouse) into the clipboard.

All this is helpful, but once you have content that spans several screen pages, this gets old pretty fast.

Solution on debian: xclip

$ sudo apt-get install xclip

Usage:

$ echo test | xclip     ## clipboard contains now string 'test'
$ cat file.txt | xclip  ## clipboard now contains the content of 'file.txt'
Certificates, OpenSSL in depth and GnuTLS
posted on 2014-07-10 14:37:52

This post should give an overview on the most used OpenSSL commands, and how SSL/TLS/X.509 in general works.

EDIT:
Since this post was written a long time ago, it might get revisited in the future. But this will be a major overhaul, so this will not happen in the near future either.

But there will come some ascii art on a schematic PKI in general, the section about the filenames will get cleaned up as well as the openssl section.

post vocabulary and some notes

The most used terms are abbreviated in the following.

PK = Private Key
C = Certificate
CSR = Certificate Signing Request
CA = Certificate Authority

Usually this seems way harder than it is in reality, once you get the hang of it. Hardest part is to understand which file belonging to which server is needed for the current step.

Certificates...

Some more abbreviations first:

SSL : Secure Sockets Layer
TLS : Transport Layer Security
X.509 : Public Key Infrastructure (PKI) and Privilege Management Infrastructure (PMI) standard by the "International Telecommunication Union Telecommunication Standardization Sector" (ITU-T).

SSL and its successor TLS, which includes SSL, are protocols for encrypting internet communication. The C infrastructure setup is defined in the X.509 standard. That is why these acronyms are popping up in any discussion about this topic.

On a sidenote, a more general equation:

HTTPS = HTTP + SSL/TLS + TCP

Since this post is focused on usability, the techniques in question that are used in a PKI or PMI are of no concern here.

The C chain usually looks like this: (intermediates can, but need not, exist)

  1. Root C
  2. Intermediate C
  3. C

The last C is the one issued by the CA to which you submitted your CSR.

Only if all C's are present and used correctly will SSL checking tools (see here or here) tell you that your C's are set up properly.

File types

There exist a bunch of file types you have to be able to differentiate.

file types

.key : private key file (PK), but that's just a convention
.csr : certificate signing request (CSR)
.crt : certificate (C)
.cer : certificate (C), Microsoft used this naming scheme earlier

For .pem and .der files, see next section.

PK.key, CSR.csr, C.crt are kind of placeholders for your actual filenames in the following sections. A good naming scheme would be subdomain_domain_tld-year, without dots. Dots happen to either not work or cause other problems. Appending the year your C was issued helps with distinguishing in case you renew a certain certificate.

containers and encodings

Containers are used for grouping C's (and optionally the PK) into a single file.

.pem: ascii / base64 encoded container
.der: container in binary format

The extension hints at the encoding being used, for the container. A container usually consists of the set of all C's (the entire trust chain), and can optionally also contain the PK.

All the files from the section before can be in PEM or DER format, IIRC!

For more information on the Distinguished Encoding Rules (DER) or the Privacy-enhanced Electronic Mail (PEM), just click these links.

OPENSSL

PK / CSR generation

For usage with Certificate Authorities (CA's)

Generate a PK and a CSR:

openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout PK.key

If you already have an existing PK and just need a CSR:

openssl req -out CSR.csr -key PK.key -new

Create a new CSR for an existing C:

openssl x509 -x509toreq -in C.crt -out CSR.csr -signkey PK.key

Complete self-signed certificate

Generation of a self-signed (ss) C, based on a newly generated PK with a term of validity of one year (365 days):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout PK.key -out C.crt

ss-C's for https are still better than traffic over plain http, but for private websites for example, StartSSL Certificates provide C's for free. Free as in 'no money needed'.

convert PEM to DER

openssl x509 -in C.crt -outform der -out C.der

convert DER to PEM

openssl x509 -in C.crt -inform der -outform pem -out C.pem

viewing PEM encoded files containing a C

For debugging reasons, this might actually be the most used command.

openssl x509 -in C.pem -text -noout
openssl x509 -in C.crt -text -noout
openssl x509 -in C.cer -text -noout

This will not work on a single PK file.
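
For a single PK file, the rsa subcommand does the equivalent job (a hedged aside, assuming an RSA key):

openssl rsa -in PK.key -text -noout   ## dump the key components
openssl rsa -in PK.key -check -noout  ## verify the key's consistency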

GNUTLS

Get it:

apt-get install gnutls-bin -y

Use:

certtool

In contrast to the openssl tool suite, this one is actually self-explanatory.

Examples

In the following, keyfiles get a .key extension, but that is just a naming convention. In reality they are just .pem files, too; the separate extension simply makes them easier to tell apart.

generate PK's (private keys)

certtool --generate-privkey --outfile PK.key --rsa

Use --dsa or --ecc flags if you want to change the used cryptosystem.

generate CSR's (certificate signing requests)

certtool --generate-request --load-privkey PK.key --outfile CSR.pem

generate C (certificate) from CSR (certificate signing request)

Usually this is a CA_C.pem, a CA certificate.

certtool --generate-certificate --load-ca-privkey CA_PK.key --load-ca-certificate CA_C.pem --load-request CSR.pem --outfile C.pem

generate C (certificate) from PK (private key), lacking a CSR

certtool --generate-certificate --load-ca-privkey CA_PK.key --load-ca-certificate CA_C.pem --load-privkey PK.key --outfile C.pem

generate a self-signed C (certificate), the fast way

certtool --generate-privkey --outfile CA_PK.key --rsa
certtool --generate-self-signed --load-privkey CA_PK.key --outfile CA_C.pem

Here's a one-liner to copy-paste:

certtool --generate-privkey --outfile CA_PK.key --rsa && certtool --generate-self-signed --load-privkey CA_PK.key --outfile CA_C.pem

create a .p12 / pkcs #12 container file

A .p12 file includes all three parts usually needed on the server side:

  • CA certificate

  • server PK

  • server C

    certtool --to-p12 --load-ca-certificate CA_C.pem --load-privkey PK.key --load-certificate C.pem --outfile CONTAINER.p12 --outder

show certificate information

certtool --certificate-info --infile C.pem
Downgrading packages in Debian / Ubuntu
posted on 2014-07-09 16:50:55

Downgrading packages can become important once an apt-get update && apt-get upgrade breaks something.

Most of the following must be done as root, since you need root privileges for package operations anyway.

Logs on what got upgraded

To find out the packages in question, look here:

/var/log/apt/term.log
/var/log/dpkg.log
/var/log/apt/history.log

Beware of the logrotating, in case you do not find anything. Use zgrep, zless, zcat on these .bz2 files.

A handy script to gather all information into one single place:

cd /var/log/apt && cat history.log > ~/apthistory.log && zcat history.log*gz >> ~/apthistory.log

That way all your logs get aggregated into apthistory.log in your home folder.

If you are using synaptic, you can check the logs through it, too.

Find out which version to use

apt-cache showpkg <package>

Package is the name you usually use when using apt-get.

In the last section, you may find info on the version to use. It doesn't help that the manpage tells you to look at apt's source code for more information. (Yes, it does that for real...)

If you can't help but feeling lost, that's normal.

The other approaches are skimming through the aforementioned logs with grep and more grep. Or using google, which is harder than one might imagine.

Actual downgrading

apt-get install <packagename>=<version number>

Beware you might need the :amd64 or :i386 extension for the package name, too.
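
A made-up example, just to show the syntax (package name and version are placeholders):

apt-get install somepackage=1.2.3-4
apt-get install somepackage:amd64=1.2.3-4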

Preventing future upgrades from killing your changes

echo '<packagename> hold' | dpkg --set-selections

To undo this:

echo '<packagename> install' | dpkg --set-selections

To show what currently gets upgraded or not:

dpkg --get-selections | grep 'install'
dpkg --get-selections | grep 'hold'

Keep in mind that things might break in a later update & upgrade, if you do not fix the cause of your problem. The package you downgraded might be needed in a newer version by a lot of other packages, it's just a question of time.

Certificate content viewed with OpenSSL
posted on 2014-07-07 11:27:04

To show the contents of a ssl/tls certificate with openssl, do:

openssl x509 -in C.crt -text -noout

where C.crt is the name of your actual certificate. Should work for .pem and .cer files as well, but not for a plain private key file.

bash emacs shell shortcuts
posted on 2014-07-04 13:19:19

Linux usually comes with the bash shell. By default bash offers the possibility to enable vi or emacs shortcuts. (Google set -o emacs vs. set -o vi for more info.)

Since vi mode is a bit strange to use (No possibility to see which mode you are in, maybe with zsh this could be changed?) I stick to emacs bindings.

Most useful are:

# CHOOSING
CTRL - P    previous command (previous line)
CTRL - N    next command (next line)

# SEARCHING
CTRL - R    incremental search in command history

# MOVING
CTRL - A    beginning of line
CTRL - E    end of line
ALT - B     backward one word
ALT - F     forward one word
CTRL - B    backward one character
CTRL - F    forward one character
ALT - A     beginning of sentence
ALT - E     end of sentence

# DELETING / CUTTING / "KILLING"
CTRL - D    next char
CTRL - H    previous char
ALT - D     next word
ALT - BSPC previous word
CTRL - W    previous word (not preferred, as it won't work in emacs with evil-mode enabled :o))
CTRL - K    from cursor to end of line
CTRL - U    from cursor to beginning of line

# INSERTING / "YANKING"
CTRL - Y    put paste buffer contents back at cursor location

# UPPERCASE
ALT - U     uppercase next word
ALT - L     lowercase next word

# TRANSPOSE
CTRL - T    last two characters
ALT - T     last two words

# AUTOCOMPLETION
ALT - *     insert all possible completions
ALT - ?     show all possible completions
VirtualBox shared folders
posted on 2014-07-01 23:18:08

When using VirtualBox for toying around with VM's, you usually need a possibility to exchange files. This can happen through version control or shared folders; Dropbox and the usual filehosters would be another option.

Here the shared folder approach will be described, in Ubuntu 14.04:

  1. Create a folder to be used on your host OS.
  2. Include the folder you just created in VirtualBox. Choose the VM, Settings >> Shared Folders and add the new one. Transient is just temporary, you will very likely not want that; choose a persistent approach, i.e. auto-mount and permanent. Also select the folder you just created on your host OS. The 'label' (what you choose as 'Folder Name') is what you will need as the mount source at the end.
  3. Check your vbox version, Help >> About VirtualBox... will show it.
  4. Get the 'Guest Additions' ISO from here, the one fitting your vbox version.
  5. As root you have to prepare some things. In particular: apt-get update, apt-get upgrade, apt-get install dkms -y. dkms will keep the kernel module changes persistent across kernel updates. The 'upgrade' part can maybe be skipped, IIRC.
  6. Mount the GuestAdditions ISO: Choose VM, Settings >> Storage >> choose IDE controller >> click on the CD icon on the right (not the one in the tree overview!).
  7. Either your VM will tell you to auto-run the ISO (easy).

Or you will have to use the supplied installer script (a little harder):

$ mkdir /mnt/iso
$ sudo mount /dev/cdrom /mnt/iso

Open the mounted ISO (cd to there), and do as root:

$ sh ./VBoxLinuxAdditions.run

If the mount fails due to 'bad superblock' or such, do cd /media && cd /cdrom && eject and reload the iso in the VirtualBox Manager. If the 'bad superblock' error still persists, this may be related to 14.04 or just me being dumb. Once all this is done, the folder chosen in VirtualBox needs to be mounted in the guest OS. sudo mkdir /mnt/share and sudo mount -t vboxsf <label-you-chose-in-step-2.> /mnt/share
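
If the share should come up automatically at every boot, an /etc/fstab entry along these lines should work as far as I know (label and mountpoint as chosen above):

<label-you-chose-in-step-2.>   /mnt/share   vboxsf   defaults   0   0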

So basically, there are several steps:

  • You need to have the Guest Additions installed in your guest OS.
  • In VirtualBox there has to be a linked folder to your host OS.
  • The folder linked in VirtualBox has to be mounted by its label into your guest OS.

Should be easy, but every time I have to do it, it takes me ages. Or at least it feels that way.

linux shell calculator
posted on 2014-06-30 13:25:39

Often a calculator is needed fast, and you are in the terminal anyway, so what to do?

Try this function:

calc () { 
    echo "scale=4;$*" | bc -l
}

Only downside is, you have to escape *, else the shell will use it for file expansion. No quotation marks needed.

Usage:

calc 1 + 2
calc 3 - 4
calc 44 \* 88
calc 77 / 234

This should do for most cases where you need a calculator fast.

A better 'find'
posted on 2014-06-30 12:52:15

Linux's find syntax is kind of strange and rather unfriendly to type.

A helpful 'alias' (which is actually a function, not an alias) is this:

ff () {
        find . -iname "*$**"
}

Usage:

ff <searchterm>

No quotation marks needed, case insensitive.

Apache redirect http traffic to https
posted on 2014-06-30 10:30:09

There are two approaches to this. Either use a redirect or a rewrite rule.

Redirect, the officially recommended method

NameVirtualHost *:80
<VirtualHost *:80>
   ServerName mysite.example.com
   DocumentRoot /usr/local/apache2/htdocs 
   Redirect permanent / https://mysite.example.com/
</VirtualHost>

<VirtualHost _default_:443>
   ServerName mysite.example.com
  DocumentRoot /usr/local/apache2/htdocs
  SSLEngine On
 # etc...
</VirtualHost>

It's a little faster than a rewrite, plus it is officially recommended. However, behind an SSL offloader it's said not to work, as I read on stackoverflow. My guess would be that this case could be fixed through the use of HTTP headers, but I currently have no setup where I can verify that without breaking things and wreaking havoc.

Rewrite, for completeness' sake (and in case #1 did not work)

Use this in your vhost configuration:

RewriteEngine on
RewriteCond %{HTTPS} !=on
RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1  [R=301,L,QSA]

To redirect just specific http traffic concerning a specific folder, use a different rewrite rule:

RewriteRule ^/?name-of-your-folder/(.*) https://%{SERVER_NAME}/name-of-your-folder/$1 [R=301,L,QSA]
emacs shortcuts in xterm
posted on 2014-06-26 19:21:35

To have properly working keycodes (the alt key working as alt and not as meta, which is xterm's default), do this:

$ cd ~
$ vi .Xresources

and add the following line:

xterm*metaSendsEscape: true

Hit escape key several times, in case you do not know vi/vim. :wq, Enter, done.
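
To load the change into the already running X session instead of logging out and back in, merging the file should do, IIRC:

xrdb -merge ~/.Xresources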

Apache webserver redirect to subfolder
posted on 2014-06-24 11:21:17

Put into an .htaccess which resides in your web root folder:

RewriteEngine On
RewriteCond %{REQUEST_URI} ^/$
RewriteRule (.*) /name-of-subfolder-to-redirect-to/ [L,R=301]

When omitting the R=301 flag, the URL will not be rewritten in the browser's address bar.

ip commands in linux
posted on 2014-06-23 17:28:11

The tools currently in common use and which ones will succeed them:

Purpose                        | Legacy net-tools | iproute2
-------------------------------+------------------+-----------------
Address and link configuration | ifconfig         | ip addr, ip link
Routing tables                 | route            | ip route
Neighbors                      | arp              | ip neigh
VLAN                           | vconfig          | ip link
Tunnels                        | iptunnel         | ip tunnel
Multicast                      | ipmaddr          | ip maddr
Statistics                     | netstat          | ss
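
A few typical invocations of the newer tools, for reference:

ip addr show    ## addresses on all interfaces
ip link show    ## link state of all interfaces
ip route show   ## routing table
ip neigh show   ## ARP / neighbor cache
ss -tlnp        ## listening TCP sockets including the owning processes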
A good looking xterminal
posted on 2014-06-19 18:55:25

xterm in its basic form is black on white, has a less comfy font, and no blinking cursor.

To fix these complaints as well as have some other goodies, try:

uxterm -fg white -bg black -bc -j -vb -maximized -fn 9x15

-j is for jump scrolling (so not every single line needs an update). -bc gives a blinking cursor. \ o / -vb is visual bell, so you don't get to hear sounds when reaching the end of a file in less.

xterm is a nice terminal choice, once you realize it is pure speed. uxterm is also provided by the xterm package, but it is also utf8 capable. To enable utf8 in xterm, use the -u8 flag.

Depending on your distribution you use, flags may be missing. On Debian I do miss the -maximized flag, for example.

Scripting in Common Lisp
posted on 2014-06-15 16:41:05

To use Common Lisp for scripts, there exist a few approaches, depending on the Common Lisp implementation you use. The approaches for SBCL and CCL will be shown here, as these seem to be the most widely used implementations.

  1. Either create an executable file just like you would for a bash script, and set to the shebang accordingly.
  2. Or create a sh script with an exec line, which in turn will call your lisp file, as a wrapper.
  3. You could as well just create a compiled executable.

on approach 1: SBCL / Steel Bank Common Lisp

The first one will work with SBCL / Steel Bank Common Lisp and some others, but not all implementations:

#!/usr/local/bin/sbcl --script

It's just the path to the executable binary, followed by the --script parameter. This line is put at the top of the executable file containing your common lisp code.

on approach 2: CCL / Clozure Common Lisp

touch run-lisp
chmod a+x run-lisp
vim run-lisp

Wrapper file content:

#!/bin/sh
ccl64 --no-init --terminal-encoding utf-8 --load $1.lisp --eval '(ccl:quit)'

Save, quit.

Put the wrapper somewhere where it will be referenced via the $PATH variable, so you can call it from everywhere.

The wrapper will then be called for running the lisp file in question via run-lisp </path/to/script>/<scriptname>.<ext>.

I.e. run-lisp ./helloworld.lisp.

on approach 3: create a self-contained executable in SBCL

;; Make an executable Lisp image.  Execute with ./lisp-image
(save-lisp-and-die #p"lisp-image" :executable t)

More on this can be found here.

Linux battery status from shell
posted on 2014-06-08 16:47:39

To get the percentage of remaining battery under Fedora 19, use:

$ cat /sys/class/power_supply/BAT0/capacity

This will however only show the accurate percentage when no power cord is attached.

If you want the accurate percentage, calculate it yourself, other files in the BAT0 folder will tell you the values to use.
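
A hedged sketch of such a calculation; depending on kernel and hardware the files are called charge_full/charge_now or energy_full/energy_now:

cd /sys/class/power_supply/BAT0
awk 'NR==1{full=$1} NR==2{printf "%.1f%%\n", $1*100/full}' charge_full charge_now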

grep 'or' syntax
posted on 2014-06-03 14:19:34

To search for several matches at once, try one of the following:

$ grep -e '<first_search_term>' -e '<second_search_term>' <filename>

or

$ egrep '<first_search_term>|<second_search_term>' <filename>

Lastly, there are the several regexp specifiers.

$ grep -E '<first_search_term>|<second_search_term>' <filename>

-E flags the search terms to be interpreted as POSIX extended regular expressions. There's also -P for perl regexps, -F for fixed strings and -G for basic regular expressions. -G is the default, so if omitted, you can just use:

$ grep '<first_search_term>\|<second_search_term>' <filename>

Leave out the spaces around the | and \|, since they would become part of the pattern.

Choose wisely! ;)

Compiling git from source on debian
posted on 2014-06-02 19:58:58

Compiling git from source may not be what first comes to mind when using git. But when working with large repositories you should consider this, especially with the profiling option. The speed improvement I got was pretty impressive. (Ok, this may also be related to the upgrade from 1.7.x to 2.0.0.)

Anyway, do this for a full, profiled (= speed optimized) build: ('$' here indicates running the command as a regular user, '#' indicates root.)

$ git clone --progress -v https://github.com/git/git
$ cd git
$ make configure
$ ./configure --prefix=/usr
$ make prefix=/usr PROFILE=BUILD all
# make prefix=/usr PROFILE=BUILD install

Afterwards check the git --version output to verify you are running the correct version, and be amazed by the new speediness.

Debian java update-alternatives
posted on 2014-05-21 06:23:43

If you need javaws with oracle java (not that IcedTea crap), and have it installed already, but lost your settings due to an update, do:

$ update-alternatives --config javaws

This will show you something like this:

There are 6 choices for the alternative javaws (providing /usr/bin/javaws).

  Selection    Path                                              Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/javaws   1071      auto mode
  1            /usr/lib/jvm/j2re1.7-oracle/bin/javaws             316       manual mode
  2            /usr/lib/jvm/j2sdk1.7-oracle/jre/bin/javaws        317       manual mode
  3            /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/javaws   1061      manual mode
  4            /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/javaws   1071      manual mode
  5            /usr/lib/jvm/java-7-oracle/bin/javaws              9         manual mode
  6            /usr/lib/jvm/java-7-oracle/jre/bin/javaws          1064      manual mode

Press enter to keep the current choice[*], or type selection number: 

Choose the according number and be happy.

nagstamon setup
posted on 2014-05-19 22:29:27

According to apt-get: nagstamon - Nagios status monitor which takes place in systray or on desktop

This is a tool to provide a usable interface to nagios messages, and after some setup you might quite like it.

Install (Debian):

$ apt-get install nagstamon -y

After installing, it might be a good idea to put it into autostart.

Then some settings will be quite practical:

  • connect to nagios (else no messages)
  • apply a filterlist (so you just see what is important to you)
  • expand/collapse not set to hover (quite buggy in KDE)
  • ssh access (fix console setting if you are not using gnome)
  • adjust downtime default time span to your liking
Runlevel configuration in Debian
posted on 2014-05-15 14:43:12

Use sysv-rc-conf for this. It will show a curses GUI, where you can edit the present settings.

Get it from apt:

$ apt-get install sysv-rc-conf
Create mails in bash
posted on 2014-05-09 20:17:25

Write this directly on your command prompt:

/usr/bin/mail -s "testmail" root 'mailaddress@domain.tld' -a "From: mail_daemon" <<< "ti ta testmail"

Which will create this:

To: <root@hostname>, <mailaddress@domain.tld>
Subject: testmail
From: <mail_daemon@hostname>

ti ta testmail

This is useful when you already have a postfix (or whatever maildaemon) running, and you need email notification in your scripts.

dd Howto and some MBR tricks
posted on 2014-05-09 19:00:18

dd is used to "convert and copy files" under linux.

Basic Stuff

Read: Use it for disk images. I.e. put an .ISO on an USB stick. Or copy whole HDD's. Or create ISO's.

# use 'mount' to find out where the stick is mounted

# copy iso onto usb stick
dd if=<isoname>.iso of=/dev/sdX

# create an iso
dd if=/dev/cdrom of=/home/<username>/Desktop/<isoname>.iso

# wipe clean
dd if=/dev/zero of=/dev/sdX

# make a file of 100 random bytes
dd if=/dev/urandom of=/home/<username>/my.random bs=1 count=100

That was it with the usual suspects. Now onto more serious stuff.

Serious Stuff

# PRO: create image from one host and stream to the other
#on target host
netcat -l -p 1234 | dd of=/dev/hdc bs=16065b
#on source host (where the ISO will be created and streamed)
dd if=/dev/hda bs=16065b | netcat <targethost-IP> 1234

#or use this:
# from remote to local
rsh 192.168.xx.yy "dd if=/dev/sda ibs=4096 conv=notrunc,noerror" | dd of=/dev/sda obs=4096
# or from local to remote
dd if=/dev/sda ibs=4096 conv=notrunc,noerror | (rsh 192.168.xx.yy dd of=/dev/sda obs=4096)

In the above you may use ssh, or rsh. Do as you please. Of course you may use different IP's.

And now...

Very Serious Stuff

# PRO: show your MBR
dd if=/dev/sda count=1 | hexdump -C

# PRO: back up MBR
dd if=/dev/sda of=mbr.bin count=1

    #put this on a floppy you make with
    dd if=boot.img of=/dev/fd0
    #along with dd. Boot from the floppy and
    dd if=mbr.bin of=/dev/sda count=1
    #will restore the MBR.


# PRO: command to read your BIOS and all interfaces
dd if=/dev/mem bs=1k skip=768 count=256 2>/dev/null | strings -n 8

on decrypting an MBR

Use file. Just do it:

dd if=/dev/sda of=mbr.bin
file mbr.bin

will give you something like this:

mbr.bin: x86 boot sector; partition 1: ID=0xee, starthead 0, startsector 1, 234441647 sectors, extended partition table (last)\011, code offset 0x0

which is easily read and understandable.

rsync howto
posted on 2014-05-07 14:50:43

Usually when using rsync you want to use it like this:

rsync -avzh --progress server1:/path/to/file server2:/path

This does not preserve hard links, use the -H option, if you need this, too.
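
So with hard links preserved, the call becomes:

rsync -avzhH --progress server1:/path/to/file server2:/path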

man [
posted on 2014-05-04 23:30:22

Whenever writing bash if-clauses, this will be your new best friend:

$ man [
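
A minimal sketch of what that manpage documents, used in an if clause:

if [ -f /etc/passwd ] && [ "$USER" != "root" ]; then
    echo "regular user on a system that has an /etc/passwd"
fi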
bash heredoc
posted on 2014-05-04 22:45:26

Bash's heredoc functionality provides functionality to work with stream literals.

Usages:

# 1. basic case with parameter expansion
command << DELIMITER_YOU_CHOOSE
this here
is input
which is
line for line
piped into command
DELIMITER_YOU_CHOOSE

# 2. no parameter expansion due to the " "
command << "DELIMITER_YOU_CHOOSE"
    echo $SHELL
DELIMITER_YOU_CHOOSE

# 3. same as above, but will strip leading TAB's
# Use TAB's. DON'T use spaces!
command <<- "DELIMITER_YOU_CHOOSE"
    echo $SHELL
DELIMITER_YOU_CHOOSE

# 4. parameter expansion, strip leading TAB's
command <<- DELIMITER_YOU_CHOOSE
    echo $SHELL
DELIMITER_YOU_CHOOSE

# 5.  command <<< "herestring", shorthand for a single line of input
cat <<< $SHELL

Results:

# 1.
this here
is input
which is
line for line
piped into command

# 2.
    echo $SHELL

# 3.
echo $SHELL

# 4.
/bin/bash

# 5.
/bin/bash

In general, the usage depends on the type of the input source: < is for files, << is for typed stream literals (heredocs), and <<< is for herestrings (single strings, expanded by the shell).

This is a possibility to pass several lines of arguments to command expecting input in several stages. I.e. for scripted certificate generation this comes in handy.

bash redirection
posted on 2014-05-04 21:18:29

The bash shell provides three different file handles by default. These are called file descriptors in bash.

  • /dev/stdin = 0 = read input from shell prompt
  • /dev/stdout = 1 = stuff printed to shell
  • /dev/stderr = 2 = error messages channel

& stands for 1 and 2 combined (as used in &> below).

For I/O redirection all these can be accessed through piping:

command  <  file        # redirect file to STDIN
command <<  heredoc     # redirect heredoc to command STDIN
command <<< herestring  # redirect herestring to command STDIN
         >  file        # BEWARE: truncates/deletes file contents!
command 1>  file        # redirect command STDOUT to file
command 2>  file        # redirect command STDERR to file
command &>  file        # redirect command STDOUT and STDERR to file
command  >| file        # redirect command STDOUT forcefully
command  >> file        # like '>', but will append instead of overwrite in file

Forceful redirection will write, even when bash has noclobber set. 'noclobber' prevents files from getting overwritten when redirection operators are used.
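
A short sketch of that interaction:

set -o noclobber
echo one > file.txt     ## ok, as long as file.txt did not exist yet
echo two > file.txt     ## bash refuses: cannot overwrite existing file
echo two >| file.txt    ## forceful redirection works despite noclobber
set +o noclobber        ## turn it off again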

heredoc's and herestrings's will go into a separate post, soon.

Sidenotes:

# redirection between handles, these are equivalent:
command &>  file        # redirect STDOUT and STDERR
command >   file 2>&1   # redirect STDOUT as well as STDERR to STDOUT

In more detail:

# redirects "i" to "j", if "i" is omitted, defaults to "1"
        i>&j

i,j range from 1 to 9, 3-9 are free to use. Beware, "5" is inherited by child processes and exec usage.

Plus:

# close file output descriptor "i"
        i>&-
# close file input descriptor "j"
        j<&-

and

# these are equivalent

command < input-file > output-file

< input-file command > output-file   # Although this is non-standard.

Files can also be opened for reading AND writing simultaneously via <>. Any free file descriptor ID can be used with it.

exec can be used to apply a redirection to the complete current shell. See here.
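
A minimal sketch of redirecting the whole current shell (or script) via exec, using a spare descriptor to restore STDOUT afterwards:

exec 3>&1                ## duplicate STDOUT to file descriptor 3
exec 1> /tmp/shell.log   ## from now on all STDOUT goes into the logfile
echo "this ends up in /tmp/shell.log"
exec 1>&3                ## restore STDOUT
exec 3>&-                ## close the helper descriptor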

Also, you can do much more with duplicating, closing or moving file descriptors. But that is stuff for another post, when I need it. In the meantime, this bash reverse proxy is really 'wickedly cool'.

bash multitasking
posted on 2014-05-04 17:50:46

When working in the shell under linux, having started a long-running process which blocks the shell (and you working in it) gives you some options:

  • wait until process is finished, you land at the prompt, you can work on (NO!)
  • open a new shell window, which is no problem when working under a graphical window manager (MAYBE...)
  • use bash's multitasking capabilities (YES!)

How?

First the shortcuts:

Ctrl-C kills the process currently in the foreground.
Ctrl-Z "suspends"/pauses the process currently running in the foreground and puts it into background.
Ctrl-Y suspend job the next time it asks for user input.

The last two differ in such a way, as suspending via Ctrl-Z may swallow pending shell output, whereas Ctrl-Y will not.

Now the commands:

$ jobs              # list all background processes
$ fg                # start marked process in foreground (see '+' on `jobs` list)
$ bg                # start marked process in background (see '+' on `jobs` list)
$ fg %x             # start process with id 'x' in foreground
$ %n                # alias for fg %n
$ bg %x             # start process with id 'x' in background
$ %n &              # alias for bg %n
$ kill %x           # kill process with id 'x'
$ <processname> &   # '&' will start a process and let it run in background

Example usage: (demonstrated via sleep, which just waits for the specified time in seconds)

[sjas@lorelei ~]% sleep 100
    CTRL-Z
[1]  + 7848 suspended  sleep 100

[sjas@lorelei ~]% sleep 200
    CTRL-Z
[2]  + 10920 suspended  sleep 200

[sjas@lorelei ~]% sleep 300
    CTRL-Z
[3]  + 3676 suspended  sleep 300

[sjas@lorelei ~]% sleep 400
    CTRL-Z
[4]  + 10012 suspended  sleep 400

[sjas@lorelei ~]% jobs
[1]    suspended  sleep 100
[2]    suspended  sleep 200
[3]  - suspended  sleep 300
[4]  + suspended  sleep 400

[sjas@lorelei ~]% bg
[4]    10012 continued  sleep 400

[sjas@lorelei ~]% jobs
[1]    suspended  sleep 100
[2]  - suspended  sleep 200
[3]  + suspended  sleep 300
[4]    running    sleep 400

[sjas@lorelei ~]% kill %1
[1]    7848 terminated  sleep 100

[sjas@lorelei ~]% jobs
[2]  - suspended  sleep 200
[3]  + suspended  sleep 300
[4]    running    sleep 400

[sjas@lorelei ~]% fg
[3]    3676 continued  sleep 300
    CTRL-Z

[sjas@lorelei ~]% jobs
[2]  - suspended  sleep 200
[3]  + suspended  sleep 300
[4]    running    sleep 400

[sjas@lorelei ~]%

There is also disown for removing jobs from the job list, among other actions. If interested, man bash and search for disown.
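
A tiny example, for the record:

sleep 500 &
disown       ## the job disappears from the 'jobs' list and won't get a SIGHUP on shell exit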

Have fun.

Linux file system structure, boot process, partitioning
posted on 2014-05-03 08:25:21

The linux directory layout comes from the Filesystem Hierarchy Standard. It is used in linux and the unix derivatives. The draft of version 3.0 can be found here. Also helpful might be man hier, the description of the linux file system hierarchy.

In short, there follows an explanation of its depicted structure, plus the folders usually present in a regular linux distro. The list is also annotated with information on how these are used during OS startup, why they are separated the way they are, and partitioning considerations. First, some background on the boot process in IBM-compatible (server) PC's.

Booting procedure, first stage

Power on

When you boot a computer by pressing the power button/IPMI/ILO/whatever, the power supply powers up all the hardware. The BIOS/EFI/UEFI on the motherboard kicks in and searches all known (hardware) drives, in the boot order set in the BIOS, for a boot sector. (Sector means a sector on a harddrive, for example.) When an MBR/VBR is found there, the BIOS loads the data stored there into RAM at a fixed position. The software responsible for this is a first-stage bootloader and comes with the BIOS. This is the first stage of the boot loading.

The boot sector

The boot sector is always located in the first sector on the drive. In case an MBR is in use, it's on the first sector, at the first head on the first cylinder of the drive.

The boot sector may contain an MBR (Master Boot Record) or a VBR (Volume Boot Record). The difference depends on whether the boot sector is located outside of the partition containing the other data (like the OS). The boot record has the same structure regardless of its type.

The boot record

On the data structure of a boot record:
It is identified by the boot signature, the hex code 0x55 0xAA on disk, or AA55h when read as a 16-bit word in memory. See here. The hex sequences may differ for BIOSes on non-x86 CPUs, just for the record. The boot record contains the partition table, providing basic information on the division into primary partitions, plus the other code needed for bootstrapping. The byte order on disk is reversed compared to the one in memory, usually due to little endianness. No idea how this is on big-endian based systems.

Usually this creates the following layout:
Size-wise it is 446 bytes of bootstrap code, 64 bytes for the partition table, and 2 bytes for the signature (= 0x55 0xAA), summing up to 512 bytes when the disk's sector size is 512 bytes. Of course, this can differ, too, A LOT, depending on the OS in use, sector size, ... See here.

The partition table

Because of the 64 byte maximum size of the partition table, a maximum limit of 2 TiB addressable space exists for a disk with 512 bytes per sector: 2^32 * 512 Bytes. This led to the development of GPT (GUID Partition Table, GUID = globally unique identifiers), to address bigger storage devices. But this is just for the record. More info here.

The boot record will tell the system which single partition is currently marked as active, indicating where the operating system or bootloader can be found.

Booting procedure, second stage

In the case of Linux, the boot record tells the BIOS on which partition the sector containing /boot is located.

/boot

/boot can be placed on the same partition as the other data of the operating system or user data. There exist several reasons why you might put /boot on a separate partition from / and others like /home etc.:

  • for full disk encryption like TrueCrypt: if /boot is encrypted, how to decrypt the whole drive?
  • your root file system is administrated via LVM, RAID
  • BIOS limitations (Maybe your BIOS can only read like the first 1022 cylinders of your disk? Don't laugh...)
  • Or the operating system uses a file system which cannot be read by the BIOS.

Some notes on partitioning under linux are here.

Boot manager/loader

Only after things went that way, the boot manager, if one is installed, comes into play. Examples of boot managers are grub/lilo/syslinux on linux, NTLDR/BOOTMGR/winload.exe on Windows and whatever MacOS uses. With the boot loader having been found and loaded into RAM, the second stage of boot loading starts.

On a sidenote, bootloaders can make booting into other partitions besides the active one possible. Booting from logical volumes is also possible through them.

An entry in a boot manager must point to a kernel file, or to another boot sector on another partition or disk. When another boot sector is chosen, the one currently loaded into memory is replaced with the one being pointed to. The other boot sector must contain another bootloader or a kernel, else your system won't be able to boot an operating system. ;) This procedure is what is called chain-loading or bootstrapping, and it also took place from the first to the second boot stage.

Now after a long prelude on how bootstrapping works, why placing /boot on a separate partition is useful and how it and its contents will be accessed during boot time... Finally we are near Linux' directory structure.

General partitioning

In the following I tried to logically group folders which are to be placed together onto one partition. In theory the OS can mount all folders here from separate partitions. This schema depicts a useful partitioning layout, but may be overkill. For a usual desktop system, use /boot, /, /home.

swap

During partitioning for linux a swap partition also needs to be created. This is where linux puts virtual memory, but it won't show up in the directory tree below. In the past, double your RAM size was a good value. However, that was once upon a time, when PC's had something like 512 MB RAM. Nowadays half the RAM size should do fine.

The swap space could as well be located on the same partition as the root file system. But if the swap space is corrupted, the system may be rendered unusable, and this is more likely when / and swap lie on the same partition.

Linux directory layout

BOOT STUFF, PARTITION 1
=============================================================================================================
/boot           static boot files: bootloader, kernel, ramdisk image ("early userspace")
=============================================================================================================


ROOT FILESYSTEM ("primary hierarchy") with executables / configs / system folders, PARTITION 2
=============================================================================================================
/               root node of the file system, mounted after boot

/lib            kernel modules and dynamic libraries
/lib*           i.e. /lib64 for 64-bit-specific libraries

/bin            binaries, available in single user mode and to all users. 
                used to bring up the system or repair it
                i.e. cat, ls

/sbin           system binaries for system maintenance/adminning
                same as /bin, but for files having to be used with root privileges
                for boot, restore, recover tasks, just of no concern for regular users
                i.e. sysctl, mkfs, ifup/ifdown

/dev            device files referencing physical interfaces for accessing hardware
/proc           virtual pseudo-file system, for process information
                used as an interface to the kernel's data structures.
/run            data relevant to running processes; files placed earlier in
                /etc, /var/run, /var/lock, /dev/hcmem etc.
                /var/lock and /var/run used to be segregated temporary file systems,
                sometimes even mounted prior to /var.
                now with /run only a single tmpfs (temporary in-memory file system) needs to be set up.
                (a RAM-disk is a virtual disk hosting a file system, and thus not the same thing.)

/etc            host-specific configuration files
/tmp            temporary files, may be deleted at any time (in theory, at least)
/var            variable data
    /var/log        all logfiles you may need sooner than you might like
    /var/tmp        see /tmp

/mnt            mount temporary external devices here
/media          same as /mnt, for removable devices; will show up on the desktop when used for mounts

/lost+found     damaged files recovered via fsck

/root           home folder of the admin, the root user
-------------------------------------------------------------------------------------------------------------
/usr            could also be here, if not mounted separately.
/opt            could also be here, if not mounted separately.
/home           could also be here, if not mounted separately.
/srv            could also be here, if not mounted separately.
=============================================================================================================


USER SYSTEM RESOURCES ("secondary hierarchy"), SHARED APPLICATIONS, PARTITION 3
=============================================================================================================
/usr            secondary hierarchy, 'user system resources'
                also contains i.e. /usr/etc and other folders present in '/'.
                static files that may reside on a separate partition.
                i.e. for being shared for use among several linux systems.
                all the package-manager-installed files go here.
    /usr/bin        binaries of user programs and user-installed applications
    /usr/sbin       same as /sbin, system binaries to be used with root privileges
                    e.g. the package manager executables/binaries live here
    /usr/lib/       libraries for user programs
    /usr/tmp/       obsolete, use /var/tmp instead
    /usr/include/   include files for C sources
=============================================================================================================

USER SYSTEM RESOURCES ("tertiary hierarchy"), LOCAL APPLICATIONS, PARTITION 4
=============================================================================================================
/usr/local          tertiary hierarchy for local data, specific to this host.
                    also has /usr/local/bin, /usr/local/sbin, /usr/local/etc and so forth
                    usually third party programs concerning the OS go here.
                    be it self-compiled or packaged software, all not maintained by your distribution
                    this is useful because these folders are not touched during OS updates/upgrades
                    everything else you install goes into /opt

                    no idea if it is of use to mount this one on a separate partition?
=============================================================================================================

USER INSTALLED SOFTWARE, PARTITION 5
=============================================================================================================
/opt            additional installed software, addon packages containing static files
                no OS specific stuff, just application software?
                /etc/opt, /var/opt are companion folders

                /opt vs. /usr/local/ is a decision like vim vs. emacs...

                if /etc/opt is used for configs, it might make sense putting this on an extra partition, too
=============================================================================================================

USER DATA, PARTITION 6
=============================================================================================================
/home           root point for user's home folders
=============================================================================================================


SERVER DATA, PARTITION 7
=============================================================================================================
/srv            data from daemons/services of this system.
                site-specific data served by this system.
=============================================================================================================

Reasons for the separation of files by design

  • static or variable
  • shareable or unshareable

Shareable just means "can be stored on one host and used on another". Unshareable data cannot be.

Static data usually does not change much, as opposed to variable data, so it can be stored on read-only media, for example.

Also there is no need to back it up as often as variable data. (Which explains why this distinction actually makes sense, even though it adds additional complexity.)

Some examples for unshareable:
/boot is static and not shareable.
/var/run and /var/lock are variable and not shareable.

Some examples for shareable data:
/var/mail is variable and shareable. /usr or /opt are static and shareable.

Static / read-only being cleanly separated is becoming more important nowadays, since SSD's work better with RO data.

This grew quite a bit longer than expected.

Linux file packers
posted on 2014-05-03 08:15:08

Cheatsheet for zipping/unzipping:

# if you can install programs
$ unp <archive>

unp will determine the filetype etc. by itself and just works.

Else:

## EXTRACT

# .tar.gz
$ tar xzvf <archive>

# .tar.bz2
$ tar xjvf <archive>

And:

## COMPRESS

# .tar.gz
$ tar czvf <archive>.tar.gz <files or folders>

# .tar.bz2
$ tar cjvf <archive>.tar.bz2 <files or folders>

Linux system users
posted on 2014-05-02 09:35:11

Finding substantiated info on this topic via google is really hard...

Here's what the user id's mean, depending on the number range where they are from:

 The UID and GID numbers are divided into classes as follows:

 0-99:
 Globally allocated by the Debian project, the same on every Debian system.
 These ids will appear in the passwd and group files of all Debian systems.
 New ids in this range are added automatically as the base-passwd package updates.

 Packages which need a single statically allocated uid or gid should use these.
 Their maintainers should ask the base-passwd maintainer for ids.

 100-999:
 Dynamically allocated system users and groups. 
 Packages which need a user or group, but can have this user or group allocated 
 dynamically and differently on each system.
 Should use adduser --system to create the group and/or user. 
 adduser will check for the existence of the user or group.
 If necessary choose an unused id based on the ranges specified in adduser.conf.


 1000-59999:
 Dynamically allocated user accounts. 
 By default adduser will choose UIDs and GIDs for user accounts in this range, 
 though adduser.conf may be used to modify this behavior.


 60000-64999:
 Globally allocated by the Debian project, but only created on demand. 
 The ids are allocated centrally and statically, but the actual accounts are 
 only created on users' systems on demand.

 These ids are for packages which are obscure or which require many 
 statically-allocated ids.
 These packages should check for and create the accounts in /etc/passwd or 
 /etc/group (using adduser if it has this facility) if necessary.
 Packages which are likely to require further allocations should have a "hole" 
 left after them in the allocation, to give them room to grow.


 65000-65533:
 Reserved.


 65534:
 User nobody. The corresponding gid refers to the group nogroup.


 65535:
 (uid_t)(-1) == (gid_t)(-1) must not be used, because it is the error return 
 sentinel value.

Source for this was the Debian Policy Manual. The above is quoted almost 1:1 from that source.
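
For example, a minimal sketch of creating a dynamically allocated system user in the 100-999 range (the service name is made up):

# create a system user plus a matching group, without a home folder
adduser --system --group --no-create-home mydaemon
# check which uid/gid were picked
id mydaemon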

Working with linux users
posted on 2014-05-01 18:49:21

Usually you want this when creating a new basic user on a linux system:

$ useradd -m -U <username>

-m creates a home folder in /home/<username>, -U creates a group with the same name as the specified username. -U can be omitted, since it's the default, but what if the defaults on the system you are working on have been changed by someone else?

In case you want to initially revoke login rights, use

$ useradd -m -U -s /usr/sbin/nologin <username>

where -s sets the shell to /usr/sbin/nologin. Another possibility is to use /bin/false. The /bin/false setting will just prevent logging in without an error message, whereas /usr/sbin/nologin will either print This account is currently not available. or whatever is specified in /etc/nologin.txt.

For 'system' users you should use /bin/false if no login is needed, or /bin/sh if it is.

But what if these need no homefolder? Or a group with the same name as their username?

$ useradd -r -d /tmp -G <group> -s /bin/false <username>

The example above creates a user that cannot log in, has no dedicated home folder, and gets a user id within the system user id range. /etc/passwd will show /tmp as home folder, but that's about it. -r classifies the user as a system user, meaning the user / group id is usually below 1000.
This depends on the linux distribution you are using, IIRC; e.g. for Debian see here.

This has to do with internal filtering, so regular users can be distinguished from system users by id.

Maybe you have to set up an account for a real person, so let's also specify the full name in a comment via -c and give them access to a proper shell:

$ useradd -m -U -s /bin/bash -c "<REAL NAME>" <username>

Specifying the shell is important, otherwise the user only gets sh, not bash. If you want, you can change this behaviour in /etc/default/useradd.

How about adding the user to groups, too?

$ useradd -m -U -G <group1>,<group2> -s /bin/bash -c "<REAL NAME>" <username>

Changing user settings can also be done by editing /etc/passwd, but DO NOT DO THIS DIRECTLY!
Use vipw instead, it will lock the file so concurrent updates are impossible. The same goes for /etc/group, use vigr for it.
For editing the shadow files /etc/shadow (users) and /etc/gshadow (groups) use vipw -s and vigr -s.
Changing sudo rights is done via visudo.

You can also use one of the following:

chsh = change a user's shell
chfn = change user information such as real name and more
passwd = change user's password
usermod = change users's properties
userdel = delete a user

For userdel you usually want to use userdel -r <username> so the mail spool and the home folder are deleted as well. Keep in mind this might also delete the dedicated user group.

For creating users in batch use newusers.
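
A minimal sketch of such a batch file (names, passwords and paths are made up; check man newusers for the exact field semantics on your system):

# /tmp/batch.txt, one user per line in passwd-like format:
# name:password:uid:gid:gecos:homedir:shell
alice:changeme:::Alice Example:/home/alice:/bin/bash
bob:changeme:::Bob Example:/home/bob:/bin/bash

newusers /tmp/batch.txt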

Using rdesktop
posted on 2014-05-01 11:34:57

Either use a graphical frontend, or in case you want the console command for binding it to a shortcut:

rdesktop -g 800x600 -d <domain> -u <user> -p <pw> 123.123.123.123

-g is short for geometry and sets the resolution, which can be chosen arbitrarily. It can also be given as a percentage value, e.g. 80%.
-d, -u and -p mean domain, user and password.

ssh key forwarding
posted on 2014-04-28 18:26:53

To enable ssh key forwarding on startup, use these in .bashrc or .zshrc:

eval $(ssh-agent|grep -v echo)
ssh-add > /dev/null

This suppresses the output, and it will take effect the next time you log in to a new shell.

Furthermore, you need to have set these in your ssh config.

# Mind the indentation!
Host *
    ForwardAgent yes
    StrictHostkeyChecking no

    # if you want, try these
    User root
    VisualHostKey yes

User root is the user that will be used if the username is omitted from the ssh command and you do not want to use the current user on your machine. VisualHostKey shows the graphical fingerprint (randomart) of the machine you are connecting to.

On Linux, system-wide changes go into /etc/ssh/ssh_config. If you just want to change this for a specific user, edit the /home/<username>/.ssh/config file.

User specific changes on windows go into C:\Users\<username>\.ssh\config IIRC.
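
To verify that agent forwarding actually works, a quick check could look like this (host and user names are examples):

# if the forwarded agent is reachable on the remote host, this lists your local key(s)
ssh -A user@remotehost 'ssh-add -l'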

A proper du / disk usage alias
posted on 2014-04-26 21:18:50

This finds all files and folders in the current directory, sorts them from biggest to smallest, and prints human readable sizes for them.

function dus () {
du --max-depth=0 -k * | sort -nr | awk '{ if($1>=1024*1024) {size=$1/1024/1024; unit="G"} else if($1>=1024) {size=$1/1024; unit="M"} else {size=$1; unit="K"}; if(size<10) format="%.1f%s"; else format="%.0f%s"; res=sprintf(format,size,unit); printf "%-8s %s\n",res,$2 }'
}

Usage:

$ dus

Sample output:

[sjas@ctr-014 ~]% dus
3.1G     Downloads
1.7G     VMware-vCenter-Server-Appliance-5.5.0.5100-1312297-system.vmdk
1.4M     blog
576K     work
80K      hs_err_pid25560.log
80K      hs_err_pid24938.log
8.0K     bin
4.0K     yankring_history_v2.txt

This should be included in all linux distros by default.

Installing the Oracle JDK/JRE on Debian
posted on 2014-04-25 12:52:31

Sometimes you need the reference implementation (And not, i.e. the OpenJDK one that is easily available from the package repositories...) from the Oracle homepage. Might be you need exactly Java in v6 or v7 for IPMI for your Supermicro servers.

In this case several problems pop up:

  1. Oracle only provides .rpm and .tar.gz downloads.
  2. When getting the .tar.gz, you might have problems installing it.
  3. Setting new package sources in /etc/apt/sources.list might also cause other problems, depending on the information you dig up from the internet.
  4. If No. 3 works, you will run into the same trouble again once you have to redo and re-google what you did. (Of course this never happens. Haha.)
  5. Depending on what you install, you might miss the Java Web Start executable. Or it might be wrongly installed. (Of course, this never happens, either...)

So here is a better approach, which is easier to reproduce and will work.

First download the installer of choice. (Choose the 32 bit .tar.gz or the 64 bit one, according to your system, e.g. jdk-7u55-linux-x64.tar.gz.)

$ apt-get install java-package
$ make-jpkg jdk-7u55-linux-x64.tar.gz

Say yes and ok, and let it work its magic. Do not worry about error messages; at least in my case they were not of importance.

$ dpkg -i oracle-j2re1.7_1.7.0+update55_amd64.deb

And you are mostly done.

The only problem left might be that everything is installed correctly except javaws.

Check by running:

$ javaws

If this does not work, due to previously installed IcedTea implementation or whatnot, try this:

$ cd /etc/alternatives
$ ls java*

Then everything should point to the oracle install.

In my case everything did. Except the Web Start Link.

$ rm javaws
$ ln -s /usr/lib/jvm/java-7-oracle/bin/javaws javaws

Afterwards run

$ javaws

and you might see something like this:

[root@ctr-014 ~/Downloads]% javaws
Java(TM) Web Start 10.55.2.13-fcs 
Usage:  javaws [run-options] <jnlp-file>
        javaws [control-options]

where run-options include:
  -verbose              display additional output
  -offline              run the application in offline mode
  -system               run the application from the system cache only
  -Xnosplash            run without showing a splash screen
  -J<option>            supply option to the vm
  -wait                 start java process and wait for its exit

control-options include:
  -viewer               show the cache viewer in the java control panel
  -clearcache           remove all non-installed applications from the cache
  -uninstall            remove all applications from the cache
  -uninstall <jnlp-file>                remove the application from the cache
  -import [import-options] <jnlp-file>  import the application to the cache

import-options include:
  -silent               import silently (with no user interface)
  -system               import application into the system cache
  -codebase <url>       retrieve resources from the given codebase
  -shortcut             install shortcuts as if user allowed prompt
  -association          install associations as if user allowed prompt

Done.

Postgresql 9.3 install error on Debian Wheezy
posted on 2014-04-08 20:14:09

When trying to install the newest postgres DB on debian 7.x according to the howto on the postgres site, this step causes trouble:

Create the file /etc/apt/sources.list.d/pgdg.list, and add a line for the repository deb http://apt.postgresql.org/pub/repos/apt/ wheezy-pgdg main

If you instead paste the deb http://apt.postgresql... line directly into the sources list in the parent folder, everything works as expected...
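
A hedged sketch of the working variant (repository line as given above; the package name is just an example):

# append the repo line directly to /etc/apt/sources.list instead of a separate file
echo 'deb http://apt.postgresql.org/pub/repos/apt/ wheezy-pgdg main' >> /etc/apt/sources.list
# the repository signing key still needs to be imported as described in the postgres howto
apt-get update
apt-get install postgresql-9.3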

Find out which linux distro & version you are running
posted on 2014-04-08 09:21:20

Ever wondered what linux distribution or which version you are running?
Try:

$ lsb_release -a

or

$ cat /etc/*-release

This works on Fedora and Debian at least, haven't tested it on other distributions.

clusterssh on Fedora
posted on 2014-04-05 12:32:28

Ever had to administer several linux machines after another with quite the same configuration? Or had to work on several machines while being on really bad connection forcing you to reconnect and having to reopen half a dozen shell windows or even more?

clusterssh to the rescue!

Its features:

  • Connect to several servers at once.
  • Send all terminals the same input AT ONCE.

Start with getting the packages:

sudo yum install clusterssh -y

Afterwards it's a nice idea to create server aliases:

qwer name@box1.tv noname@box2.ru
asdf user@server1.xy anotheruser@srv2.yz

both qwer asdf

Put this either in /etc/clusters or in $HOME/.csshrc.

Upon calling cssh both, clusterssh will try to connect to name@box1.tv, noname@box2.ru, user@server1.xy and anotheruser@srv2.yz at once. Of course, cssh qwer and cssh asdf can be used separately, too.

You can also just leave the usernames out when you connect as root to all the other boxes. :)
Via the -l flag you can specify the user as which you want to log in on the remote machines. Go look at the examples on the manpage yourself, you might like this tool very much.
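
For example, with the cluster alias defined above:

# log in as root on every host of the 'qwer' cluster at once
cssh -l root qwer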

Linux terminal chat
posted on 2014-04-05 12:25:46

In the linux shell there is a possibility to have a chat between logged in users.

Terminal chat is used like this:

# show users logged in on the server/workstation
$ w

# open chat [i.e. 'write sjas pts/4']
$ write user <terminal>

# logout, same as in regular console
CTRL+D

Even though you might not need this often, it can be quite helpful while fixing things together on a server while working over ssh from remote machines.

Proper workspace switching in XFCE
posted on 2014-04-04 13:51:23

To change the jump-to-workspace and related shortcuts:

Applications Menu -> Settings -> Window Manager

Tab: Keyboard

Ctrl-1 to Ctrl-4 work pretty decently, instead of the previously bound function keys.

List available disks in Fedora 20
posted on 2014-04-04 13:41:11

Show all available devices via console:

# fdisk -l /dev/[sh]d?

Lists all hdX/sdX devices and their partitions.

bash screen essentials
posted on 2014-03-17 14:41:15

According to its man page GNU screen is a

full-screen window manager that multiplexes a physical terminal between several processes [...] There is a scrollback history buffer for each virtual terminal and a copy-and-paste mechanism that allows moving text regions between windows.

which sounds quite interesting.
All this means you can get several terminals into a single shell window. Other nice functions are detachable sessions, which is a killer feature for unstable ssh connections. Tmux (http://blog.hawkhost.com/2010/06/28/tmux-the-terminal-multiplexer/) seems to be the better alternative nowadays, but sometimes you might have to stick to screen.

$ screen

will invoke the program. There are lots of options possible here, see man screen.

Ctrl-a is by default the global hotkey. All screen commands have to be prefixed with it.

In screen there exist windows and regions. Windows are like several shells running in parallel. Regions are like the two window areas when you use split screen.

basic usage

Most important are:

? for help
| for creating a vertical split region
S for creating a horizontal split region
tab for switching to another region
w for showing a list of windows in the titlebar
" for showing a list of windows in a window, use j or k and enter for navigating
c for starting a new shell
a for changing the buffer within a window
X for killing a region (leaving the window alive)
k for killing a window (leaving the region intact)
\ for exiting screen

That is about it.

Open screen, create a region, switch to it, create a new shell window, be happy.

ssh

## create screen with session named '<sessionname>'
screen -S <sessionname>

## detach a session (while running screen)
ctrl-a d

## show available screen sessions
screen -list

## reattach a session (from the shell)
screen -r <sessionname>

## reattach if session wasn't detached earlier
## happens when you accidentally closed the window
## or when connection went broke
screen -d -r <sessionname>

If you have to copy huge amounts of data or have other long running screen sessions that should not be interrupted, screen will have you covered, literally.

Distinguish builtin shell functions, aliases, functions and commands
posted on 2014-01-20 10:04:41

If you have a grown .bashrc and wonder what commands you did define in the past, these are helpful:

bash

type -t

Will show you what exactly you are dealing with. (Builtin, alias, function, regular command.)

[jl@jerrylee ~]$ type -t git
file

[jl@jerrylee ~]$ type -t export
builtin

[jl@jerrylee ~]$ type -t gc
alias

gc is an alias which I have defined locally in my .bashrc. It has a function bound to it, as we will see.

Built-ins are looked up in the shell's main man page. (E.g. man bash or man zsh.)

alias

[jl@jerrylee ~]$ alias gc
alias gc='gitcommit'

If alias is used with no string afterwards, it will push out a complete list of all defined aliases.

declare / typeset

To look up functions:

[jl@jerrylee ~]$ declare -f gitcommit
gitcommit () 
{ 
    git c "$*"
}
[jl@jerrylee ~]$ typeset -f gitcommit
gitcommit () 
{ 
    git c "$*"
}

declare -f and typeset -f are synonymous.

zsh

Here the easiest way is to start with which.

Besides that, all the other commands are the same.

Find files via grep and open in editor
posted on 2014-01-18 23:33:31

Often you have to find a function definition or usage, or just a unique string or setting, but do not know in which file it is located. Grepping will return the results, but then you have to type in the filename again when opening the file in question in an editor. This here is intended to streamline the process.

USAGE:
grepe static void main or
grepv static void main.

No "" needed.

INSTALL:
Put this into your .bashrc.

#emacs:
grepe(){
    emacs $(\grep -irl "$*" .)
}
#vim:
grepv(){
    vim $(\grep -irl "$*" .)
}

I wonder why I did not look for this earlier. :o)

grep is used instead of ack so all files are searched, as ack will only search files it 'knows'.

Extract list of classes being used in legacy java project
posted on 2013-12-28 09:55:59

Currently I am working with a small sized legacy code base. To get a better overview, the actual LOC (lines of code) might be of interest:

# all lines including the whitespace
time \grep '.*' * -rc | cut -d ':' -f 2 | paste -sd+ | bc

Stripping the blank lines is left as an exercise to the reader.

This is ugly, but blazingly fast. time is just in there to see how fast things actually are.

Also a sorted list of all self-defined classes might make for a handy overview:

ack -h 'public class' | sed -e 's/^\s*//g' | cut -d " " -f 3-5 | sort | sort -k 2,3

Do yourself a favor, and use ack instead of grep. Nothing to regret in 99% of all use cases...

Installing emacs 24.3 on Fedora 19
posted on 2013-12-28 00:56:42

Installing emacs under one of the latest fedora releases is a bit of an act.

First, as of 12/2013 the newest version in yum is 24.1. If you are happy with this, then you are fine. If you need helm... things are different, since it needs a recent emacs version, at least 24.3. It would only be half as funny if 24.3 weren't exactly the newest version you can possibly get.

Anyway, get the download from the homepage here, and let the games begin!

If you chose one with this funny new .xz ending, unxz emacs-24.3.tar.xz followed by a tar -xf emacs-24.3.tar will do. (bzip will be deprecated for exchanging kernel files beginning 2014, it seems, so xz will stick.)

When trying the magic ./configure, make, make install triplet the configure step will fail with this message:

configure: error: The following required libraries were not found: libXpm libjpeg libgif/libungif libtiff Maybe some development libraries/packages are missing? If you don't want to link with them give --with-xpm=no --with-jpeg=no --with-tiff=no as options to configure

The solution is to install the -devel packages of all these libs. (At least I did, so I could run the regular ./configure step without disabling anything.) It seems to be fine if you have either libgif or libungif; I was missing the first one, but that did not pose a problem.
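
A hedged sketch of what that looked like on my side (package names are assumptions, check with yum search first):

yum install -y libXpm-devel libjpeg-turbo-devel giflib-devel libtiff-devel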

Afterwards, if you run configure again right away, it will tell you this:

configure: error: The required function `tputs' was not found in any library. The following libraries were tried (in order): libtinfo, libncurses, libterminfo, libtermcap, libcurses Please try installing whichever of these libraries is most appropriate for your system, together with its header files. For example, a libncurses-dev(el) or similar package.

Install ncurses-devel, run make and make install, and the newest emacs will be glad to be of service after about as long as it took to write this. :o)

Using yum's shell
posted on 2013-11-30 17:00:39

When using yum and having to search or change a lot of installed packages, it makes sense to use yum's shell.

Invoke it via

yum shell

Do not forget to sudo, when running it as non-root.

Once you are at the yum prompt, you can directly use the following:

help
search <package-name>
remove <package-name>
install <package-name>

This will lead to a list of changes that are to be applied. If you then leave the shell via exit or quit, nothing will be changed.

Before exiting, do

run

to apply the transactions you prepared.

scp properly explained
posted on 2013-11-16 17:05:03

scp is handy for transferring files from one host to another while being in a shell. How else would you transfer stuff without using FTP or some kind of version control? Of course there are other alternatives, but scp's advantage is that it is widely available, does not need any kind of setup on the other host (as long as you have access to your other box, that is) and encrypts the traffic. Also no GUI, mounting of USB sticks etc. pp. is needed. Sounds great.

The syntax looks like this, high-level:

scp SOURCE DESTINATION

Or lower level and a little more concrete:

scp <src-user>@<src-host>:/dir/file <dst-user>@<dst-host>:/dir/file

src is shorthand for 'source', dst for 'destination', in case you wondered. Of course there are flags and parameter settings that can be used. But using man scp yourself is not rocket science. :o)

To specify working scp calls it is helpful to properly understand the user@host:file syntax. If your current use case is to copy a file from the host you are currently on, <src-user>@<src-host> can be omitted. Just the filename (and its path, if you are not in the same directory on the shell) is needed. user is the username of the system user on the machine in question. This is the user with which you'd log into the remote machine. If passwords are needed, the system will prompt you to enter them.

If you have setup SSH keys properly, and are in the same folder as the file you want to transfer, a call could look like this:

scp example.txt <dst-host>:

<dst-host> is either a valid IP or a domain name pointing to the IP.

Here are several things omitted:

  1. The <src-user>, the user on the machine you are currently logged on, and <src-host>, the address of the current host you are on.
  2. The file path and file name at the destination.

So the file will be put in the homefolder of the user that is used on the remote machine. (This is the folder entry of the user entry, to look it up use grep <username> /etc/passwd on the remote machine, in case it is not /home/<username>.)

The colon in the example above MUST NOT be omitted.

Otherwise nothing will be copied to the remote host. You will not even get an error message, since scp treats the destination you specified as a plain file name, and the file is simply copied locally.

If you want to specify a certain folder on the remote host, either use the full path, or specify it in relation to the users home directory.

Examples:

## file on server will be '/home/sjas/.ssh/asdf.txt'
scp file.txt 123.123.123.123:.ssh/asdf.txt

## file on server will be '/tmp/file.txt'
scp file.txt sjas.de:/tmp

So long. Maybe as a last note: there is the -r flag, so you can copy whole directories and not just files.

man man
posted on 2013-10-31 18:17:24

What are the reasons for the numbers when firing up manpages? What is the difference between man printf and man 3 printf? (The first is about the shell command. The latter is about the C function.)

It's the section to which the entries belong. (Found in man man.)

1 Executable programs or shell commands
2 System calls (functions provided by the kernel)
3 Library calls (functions within program libraries)
4 Special files (usually found in /dev)
5 File formats and conventions eg /etc/passwd
6 Games
7 Miscellaneous (including macro packages and conventions), e.g. man(7), groff(7)
8 System administration commands (usually only for root)
9 Kernel routines [Non standard]

If there exists only one man entry, the number can be omitted. That is why man git or man gittutorial works, even though git is from section 1, but the tutorial is located in section 7.

Line numbers in vim
posted on 2013-10-27 14:27:18

Line numbers in vim are turned off by default, and there exist two variations of them. (Since vim version 7.3 IIRC.)

When viewing a file with a dozen lines, numbers can be shown absolute (1,2,3,4,5..12) or relative to the line you are in. If your cursor is located in line 3, numbering would change if you move to line 4. (2,1,[3],1,2,3,4..9) changes to (3,2,1,[4],1,2,3..8).

Relative numbering may look plain useless at first sight, but when was the last time you wondered which count to give a command relative to the current line? E.g. deleting the next x lines, where x is not just 4, 5 or 6 lines and you can no longer easily tell the count?
Less obvious at first, think simpler: do you use j and k for in-file navigation? Sure, { and } will jump paragraph-wise. But is this really the best means to an end?

When using absolute numbers, :10<CR> will jump to line 10 in the file. But 5j will jump five lines down, 5- goes five lines up from the current line, and relative numbering shows exactly which count you have to enter. For example, what if the file you are editing is like 4000 lines long, and you move around line 3500? Jumping to absolute lines gets old fast if you are in the four digits, line-wise.

To enable line numbers shown:

# turn on absolute numbering
:se nu
# turn on relative numbering
:se rnu

To disable: (Comes in handy for c&p things, especially when using vim in the console.)

# turn off absolute numbering
:se nonu
# turn off relative numbering
:se nornu

Of course, you cannot use both modes at once. But I digress. The incentive for this post was a different use case.

How to number the lines of the current code snippet for documentation purposes in-file after pasting?

There are a lot of possible solutions with vimscript usage (which is very likely not the easiest way). Or just use linux'/cygwin' nl command:

:%!nl -ba

The % indicates this is applied to the whole file.

:'<,'>!nl -ba

Whereas this is just applied to the lines currently marked in visual mode. Select lines, type :!nl -ba, and vim will insert the rest. Use man nl for the different options the nl command provides.

Just put this in your .vimrc:

vnoremap <leader>ln !nl -ba<cr>

This lets you annotate an area you selected in visual mode via Leaderkey-l-n.

To easily remove the line numbers in case they are not needed anymore, C-v enables vim's block-wise editing mode. At least under linux; for windows this was C-Q or C-q or something, I just remember it differed.
Anyway, this enables you to easily select a rectangle of text anywhere within your current file. The selection can be changed, deleted, whatever. Changes may only appear after exiting block-selection mode, so do not wonder when only one line seems affected while you edit.

Scale down PDF size with Ghostscript
posted on 2013-10-25 13:28:50

For downsizing large .pdf files, I tend to use Ghostscript as an open source alternative. There are programs you come across via google, but mostly these scale down fast and with bad filters. This leads to bad quality in the resulting PDFs, i.e. you cannot read text properly anymore, pictures are pixelated and so on.

There also exist proprietary solutions like Adobe Acrobat (the free Acrobat Reader does not have this functionality!). From a technical point of view, the Adobe suite is by my estimation roughly a factor of 10x to 20x faster than Ghostscript. The output quality is fine, so its price exists for a reason, to be fair. Last time I looked, it was around 200 USD.

This is what I am using on windows via Cygwin; it works analogously on regular linux boxes:

\gs  -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -dPDFSETTINGS=/ebook -sOutputFile=name_of_downsized_pdf.pdf  name_of_original_file.pdf

Works well, costs nothing. You just have to have Ghostscript installed.

Last time I used it on a 6 MB PDF with 60 pages of text and a dozen pictures, the output was around 1.5 MB and the run took like 2 minutes. The time-intensive part is scaling the pictures.

These notes from a shell script of mine, used for batch-converting several PDFs repeatedly in the past, may be of interest:

#!/bin/bash
#
# Converts .pdf files to lower quality.
# This may take a while.
# And you have to have ghostscript installed for this to work.
#
# the leading '\' is cosmetical, i have an alias binding on 'gs'.
# It tells the shell to ignore aliases for the following command IIRC.
#
# add '-q' flag for less verbose output
#
# -dPDFSETTINGS=/screen   (screen-view-only quality, 72 dpi images)
# -dPDFSETTINGS=/ebook    (low quality, 150 dpi images)
# -dPDFSETTINGS=/printer  (high quality, 300 dpi images)
# -dPDFSETTINGS=/prepress (high quality, color preserving, 300 dpi imgs)
# -dPDFSETTINGS=/default  (almost identical to /screen)
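
A minimal sketch of how the batch loop in such a script might look (the output file suffix is just an assumption):

for f in "$@"; do
    # write a downscaled copy next to the original, e.g. report.pdf -> report_small.pdf
    \gs -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -dPDFSETTINGS=/ebook -sOutputFile="${f%.pdf}_small.pdf" "$f"
done
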
Create strong random passwords with openSSL in linux
posted on 2013-10-21 18:18:56

openssl rand -base64 10

The 10 is the number of random bytes used; the base64-encoded password comes out a bit longer.

Insert date and time via vim
posted on 2013-10-16 00:30:54

Out of the box vim does not have any functionality to insert date/time information into the text files you are editing. (At least, it would be news to me if it had.) But unix comes to help: vim is programmable and makes it easy to bind anything to any key.

First how to obtain the proper date information. Test this on your command prompt:

date --rfc-3339=seconds

which produces something like

2013-10-16 00:03:26+02:00

To bind commands to keys in vim, the .vimrc file needs some editing. It lies in your homefolder. (This file could also be called _vimrc, if you are under windows. Depending on how you installed vim.) In case it does not exist, create a new one and insert the following line:

nnoremap <Leader>fs :.!date --rfc-3339=seconds<cr><esc>

If the timezone information is too much, just use instead:

nnoremap <Leader>fs :.!date --rfc-3339=seconds<cr><esc>$xxxxxx

and you have

2013-10-16 00:03:26

The binding can be used in normal mode in vim via \fs (press these three keys one after another), unless you have rebound your leader key. When used, the entire current line will be replaced with the date entry.

Leaderkey rebinding can be done like here.




Unless otherwise credited all material Creative Commons License by sjas