Posts from 2015-04

Common Lisp: CCL on a raspberry pi

posted on 2015-04-28 00:12:21

A Raspberry Pi is an ARM-architecture based computer. Usually my Common Lisp implementation of choice is SBCL / Steel Bank Common Lisp, but for ARM, CCL / Clozure Common Lisp might be a better fit.

According to some sources on the internet, this is due to CCL's native threading support.

download & compile & install

Head over to http://ccl.clozure.com/download.html and copy the download link.

wget <download-link>
tar xzvf ccl-<...>.tar.gz
mv ccl-<...> /usr/local/src
cd /usr/local/src/ccl/lisp-kernel/linuxarm
vi float_abi.mk
# uncomment FLOAT_ABI_OPTION = -mfloat-abi=hard and save, quit
make clean && make
ln -s /usr/local/src/ccl/armcl /usr/local/bin/armcl

Now you should have a running armcl binary, which sits in the right place according to the FHS (Filesystem Hierarchy Standard) and can be run from anywhere in your shell.
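A quick smoke test might look like this; --version and --eval are standard CCL command-line options, but double-check with armcl --help if your build differs:

armcl --version
armcl --eval '(format t "~a on ~a~%" (lisp-implementation-type) (machine-type))' --eval '(quit)'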

install quicklisp

cd /usr/local/src
curl http://beta.quicklisp.org/quicklisp.lisp > quicklisp.lisp
armcl
(load "quicklisp.lisp")
(quicklisp-quickstart:install)
(ql:add-to-init-file)
(quit)

Then vi ~/.ccl-init.lisp and wrap the generated code within (defun load-quicklisp () ... ). That way you have an easy-to-call function if you really need ql loaded ((load-quicklisp) will do at the REPL), but initial startup is faster. This and the float config trick are something I found at lispm.de, thanks to Rainer Joswig.
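A minimal sketch of what the wrapped init file might look like; the inner let form is the snippet quicklisp writes via ql:add-to-init-file, so the details may differ slightly with your quicklisp version:

(defun load-quicklisp ()
  ;; the body is just the generated snippet, wrapped in a defun
  #-quicklisp
  (let ((quicklisp-init (merge-pathnames "quicklisp/setup.lisp"
                                         (user-homedir-pathname))))
    (when (probe-file quicklisp-init)
      (load quicklisp-init))))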

quicklisp help

These might be helpful:

(ql:update-all-dists)
(ql:update-client)
(ql:system-apropos '<string>)
(ql:who-depends-on '<string>)

MySQL: restore single table from dump

posted on 2015-04-24 11:13:21

MySQL dumps are usually created from whole databases. But what if you only need a single table restored?

You could edit/sed/grep the dump for information on just this one table (and hopefully not fuck it up), or let mysql do the work. Simply restore the dump to a test database and then dump just the table in question, so you can load just the table dump back into the production database.

Keep in mind, this might take ages if you have extremely large dumps.

In the following it is assumed you have a working .my.cnf, so you do not have to enter the username and password with every shell call.
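For reference, a minimal ~/.my.cnf might look like this (adjust user and password to your setup):

[client]
user=root
password=YOUR_MYSQL_ROOT_PASSWORD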

#create db
mysqladmin create NAME_OF_TEMP_DB

#replay full dump
mysql NAME_OF_TEMP_DB < fulldump.sql

#dump table in question
mysqldump NAME_OF_TEMP_DB TABLE_NAME > table_name.sql

#load tabledump back into production
mysql NAME_OF_PROD_DB < table_name.sql

So simply mysqladmin - mysql - mysqldump - mysql and you are done. :)
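Once the table is back in production, the temporary database can be dropped again:

mysqladmin drop NAME_OF_TEMP_DB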

Raspberry Pi: Seafile installation from scratch plus WebDAV access

posted on 2015-04-21 21:34:33

the use case

After Dropbox cut a program I took part in and my free space went down from 25 GB to 3 GB, I had a reason to get an alternative up and running. iOS apps should work with it, too, as long as they can talk WebDAV.

There are a lot of comparisons of owncloud, pydio, seafile and all the alternatives, but I am somewhat suspicious of pydio, and owncloud seems to have problems once you have too many files, so I ended up testing seafile. The results were great, so a pi was bought and this guide is the result.

what will be covered

This is a rather detailed setup howto for getting a seafile install onto a brand new raspberry pi. It also contains side info on the networking stuff. That part tends not to be covered in most other guides, since all of it is generally considered 'trivial'. Which usually just means everybody was too lazy to give hints where that stuff is NOT trivial and is actual work to explain properly.

Seafile will run with a mysql backend and with an nginx webserver, so I can get some education on the latter myself, having worked almost only with apache until now.

However there are no guarantees, as once again, this is partly written just from memory.

prerequisites

You need these items:

  • a raspberry pi (get the latest model and be happy)
  • a case for the rasp, so it won't lie around in the open
  • a micro SD card with 4gb, with an SD card adapter
  • a cardreader (to write the pi's OS onto the micro SD with the SD adapter)
  • an AC adapter for the pi with a micro USB B connector
  • an ethernet cable (just a network cable with rj45 plugs)

And one of these:

  • an HDD/SSD with its own power supply
  • or a USB-only HDD/SSD plus a USB hub with its own power supply

If you try an external drive without an extra power supply, it won't run. The pi simply cannot provide enough power via its USB ports. You can see this through the red LED. If the voltage drops below 4.6V or something, it will flicker or just turn off.

DynDNS

Also some kind of DynDNS service would be helpful. Since there seemed to be a lot of trouble with the free ones, either spend some money or set up your own.

Since I was bored, already have a DNS server running and have a domain, I chose to roll my own setup. That way you either run your whole domain via your DNS server (meaning your DNS server is the primary server for your domain), or you can try 'subzone delegation', so your DNS server serves only a subdomain while the domain hoster keeps running your 'main' domain.

For how to do this, see some other tutorials; I covered it here.

setup the system

get the OS

Install the hardware, which should not pose a problem. If it does, seriously get someone with more knowledge on computers to help you!

Get the Raspbian image, which is a debian-based OS for the pi. Download and unzip it.

get the OS onto the SD card

Open one console and enter watch -d -n1 lsblk, then insert the SD card. That way you will know what the device is called on your linux box.

Open another shell window, and then put the raspbian image onto the card:

dd if=/path/to/the/<raspbian-file.img> of=/dev/sdX

Of course, fix path and device in the line above.

If you want to know how fast the copy process runs, try:

ps aux | grep dd

And search for the process id of the dd process from above. Then do in another shell window:

watch -n5 kill -usr1 <dd-process-id>

That way the copy process stats will be shown in the dd window every five seconds, which is nice since the 3GB image takes some time to copy. But pardon, I digress.

fix ssh and networking

Once the copying is finished, mount the card and fix ssh. That way you will not need to hook up the pi to a keyboard and monitor (I have no HDMI capable screen here, so ...) but just connect the ethernet cable and be done.

So:

mkdir asdf
sudo mount /dev/sdX2 asdf   # the root filesystem sits on the second partition
cd asdf
vim etc/network/interfaces

then either hand the eth0 interface a DHCP configuration (which is stupid), or just give it a fixed IP.

If your home network is set to the 192.168.0.0/24 net, try configuring this. If your network is 192.168.178.0/24 or 10.0.0.0/24 instead, adjust the IPs in the following examples accordingly:

allow-hotplug eth0
iface eth0 inet static
    address 192.168.0.254
    netmask 255.255.255.0
    gateway 192.168.0.1
    network 192.168.0.0
    broadcast 192.168.0.255
    dns-nameservers 192.168.0.1

If you cannot use vim, try the damned nano or whatever editor you fancy.

Also these might be a good idea:

echo pi > etc/hostname

and

vim etc/resolv.conf

where you'd enter this line:

nameserver 192.168.0.1

Save and quit.

vim etc/ssh/sshd_config

and make sure there is this:

PermitRootLogin yes

If it were set to PermitRootLogin without-password, you would not be able to log in with a password. Save, close.

cd 
sudo umount /dev/sdX

And you can pull the SD card out and put it into the pi.

Hook up your pi to the network via network cable to your home router.

Try ping 192.168.0.254 and see if something answers after the pi has booted (which should take no longer than a few minutes, I never measured the time). If it doesn't work, you either have network issues, or misconfigured something above.

If it answers your ping, get a host entry so connecting to the pi is easier: (/etc/hosts entries are basically local DNS records, this will do you no harm.)

echo '192.168.0.254 pi' >> /etc/hosts

and copy your ssh key onto it, so passwordless login will work:

ssh-copy-id root@pi

You could also use ssh-copy-id 192.168.0.254 - it will work the same, but in the rest of this text the pi's IP will be referenced by the local DNS name 'pi'. Period. Try it:

ssh root@pi

And you should be connected.

disk preparation

For the following it is assumed that you have already plugged in the USB hub/hdd, partitioned it and created a filesystem. Choose your tools and filesystem, do it, and do not forget to mount the disk afterwards.

To get the disk to be permanently included (across reboots), add it to /etc/fstab.

My entry looks like this:

/dev/sdb1       /var/seafile    btrfs   defaults          0       2

(The harddisk is /dev/sdb, with a single partition, filesystem btrfs (or whatever you would have to specify for mount -t when mounting by hand), default mount -o options, no dump, and a non-root fsck pass. In case you are not sure which device your harddisk is, try lsblk; the size should tell you which one to use. The last number is about the filesystem check order: 0=off, 1=first, 2=afterwards. The root filesystem is set to 1.)

The directory /var/seafile was created by me for later usage via mkdir, so I have a working mountpoint.

To mount everything newly listed in /etc/fstab, a mount -a will do. All this was done as the root user.

Sidenote:
lsblk will not show the mountpoints for btrfs volumes, so you have to use mount to check if everything looks as expected.
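A hedged alternative for the fstab entry: since device names like /dev/sdb can change between reboots once more USB devices get plugged in, mounting by UUID is a bit more robust. blkid shows the UUID, which then replaces the device name in /etc/fstab:

blkid /dev/sdb1
# example output: /dev/sdb1: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="btrfs"

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /var/seafile  btrfs  defaults  0  2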

actual seafile install

Install will be done with the MySQL backend, as the installer warns about problems when using a USB disk (which we do) together with SQLite.

get the install files

Head over to the official download section, so you will get the newest install files. There, choose the raspberry package. Intel stuff, whether 32 or 64 bit, will not work, since the raspberry has an ARM processor. See the output of uname -m if in doubt.

prepare the system

Copy the link location for wget'ing it later. Let's also create a dedicated user, as it is better, security-wise, to run the program without root rights.

apt-get install python2.7 python-setuptools python-imaging mysql-server python-mysqldb -y

Remember the mysql root password, you will need it later on.

mkdir /opt/seafile
useradd seafile
chown seafile.seafile /opt/seafile

Also chown the data folder to the seafile user, else the installer will have trouble:

chown seafile.seafile /var/seafile
chmod 775 /var/seafile

installing

su - seafile
wget https://github.com/haiwen/seafile-rpi/releases/download/v4.1.2/seafile-server_4.1.2_pi.tar.gz
tar xzvf seafile-server_4.1.2_pi.tar.gz
mkdir installed
mv seafile-server_4.1.2_pi.tar.gz installed/
cd seafile-server-4.1.2/

seafile setup

./setup-seafile-mysql.sh

Enter information:

NAME: is just a label
IP / DOMAIN: enter the pi's ip if you use seafile only on LAN / via VPN, or the dynamic dns
CCNET PORT: default, 10001, since nothing else is running besides it
PATH: /var/seafile/seafile-data, since /opt/seafile is on the SD card, and the data should go to the harddisk's mountpoint
SERVER PORTS: defaults, respectively 12001, and 8082
DATABASE INIT: 1, create new tables
MYSQL HOST: default, localhost
MYSQL PORT: default, 3306
MYSQL ROOT PASSWORD: the one you gave to mysql during install
MYSQL USER: seafile
MYSQL SEAFILE PASSWORD: use a new one
DATABASE NAMES: all default

After all this, the configuration is almost done. Basically the server could now be started and run.

Since we want to use the services like any other ones, we will just link the scripts into /etc/init.d/. The following is again done as the root user:

ln /opt/seafile/seafile-server-latest/seafile.sh /etc/init.d/seafile
ln /opt/seafile/seafile-server-latest/seahub.sh /etc/init.d/seahub

I used hard links on purpose, but I cannot remember why I thought this was a good idea.

Anyway, now you can do just:

service seafile start
service seahub start

and you will be asked to set up a seafile admin account, this time the one for the web interface, not the DB user.

Just use your email and yet another password, and remember them. You have to use this account for creating new user accounts, libraries, in short everything.

If your webinterface does not work, you might have to (re)start both services, in case you forgot one.
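To check whether both services actually came up and are listening, something like this might help (assuming netstat from net-tools is installed; ss -tlnp works the same way):

netstat -tlnp | grep -E ':(8000|8082|10001|12001)'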

test seahub

Now, going into your browser and entering the raspberry's local IP plus port 8000 should get you to the login screen. pi:8000 in the address bar will do, btw, if you set up the above-mentioned /etc/hosts entry.

With the web login you just created, you should be able to login. :)

Looks promising so far.

open your firewall for external access

When using the service externally, without a VPN connection, don't forget to open port 8000 in your firewall/router. For testing the webgui, this is enough. To actually use seafile, these must be reachable: (copy-paste from the seafile install message)

port of ccnet server:         10001
port of seafile server:       12001
port of seafile fileserver:   8082
port of seahub:               8000

Later we will also open 8080 so WebDAV can be used, plus 80 and 443, where nginx will be listening.

Of course, the services can be run on arbitrary ports. You might as well leave everything on default on the pi, but use different external ports in the router forwarding to the actual ones on the pi, for security reasons. If you know what you are doing and are bored reading this, oh well, just change it to your liking. Everybody else: use the default ports, it makes the setup easier to debug.

security considerations

When using the service from the outside without a VPN or SSH tunnel, your traffic is plaintext. 'Thou shalt use thee TLS encryption!' in that case, but for that you will have to use a proper web server instead of the built-in one that comes with seafile.

Read: apache or nginx.

WebDAV

Since the basic install is not everything that is needed, the WebDAV component has to be enabled as well.

configure webdav

Without a dedicated webserver this is rather easy.

In /opt/seafile/conf/seafdav.conf, set:

enabled = true

Save and close, plus afterwards:

service seafile restart
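For reference, the whole [WEBDAV] section in seafdav.conf typically looks roughly like this; the exact defaults may differ between seafile versions, so only change what you need:

[WEBDAV]
enabled = true
port = 8080
fastcgi = false
share_name = /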

test webdav

Besides using WebDAV from the iOS app in question, you can as well use the linux command line to test, via the davfs2 package. Install it on your home computer (if you run linux there, else have a look at the official manual). As root do these:

In /etc/davfs2/davfs2.conf set:

use_locks 0

Save, close.

mkdir /mnt/davtest
mount -t davfs -o uid=<your linux system user> http://pi:8080 /mnt/davtest

Then you will be prompted for user credentials; you can just use the web UI login from above, the one you created when you first started the seahub service.

The /mnt/davtest should now contain some more stuff, meaning WebDAV access works.

If in doubt, create a file with e.g. touch testfile in that folder, which you can then see in the web interface.

Now I have to repeat, using this externally means unencrypted data over the wire. Set up a proper webserver with TLS and configure WebDAV there, if you plan on using this setup from the outside of your home LAN without a VPN. That way you can also use proper fastcgi. ;)

a proper webserver - nginx with TLS

Since I need some nginx practice, this one will be used here.

First let's get the certificates up and running:

mkdir /etc/ssl/nginx
cd /etc/ssl/nginx
openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3650 -key ca-key.pem -out ca-cert.pem
openssl req -newkey rsa:2048 -days 3650 -nodes -keyout server-key.pem -out server-req.pem
openssl rsa -in server-key.pem -out server-key.pem
openssl x509 -req -in server-req.pem -days 3650 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem

When prompted to enter something, do as you like. You could also hit just ENTER all the time until it's finished.
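To double-check what was just created, the certificate can be inspected like this:

openssl x509 -in server-cert.pem -noout -subject -dates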

Then let's fix the domains the server is bound to:

In /opt/seafile/ccnet/ccnet.conf:

SERVICE_URL = https://www.yourdomain.com

In /opt/seafile/seahub_settings.py:

FILE_SERVER_ROOT = 'https://www.yourdomain.com/seafhttp'

Now on to the nginx config; you just have to change your domain below. Open /etc/nginx/sites-available/yourdomain.com and fill it in accordingly, to have http redirected to https and a working https setup:

server {
        listen       80;
        server_name  yourdomain.com;

        # force redirect http to https
        rewrite ^ https://$http_host$request_uri? permanent;
}

server {
        listen   443;
        server_name yourdomain.com;

        ssl on;
        ssl_certificate /etc/ssl/nginx/server-cert.pem;
        ssl_certificate_key /etc/ssl/nginx/server-key.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_session_timeout 5m;
        ssl_prefer_server_ciphers on;

        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_connect_timeout       300;
        proxy_send_timeout          300;
        proxy_read_timeout          300;
        send_timeout                300;

        location / {
                fastcgi_pass   127.0.0.1:8000;
                fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
                fastcgi_read_timeout 300;

                fastcgi_param  HTTPS            on;
                fastcgi_param  HTTP_SCHEME      https;
                fastcgi_param  PATH_INFO        $fastcgi_script_name;
                fastcgi_param  SERVER_PROTOCOL  $server_protocol;
                fastcgi_param  QUERY_STRING     $query_string;
                fastcgi_param  REQUEST_METHOD   $request_method;
                fastcgi_param  CONTENT_TYPE     $content_type;
                fastcgi_param  CONTENT_LENGTH   $content_length;
                fastcgi_param  SERVER_ADDR      $server_addr;
                fastcgi_param  SERVER_PORT      $server_port;
                fastcgi_param  SERVER_NAME      $server_name;
                fastcgi_param  REMOTE_ADDR      $remote_addr;

                access_log      /var/log/nginx/seahub.access.log;
                error_log       /var/log/nginx/seahub.error.log;
        }

        location /seafhttp {
                rewrite ^/seafhttp(.*)$ $1 break;
                proxy_pass http://127.0.0.1:8082;
                client_max_body_size 0;
                proxy_connect_timeout  36000s;
                proxy_read_timeout  36000s;
        }

        location /media {
                root /opt/seafile/seafile-server-latest/seahub;
        }
}

Now just create a proper link for sites-enabled and restart nginx:

ln -s /etc/nginx/sites-available/yourdomain.com /etc/nginx/sites-enabled/yourdomain.com
service nginx restart

From the browser you should be able to test things now, https://pi.
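One hedged note on the fastcgi_pass line above: as far as I know, seahub has to be started in fastcgi mode for nginx to talk to it this way (plain service seahub start serves ordinary HTTP on port 8000). If the proxied site throws errors, something along these lines, as described in the seafile manual, might be the missing piece:

service seahub stop
/opt/seafile/seafile-server-latest/seahub.sh start-fastcgi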

automatically start everything

Let us make them known as services to be run on startup. This should not be too hard, but it turned out to be a hairy problem.

Just using update-rc.d on the already present files won't work. seahub will start, but seafile will not.

Just putting service seafile start; service seahub start into /etc/rc.local will not work either. That way seafile will start, but seahub will not. Oh my.

Also, seahub tries to find seafile. (... ... ...)

Long story short: Open /etc/init.d/seafile and, at about line 150, comment out the call to "warning_if_seafile_not_running" inside the before_start function:

function before_start() {
    check_python_executable;
    validate_ccnet_conf_dir;
    read_seafile_data_dir;

    #warning_if_seafile_not_running;
    # ... rest of the function stays unchanged
}

That way the check is turned off, and seahub will come up. I honestly have no idea where the problem lies, but it's related to no proper start scripts being provided.
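With that check disabled, the rc.local approach from above should then bring up both services on boot; a minimal sketch, assuming the default raspbian /etc/rc.local is still in place:

# /etc/rc.local, before the final 'exit 0' line
service seafile start
service seahub start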

In the official manual there is a skeleton init script that you can adapt... Sadly that stuff over there is pretty outdated. Also, I simply chose not to bother with it, as it would just wrap the scripts we are currently using.

By now this article is finished, and you should have a raspberry with a working seafile+webdav install.

offtopic

For fun and educational purposes, some words on initialization scripts, through a very pointed little anecdote:

Wrapping scripts with another script will cause endless headaches when things go haywire.

On a legacy system running a rather complex web application with borked initscripts, three really great people could not find the error over the course of about 1.5 years, and not for lack of trying. File encodings, no proper initscripts, a subcontractor playing dumb (and not really having a clue; developers just ain't sysadmins, and restarting via 'their' scripts did work, after all), all of that on a medium-sized clustered (but partly dysfunctional, of course) production system with harsh uptime requirements. To make things worse, several people on the customer side had nagios notifications for EVERY SINGLE SERVICE, each checked every 5 minutes. You could not count the SMS arriving when a host went down, and even restarting a single service could cause a MESS, which of course did not help with locating the error. The initscripts (several application instances running on each of the machines) wrapped scripts which wrapped a script which wrapped scripts. Encoding was set in several applications, in the system, and also at boot time within grub. You name it: a puzzle in a puzzle in a puzzle in a puzzle.

I love bash, but debugging bash environments built from scripts referencing each other is something you might have to do when you happen to be in hell, where you have to burn for your sins. At least that is how I imagine it.

The final result: somewhere a forgotten '-' after a su. Once I found it, my day was over. I will never forget the moment I found the cause, even if I get a hundred years old. WRITE PROPER INITSCRIPTS, PEOPLE!

TODO

On the TODO list for this system could be:

  • logrotate and proper logging, since these are written in /opt/seafile/logs on the SD card, which is bad
  • a ramdisk for the /tmp folder
  • a custom fail2ban setup using the seafile configs

But for now, this post is finished.

To the brave soul reading this:
I hope you liked this little write-up.

vim: remote editing of files

posted on 2015-04-21 10:24:10

In general, this will do:

vim scp://remoteuser@server.tld//path/to/document

E.g.

vim scp://my_user:my_pass@sjas.de//var/www/domain/somefile.txt

The important part is the double slash after the domain, in case you specify absolute paths.

To avoid having to fiddle with passwords, create a .netrc file in your home folder containing entries like this one:

machine yourftp.somewhere.org login yourlogin password "yoursecret"
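Since the file contains passwords in plain text, it is a good idea (and for some tools a hard requirement) to make it readable only by you:

chmod 600 ~/.netrc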

Some more tricks can be found here, which is where this originally came from.

If vim complains about 'buftype' and that it cannot save, issue this command prior to saving from within vim:

:se buftype=

Benchmarking micro SD cards

posted on 2015-04-15 01:28:41

Benchmarked some micro SD cards, all of which were of the U1 standard, according to the print on each one.

speed in theory

The speeds mentioned on the packaging were:

  1. read ??mb/s, write ??mb/s
  2. read 90mb/s, write 50mb/s
  3. read 95mb/s, write 90mb/s

first card reader

[root@jerrylee /home/jl]# hdparm -t /dev/sdc
/dev/sdc:
 Timing buffered disk reads:  44 MB in  3.12 seconds =  14.12 MB/sec

[root@jerrylee /home/jl]# dd if=/dev/zero of=/dev/sdc bs=512 count=10000
10000+0 records in
10000+0 records out
5120000 bytes (5.1 MB) copied, 2.08556 s, 2.5 MB/s

[root@jerrylee /home/jl]# hdparm -t /dev/sdc
/dev/sdc:
 Timing buffered disk reads:  58 MB in  3.11 seconds =  18.68 MB/sec

[root@jerrylee /home/jl]# dd if=/dev/zero of=/dev/sdc bs=512 count=100000
100000+0 records in
100000+0 records out
51200000 bytes (51 MB) copied, 11.421 s, 4.5 MB/s

[root@jerrylee /home/jl]# hdparm -t /dev/sdc
/dev/sdc:
 Timing buffered disk reads:  58 MB in  3.10 seconds =  18.70 MB/sec

[root@jerrylee /home/jl]# dd if=/dev/zero of=/dev/sdc bs=512 count=100000
100000+0 records in
100000+0 records out
51200000 bytes (51 MB) copied, 11.5068 s, 4.4 MB/s
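The write numbers above are dominated by the tiny 512-byte block size. For a figure closer to the advertised sequential speeds, larger blocks and direct I/O paint a better picture; note that, just like the commands above, this overwrites the card:

dd if=/dev/zero of=/dev/sdc bs=4M count=25 oflag=direct
dd if=/dev/sdc of=/dev/null bs=4M count=25 iflag=direct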

bash: fun with programming

posted on 2015-04-11 22:37:59

While strolling around and doing some reading up on FreeBSD and its man pages, I came across the intro pages. There exist man 1 intro through man 9 intro. After having read them all, I wanted an overview of which manpages were referenced from these, which led to all this in the end.

With some messing around, this is what I ended up with finally:

[sjas@stv ~]$ MATCH=\\w\\+\([[:digit:]]\); MANPAGE="intro"; for (( i=1;i<10;i++ )); do echo "^[[33;1mman $i $MANPAGE^[[0m"; grep "$MATCH" <(man "$i" "$MANPAGE") | grep -v $(echo "$MANPAGE" | tr '[:lower:]' '[:upper:]') | grep --color "$MATCH"; done

Sidenote: Simply copy-pasting this will not work, see the ansi escape sequences part below on why. If you cannot wait, exchange the two occurrences of the ^[ characters with a literal escape, inserted via Ctrl-V followed by hitting Esc.

Since this makes use of really a lot of bash tricks, a write-up might be some fun and this post is the result. In case you don't understand something, try googling the term in question for further reference. This post is intended as a pointer on what to search at all.

As this grew quite long I could not be bothered to copy contents of man pages or insert links of wikipedia pages, so bear with me.

preface

As most people do not have a BSD installation ready, referencing the manpages of a linux command should help. A command with pages in several man sections is needed, so how about:

man -k . | awk '{print $1}' | sort | uniq -c | grep -v -e 1 -e 2

Which will give:

  3 info
  3 open

So let's just use the 'info' man pages.

man -k will search all manpages for a given string, in our case a literal dot, which should be included in every page. Of the output only the first column is needed, which is done via awk '{print $1}'. (Do not use cut -d' ' -f1 for things like this; it won't work if the columns are separated by several spaces.) sort the output, so duplicate commands are listed in a row, followed by uniq -c, which lists every unique occurrence along with its count. grep -v excludes all occurrences of either 1 or 2. (That is why -e is used for providing these, instead of piping through grep -v 1 | grep -v 2, which would work the same.)

overview

Now onto the real beef, which will look like this:

[sjas@nb ~]$ MATCH=\\w\\+\(.\); MANPAGE="info"; for (( i=1;i<10;i++ )); do echo "^[[33;1mman $i $MANPAGE^[[0m"; grep "$MATCH" <(man "$i" "$MANPAGE") | grep -v $(echo "$MANPAGE" | tr '[:lower:]' '[:upper:]') | grep --color "$MATCH"; done
man 1 info
man 2 info
No manual entry for info in section 2
man 3 info
No manual entry for info in section 3
man 4 info
No manual entry for info in section 4
man 5 info
       The Info file format is an easily-parsable representation for online documents.  It can be read by emacs(1) and info(1) among other programs.
       Info files are usually created from texinfo(5) sources by makeinfo(1), but can be created from scratch if so desired.
       info(1), install-info(1), makeinfo(1), texi2dvi(1),
       texindex(1).
       emacs(1), tex(1).
       texinfo(5).
man 6 info
No manual entry for info in section 6
man 7 info
No manual entry for info in section 7
man 8 info
No manual entry for info in section 8
man 9 info
No manual entry for info in section 9

The headlines are printed in bold yellow, the matched manpages are printed in red.

For a better explanation, here is the one-liner above transformed into a bash script with line numbers:

1  #!/bin/bash
2  MATCH=\\w\\+\([[:digit:]]\)
3  MANPAGE="open"
4  for (( i=1;i<10;i++ ))
5  do 
6      echo "^[[33;1mman $i $MANPAGE^[[0m"
7      grep "$MATCH" <(man "$i" "$MANPAGE") | grep -v $(echo "$MANPAGE" | tr '[:lower:]' '[:upper:]') | grep --color "$MATCH"
8  done

shebang

The shebang in line 1 consists of the magic number #!, meaning the first byte of the file represents # and the second byte !. Unix systems scan files which have their executable bit set for these. When they are found, the rest of the line is treated as the path to the interpreter with which the script should be run. Its maximum length is 128 characters due to a compile-time constraint, at least in FreeBSD.

variable declaration, definition

Lines 2 and 3 declare and define two variables, arbitrarily called MATCH and MANPAGE by me. By convention these are uppercase, but lowercase would work as well. When a not-yet-present variable is introduced (the shell does not already know one with the same name) via its name and a =, it is declared (memory is reserved and it is created) and assigned the null string. When something follows after the =, it is also defined at once and will hold the string which follows. Bash variables are usually untyped when used like this (it's all strings), but with the declare or typeset built-ins (see man bash and search there) you can also define a 'variable' to be an integer, an indexed or associative array, a nameref (meaning it's a symlink to another variable), to be read-only, to be exported, to automatically uppercase the string of its definition, and such. But I digress...
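A few quick examples of what declare can do (the associative array needs bash 4 or newer):

declare -i num=41      # integer variable, assignments are evaluated arithmetically
num=num+1              # now 42, thanks to -i
declare -r const=fixed # read-only, later assignments are an error
declare -A map         # associative array
map[key]=value
echo "$num ${map[key]}"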

quoting and quotation marks (or lack thereof)

"quoting" is the act of 'removing the special meaning of certain characters or words to the shell'.

The second var is just the string 'open' in double quotation marks, whereas the first is also a string, just not enclosed within any quotation marks.

There are quite some variants that can be used:

'
"
(nothing)
\'
\"

In bash, everything in between single quotes is taken literally, no EXPANSION or other substitutions will take place in between the marks. There are these kinds of expansions or substitutions:

- brace expansion
- tilde expansion
- parameter and variable expansion
- command substitution
- arithmetic expansion
- word splitting
- and pathname expansion

Look them up in the bash manual, if you are not already second-guessing your decision to read this posting.

Double quotes are used for enclosing strings while still letting bash recognize these:

$ = most expansions
` = command substitions
\ = escapes
! = history expansion

That way, the expansion mechanisms mentioned above can be used to create strings dynamically.

A single quote cannot appear within a single-quoted string (not even escaped), and if you need a literal quotation mark (e.g. for passing a string of parameters to a command which is wrapped within another command) you can use pairs of \' or \".

If quoting is omitted, escape spaces and other special characters via the already mentioned escape character \, to get a coherent string, as shown in the first variable.
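A short demo of the differences:

var=world
echo 'hello $var'     # single quotes: prints hello $var literally
echo "hello $var"     # double quotes: prints hello world
echo hello\ $var      # no quotes: the space is escaped, expansion still happens
echo "she said \"hi\" and 'bye'"   # escaped double quotes and a plain single quote inside double quotes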

shell escaping and special characters

Since \, ( and ) are special characters in bash, and we want to end up with this string for the regular expression to match our manpage mentions:

'\w\+([[:digit:]])'

they have to be escaped.

regular expressions and character classes

The string itself is a regexp expressing 'match one or several (\+) word characters but no whitespace (\w), followed by an opening parenthesis ((), an element of the character class of digits, which means a number ([[:digit:]]), and finally a closing parenthesis ())'. Character classes are part of the POSIX standard and nice to know, since they are easier to use than \s or \w and will just work regardless of implementation, as long as your system is POSIX-compliant.
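A quick way to see the pattern in action:

echo 'see also printf(3), ls(1) and intro(9)' | grep --color '\w\+([[:digit:]])'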

for loop

Line 4 is the header of the for loop, whereas lines 5 and 8 enclose its body. The loop keeps running as long as its condition evaluates to true. Usually bash's for is used like for i in <number-sequence>; do ..., but this is not everything which is possible.

i is the control variable, which is referenced via "$i" later on, just as the other variables are. ($MANPAGE, $MATCH)

arithmetic evaluation

The (( )) parentheses trigger arithmetic evaluation for what is contained in between, which here are three statements in a row. The second statement is the loop's condition; as long as it evaluates to true, the loop keeps running. Besides that, the c-style for loop should be self-explanatory.

This is basically the same as $(( ... )) (arithmetic expansion), the difference being the missing $. In bash, $ denotes most kinds of expansions or substitutions, and references to a variable's value are also prefixed with a $. In regular expressions, on the other hand, it denotes the end of the line, just for the record.
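Two small examples to show the difference between evaluation and expansion, plus the c-style loop:

i=3
(( i > 2 )) && echo "i is greater than 2"    # (( )) used as a test via its exit status
echo $(( i * 7 ))                            # $(( )) expands to the value, prints 21
for (( j=0; j<3; j++ )); do echo "$j"; done  # prints 0 1 2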

ansi escape sequences

Line 6 is for getting some color into the shell. The ^[ is a literal escape character, and is needed to get bash to recognize the usage of ANSI escape sequences. To insert it, use Ctrl-V followed by Esc. It is a single character internally, even if its representation on the screen uses two characters; you can see this when you delete it via backspace.

Usually the ANSI sequence part goes like this: <esc>[ <some numbers> m, where the [ denotes the start and m denotes the end of the escape-number list. 33 happens to be the number for yellow, red would be e.g. 31. The 1 just means bold. Depending on the feature set of the console/terminal emulator you use, you could use the corresponding number code to make text underlined or let it blink. The 0 disables all non-standard settings again, so the text afterwards is in the regular color and non-bold again.
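The same effect is easier to reproduce with printf, which understands \033 (the escape character) in its format string, so no literal escape has to be inserted:

printf '\033[33;1m%s\033[0m %s\n' "bold yellow" "back to normal"
printf '\033[31m%s\033[0m\n' "red"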

Since the next part is a little bit more complex, here line number seven from above for easier reference:

7      grep "$MATCH" <(man "$i" "$MANPAGE") | grep -v $(echo "$MANPAGE" | tr '[:lower:]' '[:upper:]') | grep --color "$MATCH"

piping

The | character denotes piping. This simply means the part left of it is executed and the part to its right takes the left part's output as its input via a character stream. (I hope this is correct, no warranty on that. :)) Internally a pipe is created by linking two file descriptors of two processes together.

process substitution

In the following, xyz will denote an arbitrary linux/unix command producing some output to the shell, in the hope that this will help understanding.

<( xyz ) denotes process substitution (also look it up in man bash ;)), where the output of the command xyz is written to a file referenced by a file descriptor whose name is passed as an argument to the calling command, grep.
If >( xyz ) were used, xyz would read from, not write to, the file referenced by the file descriptor.

Phew. This sounds way harder than it actually is.

grep <searchterm> <( xyz ) means: grep the file descriptor naming the open file, where xyz has written its output to, for <searchterm>.

Process substitution and the file descriptor are used so that grep is handed something it can treat like a file, containing the output stream which our xyz command above, being man <number> <manpage-name>, provides. (Piping man's output into grep would have worked here as well, since grep also reads standard input, but this way the man call stays an argument of the grep call.)
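The classic example for process substitution is comparing the output of two commands, which would otherwise need temporary files:

diff <(ls /etc) <(ls /usr/share)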

command substitution (through a subshell)

$( ... ) denotes a sub-shell, which will pass its result to its parent shell. An older form is to use a pair of backticks, but this form is deprecated:

` ... `

Prior to executing grep -v on the input it is given from the pipe, the subshell is executed as a forked child of the calling process (the invoking shell), which waits for it; the result is then handed back, and grep -v resumes execution with the subshell's output as its search term.

This may sound like a contradiction to what was said about searching in files, but it ain't. The search term of grep can come from another expression's evaluation, while the data to search comes in via the pipe, which uses the connection of two processes' file descriptors; that closes the circle.

It should also be noted that if the search term comes from an expression which hands back a list of several words, only the first word is used as the search pattern (the remaining ones would be treated as file names).

Proof:

[sjas@stv ~/test]$ grep --color $(ls -aF | grep '/' | grep './') <(ls -alhF)
/dev/fd/63:drwxr-xr-x  2 sjas  sjas     2B Apr 12 10:40 ./
/dev/fd/63:drwxr-xr-x  6 sjas  sjas    18B Apr 12 10:40 ../

The colored part of the output is just ./, as grep won't search for ../. In case you wanted to achieve something like that, you'd have to use a loop like for i in $(command); do grep --color "$i" <file>; done.

the rest

tr is just used to replace every matched character with another one, here via the character classes: each lowercase char is exchanged with its uppercase equivalent.
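For example:

echo info | tr '[:lower:]' '[:upper:]'   # prints INFO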

For all die-hards that see this, thank you for reading.

HTTP: list of status codes

posted on 2015-04-09 10:16:44

Here is a list of the HTTP status codes, copied more or less straight from RFC 7231. RFC 2616 is obsolete, but if you look them up there, the status codes are almost the same.

Status Code Definitions

Informational 1xx

100 Continue
101 Switching Protocols

Successful 2xx

200 OK
201 Created
202 Accepted
203 Non-Authoritative Information
204 No Content
205 Reset Content
206 Partial Content

Redirection 3xx

300 Multiple Choices
301 Moved Permanently
302 Found
303 See Other
304 Not Modified
305 Use Proxy
306 (Unused)
307 Temporary Redirect

Client Error 4xx

400 Bad Request
401 Unauthorized
402 Payment Required
403 Forbidden
404 Not Found
405 Method Not Allowed
406 Not Acceptable
407 Proxy Authentication Required
408 Request Timeout
409 Conflict
410 Gone
411 Length Required
412 Precondition Failed
413 Payload Too Large
414 URI Too Long
415 Unsupported Media Type
416 Range Not Satisfiable
417 Expectation Failed

Server Error 5xx

500 Internal Server Error
501 Not Implemented
502 Bad Gateway
503 Service Unavailable
504 Gateway Timeout
505 HTTP Version Not Supported
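To quickly see which status code a server actually answers with, curl can print just the code (a small helper, not part of the RFC content above):

curl -s -o /dev/null -w '%{http_code}\n' http://example.org/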

GREP: find ip address

posted on 2015-04-07 14:35:34

When having to have a look at all IPv4 addresses in a logfile, try this:

egrep '[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}' <filename>
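If you only want the addresses themselves (plus how often each one shows up), -o helps:

egrep -o '[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}' <filename> | sort | uniq -c | sort -rn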

DNS: check server for misconfiguration

posted on 2015-04-03 21:10:48

To find out if your provider's DNS servers are misconfigured such that an attacker can find out all subdomains of a given domain, try this:

dig AXFR <domain-name> @<ns-server-of-domain-name>

If it succeeds, the server is missing a whitelist of machines that are allowed to request zone transfers, so everyone can request them.
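To find the nameservers to test against in the first place:

dig NS <domain-name> +short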


