Posts tagged virtualization

proxmox usb passthrough to VM
posted on 2016-08-17 09:35

This works while the VM is already running, no reboot needed.

Plug the USB device into your hypervisor.

Run lsusb to see if it shows up:

root@server:~# lsusb
Bus 002 Device 002: ID 8087:8002 Intel Corp. 
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 8087:800a Intel Corp. 
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 003: ID 0557:2419 ATEN International Co., Ltd 
Bus 003 Device 002: ID 0557:7000 ATEN International Co., Ltd Hub
Bus 003 Device 004: ID 1058:25a2 Western Digital Technologies, Inc. 
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Here the identifier is 1058:25a2 (the Western Digital disk).

Now run qm list to find your VM's ID.

root@server:~# qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID       
     10280 vm_01                running    25600           5580.00 482591    
     10281 vm_02                running    6144             500.00 764248

Enter console via qm monitor <vmid>:

root@server:~# qm monitor 10280
Entering Qemu Monitor for VM 10280 - type 'help' for help
qm> 

There, run info usbhost:

qm> info usbhost
  Bus 3, Addr 3, Port 12.1, Speed 1.5 Mb/s
    Class 00: USB device 0557:2419
  Bus 3, Addr 4, Port 1, Speed 480 Mb/s
    Class 00: USB device 1058:25a2, Elements 25A2
qm> 

Via the device identifier we know it is on bus 3, port 1. Now attach it to the virtual machine:

qm> device_add usb-host,hostbus=3,hostport=1
qm> 

Running lsblk from within the VM now shows the plugged-in hard disk.
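
Note that device_add only lasts until the VM is powered off. If you want the disk attached persistently, Proxmox can also pin the device in the VM config via qm set; a hedged example using the IDs from above (check the qm manpage of your version for the exact syntax):

qm set 10280 -usb0 host=1058:25a2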

linux p2v via rsync
posted on 2016-08-03 13:09

To virtualize an existing and running linux system via rsync, install a fresh linux system in the VM first. (Or just do the partitioning of the VM disk which the system is supposed to run on afterwards.) It helps to use just a single partition for /; otherwise you have to sync the mountpoints individually. In that case, create a script in case you have to redo the data sync, which is likely to happen (a sketch follows at the end of this post).

If you installed a complete system first, consider backing up its /etc/fstab, in case you do not want to fix it afterwards by hand but just copy-paste the config back.

Also, if you did not install a complete linux system on the destination VM, you will have to fix the boot manager (read: grub2 nowadays) after the initial sync. If you did a complete install, just exclude /boot from rsync.

Boot from a live disk like GRML and mount your partition(s) where you want the data to end up.

cd into the folder where you mounted the destination system's / in your live-disk environment.

Then:

rsync -av --delete --progress --exclude=/dev --exclude=/sys --exclude=/proc --exclude=/mnt --exclude=/media --exclude=/boot <source-server>:/* .
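
If you have more mountpoints than just /, wrapping the sync in a small script (as suggested above) saves typing when you have to redo it. A minimal sketch, assuming the destination partitions are already mounted under /mnt/target and <source-server> is a placeholder for the machine being virtualized:

#!/bin/bash
# hypothetical p2v re-sync helper; / itself is handled by the rsync call above
SRC="<source-server>"       # placeholder: the machine to virtualize
TARGET=/mnt/target          # assumption: where the destination VM's / is mounted

for dir in /home /var /srv; do              # adjust to your actual mountpoints
    rsync -av --delete --progress "$SRC:$dir/" "$TARGET$dir/"
done
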
proxmox: restoring backups
posted on 2016-06-11 10:42

First, create a backup via the GUI, on the backup tab of your VM.

Go to the backup save location and copy the file to wherever you want to restore it. Maybe you need to fix ssh keys first to copy the data directly, i.e. copy the current host's content of ~/.ssh/id_rsa.pub over to the ~/.ssh/authorized_keys of your destination machine.
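
A hedged example of the copy step, assuming the backup ended up in the default local dump directory (/var/lib/vz/dump) on both hypervisors:

scp /var/lib/vz/dump/vzdump-qemu-100-2016_06_06-19_38_36.vma.lzo root@<destination-hypervisor>:/var/lib/vz/dump/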

After ssh'ing to the destination hypervisor, do:

qmrestore vzdump-qemu-100-2016_06_06-19_38_36.vma.lzo 616 -storage local

This would restore your backed-up VM image automatically to a 'newly created' VM with ID 616 on your local storage.

linux: resize vm to full disk size
posted on 2016-06-11 10:38

After resizing the virtual hard disk of your virtual machine, several other steps are needed before you can use the additional space within the VM. This post only covers growing the disk, which usually just works. Downsizing uses the same steps in reverse order, but there you can easily kill your currently running system. Handle downsizing with very, very much care.

This guide assumes you have a single partition, which is used by LVM, wherein you have your filesystem(s) in different logical volumes.

resize the partition

I have a VM with the hostname 'test', which has a single disk (/dev/vda) with a single partition (/dev/vda1), which is used by LVM. LVM volume groups are usually named after the hostname (best approach I know of, so here: test), and the logical volumes after what they are used for (root, swap) or where they are mounted (e.g. var_lib_vz, not shown here).

root uses ext4 as file system.

Initially the disk size was 50G and was increased to 500G.

After the disk size was increased, you can see the available space on the device:

root@test:~# lsblk
NAME             MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0               11:0    1 1024M  0 rom
vda              253:0    0  500G  0 disk
+-vda1           253:1    0   50G  0 part
  +-test-root 252:0    0 14.3G  0 lvm  /
  +-test-swap 252:1    0  976M  0 lvm  [SWAP]

Use a partition manager of your choice (fdisk or cfdisk for disks with an MBR, gdisk or cgdisk for disks using a GPT, or parted if you know what you are doing) and delete your partition. Recreate it with the maximum size, making sure it starts at the same sector as before, then reboot.
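
With fdisk the sequence looks roughly like this (a sketch of the keystrokes, not literal output; the crucial part is that the recreated partition starts at the same sector as before, otherwise the LVM PV on it gets corrupted):

root@test:~# fdisk /dev/vda
  p   print the partition table and note the start sector of vda1
  d   delete partition 1
  n   recreate it, same start sector, default (maximum) end sector
  t   set the partition type back to 8e (Linux LVM) if needed
  w   write the changes, then reboot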

Then it should look like this, with adjusted partition size:

root@test:~# lsblk
NAME             MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0               11:0    1 1024M  0 rom
vda              253:0    0  500G  0 disk
+-vda1           253:1    0  500G  0 part
  +-test-root 252:0    0   49G  0 lvm  /
  +-test-swap 252:1    0  976M  0 lvm  [SWAP]

resize PV, LV, file system

First make LVM claim the additional free space: (It will 'partition' it into physical extents it can work with, chunks of 4 MiB by default.)

root@test:~# pvresize /dev/vda1
  Physical volume "/dev/vda1" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

Since the PV was already a member of the VG, there is no need to extend the VG.
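
If you want to double-check before touching the LV, pvs and vgs show the new physical volume size and the free extents in the volume group (check the PSize/PFree and VFree columns):

root@test:~# pvs
root@test:~# vgs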

Now for the actual volume:

root@test:~# lvextend -L 499G /dev/test/root
  Size of logical volume test/root changed from 49.04 GiB (12555 extents) to 499.00 GiB (127744 extents).
  Logical volume root successfully resized.

Here I specified it to be resized to 499GB. If I wanted to just use all available space, I'd do:

lvextend -l +100%FREE /dev/mapper/test-root

root@test:~# lvextend -l +100%FREE /dev/mapper/test-root
  Size of logical volume test/root changed from 450.00 GiB (115200 extents) to 499.04 GiB (127755 extents).
  Logical volume root successfully resized.

The -L is just easier to remember.

Lastly, resize the used filesystem:

root@test:~# resize2fs -p /dev/mapper/test-root
resize2fs 1.42.13 (17-May-2015)
Dateisystem bei /dev/mapper/test-root ist auf / eingehängt; Online-Größenänderung ist
erforderlich
old_desc_blocks = 1, new_desc_blocks = 32
Das Dateisystem auf /dev/mapper/test-root ist nun 130821120 (4k) Blöcke lang.

Verify it:

root@test:~# df -h
Dateisystem              Groesse Benutzt Verf. Verw% Eingehaengt auf
udev                      983M       0  983M    0% /dev
tmpfs                     201M    3.2M  197M    2% /run
/dev/mapper/test-root  492G    2.3G  469G    1% /
tmpfs                    1001M       0 1001M    0% /dev/shm
tmpfs                     5.0M       0  5.0M    0% /run/lock
tmpfs                    1001M       0 1001M    0% /sys/fs/cgroup
proxmox: qemu-img convert
posted on 2016-06-11 10:33

In proxmox you sometimes want to convert images from one type to another.

available types

QCOW2 (KVM, Xen)    qcow2
QED   (KVM)         qed
raw                 raw
VDI   (VirtualBox)  vdi
VHD   (Hyper-V)     vpc
VMDK  (VMware)      vmdk
RBD   (ceph)        rbd

example

qemu-img convert -f raw -O qcow2 vm-100-disk-1.raw vm-100-disk-1.qcow2

-f is the source image format, -O the output format. Look at the manpage to guess why -f is called -f.
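
To verify the result, qemu-img info prints the format and the virtual/actual size of the new image:

qemu-img info vm-100-disk-1.qcow2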

Virtualization types
posted on 2015-06-27 18:57:55

For a more abstract view, there exist different perspectives on virtualization.

This post intends to give a practical overview of these and of the currently available technologies. Keep in mind it is a work in progress and will get additional content in the future; by then this note will be removed.

First perspective: virtualization classes

hardware emulation

A complete set of hardware is emulated in software, such that to the guest no distinction seems to exist.

The virtualization software translates all hardware instructions (CPU, chipset, I/O, ...) between guest and host, such that a completely different set of hardware seems to be present.

That way, with a big hit on performance, architectures other than the one provided by the host system can be made available, e.g. MIPS / ARM / SPARC on x86.

The guest OS runs natively without changes.

Software:

  • qemu (used standalone, i.e. without KVM acceleration)
  • bochs

hardware virtualization

No CPU emulation takes place; only the chipset and other hardware are emulated. Some CPU instructions may be intercepted or altered, but CPU-wise no emulation happens.

This yields far better performance than hardware emulation does, but you usually have to stick with one kind of architecture.

Software:

  • KVM
  • VirtualBox
  • VMware (ESXi, Workstation)
  • Hyper-V

paravirtualization

No hardware emulation takes place; instead the host offers an API for hardware access to the guests.

Different architectures will NOT run.

Guest operating systems may need to have their kernels patched so that this API can be used. Xen has different operating modes, depending on the degree of paravirtualization being used.

Software:

  • XEN
  • VMware vSphere (device drivers are partly paravirtualized; in the past this was also the case for CPUs)
  • KVM (see virtio drivers)

Note: KVM is not a pure paravirtualizer, it just also provides paravirtualized drivers alongside the fully virtualized ones. It also uses qemu under the hood for hardware emulation.

operating-system-level virtualization

No hardware emulation takes place, and the operating system kernel is shared.

Software: (native)

  • Linux : LXC (and docker on top of it, see below)
  • BSD : jails
  • Solaris : zones

Software: (patched kernel needed, thus only backported changes = bad.)

  • Linux : Parallels Virtuozzo, OpenVZ, VServer

custom kernels or not?

Just leave the technologies needing kernel patches alone; here is why I guess this is the better choice:

The same development will eventually take place as it did with KVM vs. Xen: all major linux distributions chose KVM as the primary virtualization technique once a solution (read: KVM) was present within the mainline kernel, and Xen was dropped. I'd be astonished if this were different with OpenVZ vs. LXC.

LXC just got fresh support in Proxmox, and will likely supersede Virtuozzo in the future. (But that's just an educated guess of mine.)

difference between docker and e.g. LXC

Currently there is a lot of fuss about docker for 'app virtualization'. docker used to use LXC as a backend, but nowadays they develop their own userland library called libcontainer for managing the OS facilities their product needs to run.

Google's lmctfy ('let me contain that for you'), which has the same scope as docker, is currently stalled according to the github project readme:

lmctfy is currently stalled as we migrate the core concepts to libcontainer and build a standard container management library that can be used by many projects.

second perspective: virtualization types

type 1: baremetal

Here you have a minimal OS acting as hypervisor and virtual machine manager, and most interaction flows directly between VM and processor, without passing through the hypervisor OS.

type 2

A regular OS like any linux distribution, a Windows variant or Mac OS X is used, and your virtualization software is installed on top of it.

All system calls have to pass through the emulated/virtualized hardware provided by the host OS, i.e. through the host OS / the hypervisor.

process-based

This is simply all the container stuff, where a guest OS runs as just another process (tree) within the host OS.

background 1: hardware-supported virtualization features

Hardware virtualization purely in software is costly and slow. Processors nowadays usually provide instruction set extensions like VT-x (Intel), VIA VT (VIA) or AMD-V (AMD), depending on the manufacturer.

These implement access control specifically for virtualization, in addition to the rings we will talk about in a minute.

With VT-x there basically exist two modes:

  • VMX Root Operation
  • VMX non Root Operation

Hypervisors run in VMX root operation mode, VMs do not.
If a VM runs ring-0 code (see below) in non-root operation, the hypervisor can catch these instructions since it runs in root operation mode, basically implementing trapping.

Prior to this, binary code from the VM was passed to the HV and translated on the fly for security reasons. With the extra instructions, this of course happens much faster.

To further speed things up, there also exist hardware implementations for 'nested paging' / SLAT (second level address translation). These are called EPT ('Extended Page Tables', Intel) or RVI ('Rapid Virtualization Indexing', AMD) and make 'shadow page table' management in hardware possible. That way MMU-intensive workloads (MMU = memory management unit of the CPU) can usually be sped up.

Also, you may have to turn these CPU virtualization features on in the BIOS if your hypervisor is slow as hell. It can be the case that the mainboard has them deactivated by default (for whatever reason).
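
To check whether your CPU advertises the extensions at all, grep for the corresponding flags (vmx for Intel, svm for AMD); a count of 0 means no support. A present flag does not guarantee the BIOS has the feature enabled, though; if it is locked off, loading kvm will complain in dmesg (something along the lines of 'disabled by bios'):

egrep -c '(vmx|svm)' /proc/cpuinfo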

If you really want to know more of the theory behind this, head over to VMware's documentation. For the 'light' version, try the technical background section of the VirtualBox manual.

background 2: kernel protection rings / privilege levels

These are separations such that processes within a certain ring can only execute a subset of the processor instructions available to processes in the lower (more privileged) rings. For going lower, a kind of API is provided via interrupts, and context switches are necessary for the transitions.

Rings can be implemented purely in software (slow), but nowadays hardware support is used for this (instructions within the processor, way faster, see above, google 'binary translation').

First an overview of which ring permits which level of hardware-enforced access in protected mode on an x86 CPU (there exist some more processor modes, of course ;)):

  • ring 0: kernel
  • ring 1: device drivers
  • ring 2: device drivers
  • ring 3: applications

Another term for the rings is hierarchical protection domains. They are mechanisms to secure execution of hardware-level instructions in the processor.

E.g. processes running in ring 0 have direct memory access and do not have to go through virtual memory, where RAM access would be limited for security reasons.

According to the VirtualBox manual, usually only rings 0 and 3 are used. But VirtualBox also happens to use ring 1 for security reasons; see the aforementioned manual for more information on how this takes place.

When ring protection is coupled to certain processor modes, this is basically the well-known differentiation between kernel- and userspace.

Depending on the ring the guest mostly operates in, the virtualization classification differs as well, which is why this part was included in the post in the first place.

Linux: speedy LXC introduction
posted on 2015-06-15 23:12:20

Since the official LXC manual is just bollocks, here is the quick and dirty version to get something up and running, for people without overly much time who wish for something that 'just works (TM)':

some notes first

Depending on the kernel you are using, you might have to create containers as root user, since unprivileged containers are a newer feature of LXC.

Also, not all functionalities or flags may be present, depending on your luck. Consult the manpage of the command in question to see if the things you are trying are available at all.

More often than not, the availability of programs / features / options is package-dependent, read:
it just depends on what version you get from your package management (if you don't build from source directly) and what is listed as available in the corresponding manual page.

install

Install the lxc package via your package management. lxctl might be nice, too, although it will not be discussed here, as at least my version still had quite some bugs. Where it definitely helps is with adjusting the container config, which you then do not have to edit by hand.

These packages will help, too; do not bother if not all of them are available for your distro, things still might work even though your OS does not know or cannot find them:

lxc-devel
lxc-doc
lxc-extra
lxc-libs
lxc-templates
lxc-python3-lxc
debootstrap

check system

Use lxc-checkconfig. It quickly tells you whether you will have trouble running containers, be it due to the kernel version or missing userland tools.

have some throwaway space ready

This section can be skipped.

If you bother:
Easiest would be a spare hdd at your disposal, but a USB stick will do just nicely. Use LVM to prepare the disk, so the containers can be created with a volume group already present; the logical volume will be created during container creation.

The mountpoint would be /var/lib/lxc. The folder which will be used can also be passed on the command line to lxc-create.

You do not have to do this, but it is kind of a safety measure. When toying around with LVM, you will not as easily break your desktop; at worst just the USB stick gets wiped.

usage

create / start a container

create / get templates

Containers can be created via lxc-create.
E.g. lxc-create -n <containername> -t <templatename>. The list of available templates can be found under /usr/share/lxc/templates, just omit the lxc- prefix:

\ls -Alh /usr/share/lxc/templates | awk '{print $9}' | cut -c5-

(Or wherever man lxc-create tells you to look, as described at the -t flag.)

If the containers shall not be saved at the default location, use the -P / --lxcpath parameter.

Creating a container off the download template prompts you with a list of operating systems from which you can choose (lxc-create -n <containername> -t download is all you need to run). If you do not have the chosen image locally yet, it will be downloaded automatically; LXC consults the internet on how to create the container, so it might take a little while initially. A non-interactive example follows below.

When the next container is created from the same template, it goes MUCH faster.

Don't forget to note the root password in the console output after lxc-create has finished. Depending on the OS template, the root password is sometimes 'root', sometimes a random one, and sometimes you have to chroot into the container's file system (see the container's folder) and set the password by hand first. It 'depends'.
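
The download template can also be driven non-interactively by passing its options after a double dash; a hedged example (the dist/release/arch values must match what the interactive list offers):

lxc-create -n <containername> -t download -- --dist debian --release jessie --arch amd64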

clone

Created containers can be duplicated with the lxc-clone command, i.e.:

lxc-clone <containername> <new_containername>

Look up lxc-clone --help; you can pass the backingstore to use (the folder where the container files are saved) or the clone method (copy vs. snapshot).

start

Containers are started via lxc-start -n <containername>. That way you will get to the user login prompt.

Otherwise start the container with the -d flag, meaning daemonized, i.e. in the background.

There also exists lxc-autostart, for when you have to start several containers in a certain order. The relevant config options:

lxc.start.auto = 0 (disabled) or 1 (enabled)
lxc.start.delay = 0 (delay in seconds to wait after starting the container)
lxc.start.order = 0 (priority of the container, higher value means starts earlier)
lxc.group = group1,group2,group3,… (groups the container is a member of)

It will also autostart 'autostart'-flagged containers at boot of the host OS, as far as I understood it.

list/watch available containers

lxc-ls will do. There are some options, but just use lxc-ls --fancy if your version has this functionality. Otherwise you will have to stick to lxc-ls for all containers and lxc-ls --active for the running ones.

Specific info on a particular container can be obtained via lxc-info -n <containername>.

lxc-monitor will work like tail -f and tell the status of the container in question. (RUNNING / STOPPED)

connect to / disconnect from container

Connecting to daemonized containers works via lxc-console -n <containername>.

Exit via CTRL+a q. Be cautious: if you put screen to use, this escape shortcut will not work. Either close the terminal then, or shut down the container.
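
If your LXC version ships lxc-attach, that is often the more convenient way in: it drops you into a root shell inside the running container without going through the console login, and exiting the shell simply detaches again:

lxc-attach -n <containername>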

pause / unpause containers

lxc-freeze -n <containername>

and

lxc-unfreeze -n <containername>

will do.

stop / delete container

stop

Either turn off the system from inside (e.g. by issuing poweroff or shutdown -h now within the container), or use lxc-stop -n <containername>.

destroy

Simply lxc-destroy -n <containername>.

snapshots!

Snapshotting containers does work, somehow. Usually you seem to need LVM for it. See lxc-snapshot for more info.

networking

This is a little hairy if you have never worked with bridges in linux before. You will almost certainly have to reconfigure your network settings by hand to let the container access the internet.

Sample settings:

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.name = eth0
lxc.network.hwaddr = 00:16:3e:xx:xx:xx

Either put these directly into the container config (but change the xx pairs to hex values), or, to have this set automatically for all containers, put it into the global LXC config, /etc/lxc/default.conf (no hex needed there, the xx pairs will be replaced accordingly during container creation).
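
In case your distribution does not already provide a bridge (some lxc packages set up a NATed lxcbr0 for you via lxc-net), a minimal host-side bridge on a Debian-like system could look like the following sketch; eth0 as uplink is an assumption, and the bridge name has to match lxc.network.link:

# /etc/network/interfaces (needs the bridge-utils package)
auto lxcbr0
iface lxcbr0 inet dhcp
    bridge_ports eth0
    bridge_fd 0
    bridge_stp off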

scripting

Container usage can be scripted, e.g. in python. This opens up quite a lot of possibilities for development/deployment/testing workflows. Things run fast due to fast startup times, in a clean environment, which lowers the bar to using proper test setups quite a lot.

#!/usr/bin/python3

# needs the python3 LXC bindings (see the lxc-python3-lxc package above)
import lxc

# get a handle on the container by name and boot it
c = lxc.Container("<containername>")
c.start()

config

The list of available config options is best looked up in the manpages directly:

man lxc.conf
man 5 lxc.conf
man 5 lxc.system.conf
man 5 lxc.container.conf
man 5 lxc-usernet
man lxc-user-nic

web GUI

See LXC-webpanel, if you're on ubuntu, that is. I haven't tested it, though. But the pictures of it on the internet look rather nice. :)

closing notes

Well, now you might have a running container, with or without network, depending on your host OS. If you put VLANs to use, you will have no luck without further work. ;)

For more information, there's some nice documentation over at IBM.

thin vs. thick provisioning
posted on 2015-05-15 23:12:29

Since I always mix these up:

thin = disk is created, but not completely allocated; overprovisioning is possible.
thick = disk is created with its full size allocated up front.

VMware further has lazily and eagerly zeroed thick provisioned drives:

lazy zeroed thick:
    disk is created with its full size, nothing else is done at creation time.
    But every newly used block is zeroed first:
        thus a little slower than eager provisioning during runtime.

eager zeroed thick:
    disk is created AND zeroed during creation, thus completely wiped right away.
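
The same distinction exists outside of VMware, e.g. for qemu/KVM disk images; a hedged illustration (availability of the preallocation option depends on your qemu-img version):

# thin: the image file grows on demand
qemu-img create -f qcow2 vm-disk.qcow2 50G
# thick: allocate (and zero) everything up front
qemu-img create -f qcow2 -o preallocation=full vm-disk.qcow2 50G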


Unless otherwise credited all material Creative Commons License by sjas