
Installing Open-Xchange 7.8 on UCS when the master is on 4.3

OX is on the verge of releasing Version 7.10. That version will not be supported on (or packaged for) Debian 8 Jessie, but only on Debian 9 Stretch (and of course RedHat and SuSE). On the other hand, 7.8.x will not be supported on (or packaged for) Stretch.

The latest UCS version is 4.3 at the moment. This is based on Debian Stretch.

Therefore the current OX is not supported on UCS 4.3, but according to Open-Xchange and Univention, OX 7.8.4 on UCS 4.2 is supported in an environment with 4.3 servers.

So, when your UCS master is running 4.3, you can theoretically run OX on a 4.2 server. This works fine if you installed OX while the master was still on 4.2 or earlier.

In my case, my master was already on 4.3. In order to install OX, I set up a server with UCS 4.2 and joined it to the master. The join already broke during installation but completed flawlessly in a later step.

Next, I started to install OX:
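The attempt itself was nothing special, presumably just the usual App Center call:

    univention-app install oxseforucs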

Oopsie. But consistent: univention-app list doesn’t show oxseforucs on the master, therefore there are no package sources and no packages. Fsck.

So let’s do this manually:
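What this boils down to is adding the App Center component repository for the 4.2 version of the app by hand on the 4.2 server. A rough sketch; the repository layout is an assumption and the component name below is only a placeholder that has to be looked up in the App Center:

    # /etc/apt/sources.list.d/oxseforucs.list -- component name is a placeholder
    deb https://appcenter.software-univention.de/univention-repository/4.2/maintained/component/ oxseforucs_YYYYMMDD/all/
    deb https://appcenter.software-univention.de/univention-repository/4.2/maintained/component/ oxseforucs_YYYYMMDD/amd64/

    apt-get update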

Now you can install OX on the 4.2 server.

Done

 


Howto: Using check_mk/WATO via ssh and jumphost

I thought I’d pen this down right here as it took me a bit to really figure this out.

Problem: I have some hosts I would like to monitor but cannot access directly (VPN also isn’t an option in this case), so I would like to monitor them using SSH, some directly, some behind a jumphost.

 

Usually the check_mk agent uses xinetd listening on port 6556, limited only by only_from in the xinetd config and maybe iptables. This is fine in a closed, trusted environment, but not really over public networks.

We could now either use VPN or tunnel the port through ssh port forwarding, but I found it more convenient just using ssh as a datasource program.

Preparing the nodes

We surely won’t use passwords for this, but rather a key with very limited capabilities.

So, first go to the monitoring site (make sure to do this as the monitoring user) and create a key pair:
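For example (run as the site user; the key file name is just my choice):

    ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_check_mk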

Don’t use a passphrase. Now append the public key to the authorized_keys file on the monitored nodes. I am using ansible for this:
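A sketch of the task I use; the file names and the agent path are assumptions, the important part is the command= restriction that limits the key to running the agent:

    - name: deploy restricted check_mk ssh key
      authorized_key:
        user: root
        key: "{{ lookup('file', 'files/id_check_mk.pub') }}"
        key_options: 'command="/usr/bin/check_mk_agent",no-port-forwarding,no-agent-forwarding,no-X11-forwarding'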

This results in:
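Roughly this line in authorized_keys on the node (key material shortened, options as set by the task above):

    command="/usr/bin/check_mk_agent",no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAA... monitoring@monitoringhost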

Now you can ssh to that node from the monitoring host using the key; you should then get the output of check_mk_agent.
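A quick test (host name is an example):

    ssh -i ~/.ssh/id_check_mk root@node01.example.com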

Fine.

Case 1: Directly accessible node

In WATO, go to Host & Service Parameters => Datasource Programs => Individual program call instead of agent access.

We give a simple name and the command line using the <IP> macro.

Some hosts have issues when ssh tries to spawn a tty, so we skip pseudo-terminal allocation with -T. I also had some issues with the host keys, which required disabling StrictHostKeyChecking.
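The command line I put into the rule looks roughly like this (key path as created above):

    ssh -i ~/.ssh/id_check_mk -T -o StrictHostKeyChecking=no root@<IP>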

I am using this rule for all monitored hosts, therefore I don’t need any conditions on the rule. If you only want to apply it to single hosts, just add their names below, or better, create a rule based on a host tag as shown in the jumphost example below.

Case 2: Nodes behind jumphost

We use the same approach with a jumphost, just adding that host in between.

We extend our ssh line with -J jump@jumphost. Make sure that you don’t use the root account on the jump host!
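So the datasource command becomes something like:

    ssh -i ~/.ssh/id_check_mk -T -o StrictHostKeyChecking=no -J jump@jumphost root@<IP>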

If you are using an older or different version of SSH which doesn’t support the -J switch, you need to do it using the old style -W version.
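A rough equivalent using ProxyCommand with -W:

    ssh -i ~/.ssh/id_check_mk -T -o StrictHostKeyChecking=no -o ProxyCommand="ssh -W %h:%p jump@jumphost" root@<IP>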

As I only want specific nodes to be monitored via the jumphost, I created another choice in the networking host tags, so all hosts tagged with “No ping possible” are monitored via the jumphost.

Now another issue arises: ping fails, which means that all services are monitored properly while the host itself has the status “down”. So we need to change that, too, with another rule restricted to the same host tag as above.

Go to Host & Service Parameters => Monitoring Configuration => Host Check Command and add a new rule.

Switch the command from PING to Use the status of the Check_MK Agent and choose the correct host tag.

Done.

 


Kubernetes-Mesh

Just as a little reminder…
http://philcalcado.com/2017/08/03/pattern_service_mesh.html


Encrypted Backup-Server with Debian and Dirvish – LVM in LUKS

If you haven’t heard of Dirvish, it’s about time. It is a very neat rsync- and hardlink-based backup solution, just like a pro version of Apple’s Time Machine.

What I want

My setup is quite specific to my use case: having used dirvish at work for a while, I was still stuck with backup-manager on my own servers. backup-manager is quite neat, but it leaves you with differential tarballs of your system, which makes finding a specific version of one file somewhere in the past an annoying task.

Dirvish, running on the backup server, logs into the target system using ssh and syncs the whole filesystem (you can define excludes, of course) to a folder with a timestamp in its name on the backup server. The next time it runs (simple cronjob) it will create a new timestamped folder, sync only changed files and hardlink everything else.
So, in the end we get a kind of snapshot of each time dirvish ran, where we can simply copy files back or even roll back the complete system.

When setting up my new dirvish backup server (replacing the ancient tarball-and-FTP solution) I wanted it to save data incrementally and store it on an encrypted device. For ease of use I simply enter the encryption password manually after booting. Therefore only the dirvish partition (the “bank”) is on an encrypted device; the rest of the machine is plain.
Caveat: The ssh key for accessing the other machines is on the unencrypted partition; this still needs some rework …

System: LVM on LUKS on VD in VM on LVM. For sure!

My Backup server runs in a VM on one of my Xen systems. Actually, the whole dirvish-bank (i.e. the location of all backups) is also synced to another machine at home for real backup.

The VM has got two virtual disks: the system disk, xvda, and (for historic reasons) xvdc as storage for dirvish. This disk is completely encrypted using LUKS. I won’t cover encryption in detail here.

Inside this encrypted container I will create an LVM volume group, and backups will be done on volumes inside this group. The reason for these nested LVMs (the virtual disks already live in an LVM on the host) is that I will be able to create different “tiers” of backups if needed: if space on the underlying host gets tight, I can move lower-priority hosts to a second volume and not risk critical machines not being backed up.

So we basically have:
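From bottom to top (the volume group and volume names are the ones I will use in the examples below):

    LVM on the Xen host
      -> virtual disk xvdc inside the VM
        -> LUKS container (/dev/mapper/lukslvm)
          -> LVM volume group vg_dirvish
            -> logical volume lv_dirvish, ext4, mounted as the dirvish bank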

Encrypting

I did this on a freshly installed Debian Jessie, though it will quite surely work similarly on Ubuntu, SuSE or RedHatish systems. So first install cryptsetup:
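On Debian that is simply:

    apt-get install cryptsetup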

Next, let’s encrypt our whole data disk:
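That is, for the xvdc disk mentioned above (this destroys all data on it and asks for the new passphrase):

    cryptsetup luksFormat /dev/xvdc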

You need to answer some questions, especially for a password. Use a good one, but remember that you will have to type it each time you want to mount the volume. We’d better create a backup of the LUKS header and set up a second, even more complex password. Store both in a very safe place:
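For example (the backup file name is my choice):

    cryptsetup luksHeaderBackup /dev/xvdc --header-backup-file /root/luks-header-xvdc.img
    cryptsetup luksAddKey /dev/xvdc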

We now have a completely encrypted partition and will make it available through the device mapper under /dev/mapper/lukslvm:
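Which is just:

    cryptsetup luksOpen /dev/xvdc lukslvm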

Let’s create an LVM within our container; the virtual disk has 210GB:
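A sketch with the names I keep using below (vg_dirvish and lv_dirvish are my choice, and I leave some headroom in the volume group for a possible second tier):

    pvcreate /dev/mapper/lukslvm
    vgcreate vg_dirvish /dev/mapper/lukslvm
    lvcreate -L 200G -n lv_dirvish vg_dirvish
    mkfs.ext4 /dev/vg_dirvish/lv_dirvish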

Finally we create our mount point and an entry in /etc/fstab.

We use noauto to prevent the system from stalling on boot because the encrypted container isn’t open yet. We also use noatime and nodiratime to speed up rsync comparison.
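Roughly like this; the mount point is my choice and the volume names are the ones from above:

    mkdir -p /srv/dirvish

    # /etc/fstab
    /dev/mapper/vg_dirvish-lv_dirvish  /srv/dirvish  ext4  noauto,noatime,nodiratime  0  2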

Preparing dirvish

Now let’s get the backup running.

First you need to create a key pair for ssh login to the target machines. I won’t cover this here because there are trillions of pages about this. I called the keys id_rsa.dirvish and id_rsa.dirvish.pub, put them in /root/.ssh/ and copied the public key to all target machines (ansible is a very good friend!). Do yourself a favour and test whether the login works.

Now install dirvish:
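On Debian this is just:

    apt-get install dirvish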

Now go to /etc/dirvish and edit the master.conf file.

We here define the bank (like “place where you store precious things”), which of course is our volume in the LUKS vault.

We also have some common excludes and a default expiry.
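A minimal master.conf along these lines; the bank path, excludes and expiry value are examples matching the setup above:

    bank:
        /srv/dirvish

    exclude:
        /proc/
        /sys/
        /dev/
        /run/
        /tmp/
        /var/cache/apt/archives/

    expire-default: +30 days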

Debian provides a cron job with some automation; we won’t use that but rather create our own cron job later in this process.

Backing up

Now let’s create our first backup. We will create a backup of the dirvish machine itself. Won’t help against failing disks, but against failing admins breaking stuff.

For each target machine we will create a folder hierarchy, the so called vault, which looks like this:
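In the bank this is just a directory per vault with a dirvish/default.conf inside (bank path as chosen above):

    /srv/dirvish/HOST-root/
        dirvish/
            default.conf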

HOST will of course be the hostname of the target machine. Calling it *-root is just a convention for backups of the root tree. You could call it e.g. HOST-mailspool if you only back up the mail dir of a host.
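The default.conf of such a full-system vault could then look roughly like this (the excludes are only examples):

    client: HOST
    tree: /
    xdev: 1
    index: gzip
    log: gzip
    image-default: %Y%m%d-%H%M
    exclude:
        /proc/
        /sys/
        /dev/
        /tmp/
        /var/cache/apt/archives/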

Which means:

  • client: Hostname (needs to be resolvable!) of the target machine.
  • tree: Folder to backup.
  • xdev: 1 means that it will stay within the filesystem. Take care if you have /var or /home in different volumes (which I tend to have).
  • index: Type of compression of the index file (file list)
  • log: Type of compression of the log file saved within the folder.
  • image-default: Timestamp of the backup folder
  • exclude: Folders to be excluded.

There are a lot more config items, some of which I use quite often:

  • pre-server: Path to a script on the dirvish server to be run before the backup starts.
  • post-server: Path to a script on the dirvish server to be run after the backup has finished.
  • pre-client: Path to a script on the target machine to be run before the backup starts. This is helpful for dumping SQL databases before backing up.
  • post-client: Path to a script on the target machine to be run after the backup has finished.
  • speed-limit: Maximum transfer speed in Mbit/s

When we are finished creating our vault, we need to initialise it. This creates the first complete sync.
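The init run is started with the vault name:

    dirvish --vault HOST-root --init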

Depending on the size of the machine this can take quite a while. When done our vault will look like this:
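Roughly like this; the image directory name follows image-default, the timestamp is an example:

    /srv/dirvish/HOST-root/
        dirvish/
            default.conf
        20180101-0300/
            index.gz
            log.gz
            summary
            tree/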

tree now contains the folder hierarchy synced from the target, while the other files contain meta information.

The last step is creating a cronjob now. Simply add a line to /etc/crontab for each vault. Make sure to use different running times:
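For example (binary paths as installed by the Debian package, times are examples; dirvish-expire enforces the expiry rules from master.conf):

    # /etc/crontab
    05 01 * * *  root  /usr/sbin/dirvish --vault HOST-root
    35 01 * * *  root  /usr/sbin/dirvish --vault OTHERHOST-root
    05 05 * * *  root  /usr/sbin/dirvish-expire --quiet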

OK, there’s one more step: create some kind of monitoring facility, e.g. use post-server to send the summary and the log by mail, or parse these files and react to the results, or use your existing monitoring solution…


Bridged Xen on Debian Wheezy on a Hetzner Server

Xen (not XenServer, btw!) seems to have taken a back seat recently, with RedHat/CentOS/Fedora concentrating on KVM and Debian silently neglecting it.

This is reflected in the documentation: there is a lot of outdated stuff around, especially about bridged setups. The same goes for the packages, at least in Debian Wheezy (NB: I also tried testing, with the same results and fairly newer packages).

My aim was a virtualisation host which is directly connected to the internet without any external firewall, running different virtual machines which ARE thoroughly firewalled. In order to achieve this, I am running the quite decent Sophos UTM (formerly Astaro) as a VM; this is the only virtual machine with direct access to the external network interface. Its other interface, just like all the other VMs, is connected to an internal bridge without any link to the rest of the world. This is why routing isn’t an option.

This article focusses on the Xen server and the bridging setup, maybe I will write another one later about Sophos UTM etc.

[Image: xenserver]

I am running this setup on some servers at Hetzner, though this should work at most other hosting providers (some tend to drop the switch connection when they detect what looks like ARP spoofing, take care!). I am in no way affiliated with Hetzner.

My setup needs a secondary IP address (the main IP address is used for management of the host). I am assuming the following setup:

External Host-IP: 1.2.3.4

Secondary IP, used on the UTM: 1.2.3.6. At least at Hetzner, this IP address needs to have its own MAC address assigned; this can be done in their Robot tool.

Setting up the host

I want to have my host running directly on the (software) RAID; I personally don’t really like running the OS on LVM. But I also want my VMs to live in an LVM realm in order to easily take snapshots, clone them etc.

This means that Hetzner’s default setup isn’t very helpful. But they offer an answer-file based installation using the rescue system. Therefore: boot into the rescue system and run installimage.

Note: Preserve the temporary password for the rescue system, you will need it for the freshly installed system!

This lets you define your custom install file and then installs everything within a few minutes. I chose Debian Wheezy Minimal. The only two settings I changed were the hostname (I am using dome in this example) and the partition setup:

[Image: installimage2]

I chose 50GB for my root filesystem and 12GB swap.
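The relevant part of my install file looked roughly like this (Hetzner’s installimage syntax; the lvm line prepares the volume group vg0 that the VMs will later live in):

    HOSTNAME dome
    SWRAID 1
    SWRAIDLEVEL 1
    PART swap  swap  12G
    PART /     ext4  50G
    PART lvm   vg0   all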

After saving the file and starting the installation, I had to wait for about five minutes and was presented with a brand new Debian system.

[Image: installimage3]

After the first boot I changed the root password and added my own SSH key.

Note: This document doesn’t cover hardening your server, which you really should do!

First thing to do is updating all package sources:
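Which boils down to:

    apt-get update
    apt-get dist-upgrade     # and pull in pending updates while we are at it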

I tend to install emacs23-nox as soon as possible, YMMV.

It is quite handy to add your domain, if you are using one, to /etc/resolv.conf and to /etc/hosts.

Next is changing the network setup, so edit /etc/network/interfaces :
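The result looked roughly like this; the netmask and gateway are placeholders, use the values Hetzner assigned to you, and bridge-utils needs to be installed for the bridge_* options to work:

    # /etc/network/interfaces
    auto lo
    iface lo inet loopback

    # external bridge, carries the host IP, eth0 is enslaved
    auto virbr0
    iface virbr0 inet static
        address 1.2.3.4
        netmask 255.255.255.224
        gateway 1.2.3.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

    # internal bridge for the VMs, no physical port at all
    auto virbr1
    iface virbr1 inet manual
        bridge_ports none
        bridge_stp off
        bridge_fd 0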

So we transformed our (only) network card eth0 into a bridge called virbr0 and added a secondary bridge, virbr1.

Set up Xen

First install the Xen system (4.1 on Wheezy) and the xen-tools, which are quite helpful for setting up VMs.
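On a 64-bit Wheezy this is roughly:

    apt-get install xen-linux-system-amd64 xen-tools bridge-utils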

This will install xen and all the necessary tools.

In order to boot into a xen enabled hypervisor, we need to adapt GRUB:
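The usual Debian way is to give the Xen entry precedence over the plain Linux one:

    dpkg-divert --divert /etc/grub.d/08_linux_xen --rename /etc/grub.d/20_linux_xen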

Before we reboot we also adapt the boot command line in /etc/default/grub :
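Something along these lines; the amount of memory and the number of vCPUs pinned to Dom0 are examples:

    # /etc/default/grub
    GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin"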

This basically limits resources on Dom0.

Now update grub and reboot:
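Which should be nothing more than:

    update-grub
    reboot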

After reboot we can check if Xen is up and running.
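With the old xm toolstack that Wheezy’s Xen 4.1 uses:

    xm list

Dom0 should show up as Domain-0.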

Looks fine.

Now we need to set up the network, which is quite straightforward:

Edit the file /etc/xen/xend-config.sxp  and comment out everything about networking, routing and vif except this line:
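Namely:

    (vif-script vif-bridge)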

You may also fine-tune your Xen setup by changing the following lines:
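Typical candidates in xend-config.sxp; the values are only examples:

    (dom0-min-mem 1024)
    (enable-dom0-ballooning no)
    (dom0-cpus 2)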

The first thing we changed tells Xen to run a script called vif-bridge located in /etc/xen/scripts/ as soon as a virtual machine is created. The script basically checks if the bridge exists and connects the VM’s virtual network card to the bridge.

Now we need to adapt this file to our naming convention, so let’s replace the occurrences of xenbr  to virbr  in the file /etc/xen/scripts/vif-bridge :
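A simple sed does the job:

    sed -i 's/xenbr/virbr/g' /etc/xen/scripts/vif-bridge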

Now restart xend (for some reason the init script is called xen on Debian):
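That is:

    service xen restart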

Getting the first VM up and running

Using xen-create-image  from the xen-tools makes it a piece of cake installing our first VM:
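A sketch of the call I use; distribution, sizes and the internal IP are examples, the important bit is --bridge=virbr1 so the VM ends up on the internal bridge, and --lvm=vg0 matches the volume group created by installimage:

    xen-create-image --hostname=test --dist=wheezy \
        --ip=192.168.100.10 --netmask=255.255.255.0 --gateway=192.168.100.1 \
        --bridge=virbr1 --lvm=vg0 --size=10G --memory=512m --pygrub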

You can safely ignore the warning about vif-bridge.

Now there’s a little bug in the xen-tools:

So edit /etc/xen/test.cfg and remove the m from 512m:
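I.e. change

    memory      = '512m'

to

    memory      = '512'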

Now let’s run it:
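With the xm toolstack:

    xm create /etc/xen/test.cfg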

We can now connect a console to the VM and see what’s going on (you could also have created it with the -c parameter above …).
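The console is attached with:

    xm console test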

Hint: CTRL + 5 gets you out of the console again.


HowTo: Hetzner Backup-Server Automount

The problem: With a root server, Hetzner does offer 100GB of backup space, but you can only access it via FTP, SFTP or CIFS; direct rsync, for example, is not possible.

Originally I simply kept the backup space mounted permanently, but there were recurring problems with hanging handles, presumably because the backup servers go into some kind of power-saving mode. So I switched to autofs, and that works just fine here (which I can’t say for larger environments, btw).

So here is a short guide on how to use the backup space very easily via CIFS and autofs.

Warning: All traffic between the root server and the backup server is unencrypted. Just wanted to get that off my chest.

Prerequisites

A backup space must be created in Robot (Hetzner’s config tool); you then get a server name, a username and a password for it.

The following data is used here:

Backup server: u12345.your-backup.de
Username: u12345
Password: secret

You also need the autofs package, which is available in the repositories of pretty much every distro.

I mount my backup server under /mnt/backup-server/.

Implementation

First we create a new map file; we call it auto.backup and put it in /etc. It has the following content:
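One line, roughly like this (the share name backup is Hetzner’s default; all options are explained below):

    backup-server  -fstype=cifs,iocharset=utf8,rw,credentials=/etc/backup-credentials.txt,file_mode=0660,dir_mode=0770  ://u12345.your-backup.de/backup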

In order, this line means:

  • Mount the share under backup-server/ relative to the parent mount point (see below)
  • The mount type is CIFS, i.e. SMB
  • The charset is UTF-8 (important if you use umlauts etc.)
  • rw, i.e. read and write
  • The credentials are stored in the file /etc/backup-credentials.txt. Warning: unencrypted here as well.
  • New files are created with mode 0660, new directories with 0770.
  • At the end comes the path on the server that is to be mounted. :// looks strange, but is correct.

/etc/backup-credentials.txt looks like this:
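The usual CIFS credentials format:

    username=u12345
    password=secret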

Finally, we add the following line to /etc/auto.master:
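Given the mount point from above this should be:

    /mnt  /etc/auto.backup  --ghost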

--ghost tells autofs not to remove the mount point when the share is unmounted.

Now start the automounter with service autofs start.

As soon as you access the folder /mnt/backup-server, it gets mounted. Nice.

 


More on Accounting

Here are a few more pictures of the accounting module for PHPQstat:

[Image: pqs_acct_pes — Parallel environments …]

[Image: pqs_acct_slots-per-core — Jobs per number of cores (grouped)]

[Image: pqs_acct_jobs-per-month — Number of jobs per month]


Accounting for PHPQstat

A new version of PHPQstat with a simple accounting view is available at https://github.com/zapalotta/PHPQstat …

More later!


HowTo install Ubooquity on QNAP

Ubooquity is a very nice little server which scans your eBooks and comics and displays them in a tablet-friendly way.

It is Java-based and runs fine on your desktop, but if you’re running a (QNAP) NAS which already stores all your books, why not have them served nicely from there?

[Image: Ubooquity]

Prerequisites

I did this on a QNAP TS-421 (ARM CPU) running OS version 4.x, though this should work the same way on any other QNAP NAS (except for the Java installation on x86, see link below).

I am assuming that you already have IPKG installed and are able to log in via ssh and already have some experience with the (Linux-)shell.

Install coreutils, procps and Java

The start script for the daemon requires the nohup and pgrep commands, which unfortunately aren’t shipped with the basic installation.

So simply install them via ipkg.
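The package names below are the usual Optware ones and may differ slightly depending on your setup; coreutils provides nohup and procps provides pgrep:

    ipkg install coreutils procps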

Install Java

Follow the instructions on http://wiki.qnap.com/wiki/Category:JavaRuntimeEnviroment in order to install Java. In brief:

Install Ubooquity

Download the jar from http://vaemendis.net/ubooquity/static2/download and put it on your QNAP NAS. I created a folder Ubooquity in Public/ where everything from Ubooquity lives, so it is in /share/Public/Ubooquity/ now.

Do a test run on the shell:
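Assuming the jar is called Ubooquity.jar and lives in the folder created above:

    cd /share/Public/Ubooquity
    java -jar Ubooquity.jar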

Now you should be able to connect to the admin server on http://<qnapaddress>:2202/admin

Set a password for administration and play with the web UI.

Install as a service

As soon as you close the shell from above, Ubooquity quits as well. Not very cool. So we need to install it as a daemon, i.e. a service starting on system start and then running all the time.

Ubooquity provides a nice startup script called ubooquity.sh at http://vaemendis.net/ubooquity/downloads/scripts/. Get it and put it next to Ubooquity.jar. As pgrep on QNAP doesn’t support the -c (count) option, we need to change one line:

Replace all occurrences of the line

with
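I don’t have the exact line of ubooquity.sh at hand, so treat this as a sketch: the idea is to count matching processes with wc -l instead of the unsupported -c option, i.e. to turn something like

    pgrep -f -c Ubooquity.jar

into

    pgrep -f Ubooquity.jar | wc -l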

QNAP provides a quite easy way to register an application as a service. Simply edit the file /etc/config/qpkg.conf and add the following block.
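A block along these lines worked for me; the version, author and date fields are only descriptive examples, and the paths match the install location used above:

    [Ubooquity]
    Name = Ubooquity
    Version = 1.10
    Author = vaemendis
    Date = 2015-01-01
    Shell = /share/Public/Ubooquity/ubooquity.sh
    Install_Path = /share/Public/Ubooquity
    Enable = TRUE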

You may have to adapt the paths to your installation.

Now you can start Ubooquity in the App Center just like any other app.

[Image: Ubooquity.start — QPKG view]

 


checkrestart for Nagios and Check_mk

In case anyone is interested: I wrote a small wrapper for Nagios and Check_mk that makes it very easy to integrate checkrestart into your monitoring…

https://github.com/zapalotta/scriptlets

check_checkrestart

Update: Yay, now also on Nagios Exchange.