This area contains short guides (howtos) for solving particular hardware and software problems that I have run into myself.

Often the fix for such a problem is relatively simple, but you first have to find it. In cases where even extensive internet research did not yield the desired result for me, or where I found only partial solutions that had to be put together (perhaps in the correct sequence and supplemented by approaches of my own), I present here the solutions that worked for me.

Of course, there is no warranty that these solutions are generally applicable.


Web Server in a Sandbox

There are different methods to obtain a sandbox environment. Here, we use the Firejail Security Sandbox, which makes it possible to assign a private, sealed scope to a service and all associated processes; this includes resources like network access, the process table and the filesystem. Thus, the service only sees its own processes and can only access the part of the filesystem that has been assigned to it.

A very helpful source for this howto was the article How To Use Firejail to Set Up a WordPress Installation in a Jailed Environment, which describes a WordPress installation including Firejail, Nginx and MySQL. In contrast to that article, we restrict ourselves to the installation of the basic web and database services, specifically Apache and PostgreSQL; on this basis, the reader may set up content-management systems (like Drupal or indeed WordPress) or entirely different services as usual. A further difference from the mentioned article is that all scripts work with a specified target IP address, i.e., it is possible to run other services on the same machine, independent of the services described here, as long as they respond to other IP addresses.

We will set up the services so that they communicate with each other and the outside world over a bridge. The IP addresses involved are shown in the following table:

Service Public IP Address Bridge IP Address
Web server none
Database server none


1 Preliminary

2 Constituting the Traffic

3 Creating the File Systems

4 Setting Up the Services

4.1 Web Server

4.2 Database

5 Starting the Services

6 Conclusion

1. Preliminary

This howto applies to Debian systems.


If, as happens to be the case for Debian 7 ("Wheezy"), Firejail is not available as a system package, it has to be downloaded from Firejail/Download and installed.


Furthermore, we need a package to set up a minimal filesystem


apt-get install debootstrap


and tools to set up the bridge:


apt-get install bridge-utils

In /etc/network/interfaces the public IP address has to be configured, e.g.,

iface eth0 inet static
    address yyy.yyy.yyy.yyy
    gateway yyy.yyy.yyy.1
    up ip addr add dev eth0 label eth0:0

where yyy.yyy.yyy.yyy is the "main" IP address of our machine, and the address added to eth0:0 is the public address of the host to be configured by this howto.

If an Apache is already running on this system that shall listen to IP addresses other than the one set up here, it has to be configured to listen to exactly those addresses. To achieve this, any _default_ entries in its configuration files have to be changed to the respective actual IP address. As a template, you can use the entries for ports.conf and sites-available/* from the table in the Webserver section.

2. Constituting the Traffic

We start with some definitions.

Of course, the public target address of our services has to be accommodated individually:


Further definitions – as a general rule –  can be taken over as they are:

TCP_SERVICES="80 443" #web browsing
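The individual definitions are not reproduced above; as an illustration, the variables used by the later commands could be set up along the following lines. All values shown are hypothetical placeholders and must be adapted to your own addresses; only TCP_SERVICES is given in the text:

```shell
# Hypothetical example values -- adapt to your setup.
DESTINATION_IP=yyy.yyy.yyy.zzz       # public IP address the jailed services respond to
INTERFACE=eth0                       # interface carrying the public address
BRIDGE_HOST_IP_RANGE=    # bridge address/netmask on the host
BRIDGE_WEBSERVER_IP=         # bridge address of the web-server jail
BRIDGE_DBSERVER_IP=          # bridge address of the database jail
```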

We create and configure our bridge:

brctl addbr br0
ifconfig br0 $BRIDGE_HOST_IP_RANGE

We enable IP forwarding in the operating system:

echo "1" > /proc/sys/net/ipv4/ip_forward

We allow traffic that is dedicated to our web server to be forwarded through our bridge:

  iptables -t nat -A PREROUTING -d $DESTINATION_IP -p tcp --dport $PORT -j DNAT --to $BRIDGE_WEBSERVER_IP:$PORT

Vice versa, traffic that is going to the outside world shall be masqueraded by the public address:
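The rule itself is not shown here; presumably it is a source-NAT rule along the following lines. This is a sketch with hypothetical values (repeated to keep it self-contained); it only prints the command for review, since applying it requires root:

```shell
# Hypothetical values, repeated here to keep the sketch self-contained:
DESTINATION_IP=yyy.yyy.yyy.zzz
INTERFACE=eth0
BRIDGE_HOST_IP_RANGE=
# Masquerade outgoing bridge traffic behind the public address:
RULE="iptables -t nat -A POSTROUTING -s $BRIDGE_HOST_IP_RANGE -o $INTERFACE -j SNAT --to $DESTINATION_IP"
echo "$RULE"   # review, then execute as root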


Traffic coming from the dedicated ports is allowed:

  iptables -A FORWARD -i $INTERFACE -o br0 -p tcp -m tcp --dport $PORT -j ACCEPT

Also, incoming traffic of already established connections is allowed:

iptables -A FORWARD -i $INTERFACE -o br0 -m state --state RELATED,ESTABLISHED -j ACCEPT

We allow all traffic that comes from our bridge to put the jails in a position to communicate with each other and the outside world:

iptables -A FORWARD -i br0 -j ACCEPT

All other forwarding will be dropped:

iptables -P FORWARD DROP

Optionally, we may allow traffic originating from our host to access the database server:


This is convenient if we want, say, to administer the database with a service like phpPgAdmin over a separate IP address.

A template for an /etc/init.d script is attached below. Before running it, at least DESTINATION_IP has to be set to the public IP address of the host.

3. Creating the File Systems

We create a directory for the filesystems that will later be confined in a sandbox, and change into it:

mkdir /jail
cd /jail

Then, we create a minimal filesystem for the web server

debootstrap --arch=amd64 stable www

and make a copy for the database server

rsync -a www/ db

The latter is less expensive than a new debootstrap, which would be the alternative.

By the way, the sandbox operating system does not necessarily have to be identical to the host operating system. At the time of writing this document, my Debian 7 host system was already "oldstable"; regardless, "debootstrap ... stable ..." worked, and it was possible to install and operate software from the stable branch within the jails.

4. Setting Up the Services

4.1  Web Server

To set up the web server, we start firejail within the filesystem created before

firejail --chroot=/jail/www/ --name=www

and install the web-server software (in this case Apache):

apt-get update
apt-get install apache2

Next we have to configure Apache for the environment in which it shall run. We can do this within the environment started by firejail (then the configuration directory is /etc/apache2) or from outside (then it is /jail/www/etc/apache2); in both cases we are working on the same directory, only the root of the filesystem differs.

The following table shows the important entries to be made:

File	Change
ports.conf	Set the Listen directives (for ports 80 and 443, respectively) to the web server's bridge IP.
conf-available/servername.conf (create the file if necessary)	One entry only: ServerName, set to the bridge IP.
sites-available/000-default.conf	Set VirtualHost and ServerName to the bridge IP.
sites-available/default-ssl.conf	Set VirtualHost and ServerName to the bridge IP.
envvars (optional)	export APACHE_LOG_DIR=/var/local/log/apache2$SUFFIX (see below)
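Assuming, purely as an example, that is the web server's bridge IP, the resulting configuration fragments might look like this:

```apache
# ports.conf

# conf-available/servername.conf

# sites-available/000-default.conf (default-ssl.conf analogously with port 443)
<VirtualHost >
    # ... remaining default entries unchanged ...
</VirtualHost>
```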

When the jail is started, the directory /var/log is moved to a temporary location. To keep the Apache log files permanently, we can change the log directory in the configuration file envvars as described above (here to /var/local/log/apache2), but then we have to create the corresponding directory structure manually: mkdir -p /var/local/log/apache2 (within the jail).

To enable the server name, we call the command (within the jail)

a2enconf servername.conf

Besides, all "normal" configuration required or desired for the operation of the web site has to be done as well.

If the web server is already running, we should stop it by

service apache2 stop

before leaving the firejail environment by


4.2 Database

For the database server we proceed analogously, i.e., first

firejail --chroot=/jail/db/ --name=db

(now specifying db as chroot and name) and then (for PostgreSQL)

apt-get update
apt-get install postgresql

In the file /etc/postgresql/9.4/main/postgresql.conf (within the jail, adjust the version number accordingly) the following configurations have to be made:

Entry Value Remark
listen_addresses '' Listen to our bridge IP.
logging_collector on Optional for logging.
log_directory '/var/local/log/postgresql' Optional, to be consistent with www logging.
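Assuming, as an example, that is the bridge address of the database jail, the relevant excerpt of postgresql.conf would read:

```
# /etc/postgresql/9.4/main/postgresql.conf (within the jail) -- excerpt
listen_addresses = ''          # listen on the bridge IP (hypothetical value)
logging_collector = on
log_directory = '/var/local/log/postgresql'
```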

When enabling the alternative log_directory, we have to create it and set appropriate permissions (within the jail):

mkdir -p /var/local/log/postgresql
chmod -R g-rwx,o-rwx /var/local/log/postgresql
chown -R postgres:postgres /var/local/log/postgresql

Of course, all other "normal" operation configurations have to be done here too.

If the database server is already running, we should stop it by

service postgresql stop

before leaving the firejail environment by


Note: Despite intensive attempts with diverse configurations and exhaustive investigations over several days, I failed to set up a MySQL server with Firejail. Although the server started, it always shut down immediately without leaving useful error or log messages. For this reason, I finally chose PostgreSQL.

5. Starting the Services

For set-up purposes we could start the jails as described above. To provide a network connection and to start the servers automatically in the background, these commands are needed:

firejail --name=www --chroot=/jail/www --private --net=br0 --ip= sh -c "service apache2 start; sleep inf" &
firejail --name=db --chroot=/jail/db --private --net=br0 --ip= sh -c "service postgresql start; sleep inf" &

The option --private ensures that certain directories of the original filesystem, like /root or /home, are not visible; --net and --ip establish the routing. The subsequent command is executed in the jail using those parameters: the web and database servers are started, and sleep inf ensures that the sandbox keeps running even when the Apache or PostgreSQL process ends or crashes (the latter might be of interest for administrative purposes). Note: sh -c is emphasized because, in contrast to other descriptions, it did not work for me without it.

The running firejail processes can be listed by:

firejail --list

To join a running sandbox we call

firejail --join=www


firejail --join=db

Stopping of the servers can be done by

firejail --shutdown=www
firejail --shutdown=db

Scripts for init.d are attached below.

6. Conclusion

Using the described procedure one gets a web server and a database server, both running in a sealed jail, on the same machine within an internal IP range.

Within the web-server jail one can now install a CMS like Drupal or WordPress as usual, merely paying attention to the fact that the database has to be accessed via its internal bridge IP address.

This sealing ensures that an attacker who, for example, managed to intrude via SQL injection can only cause damage within the database jail, but not in the rest of the system. Although such security measures can never be perfect, at least we have made the attacker's work very hard.

jailbridge.template.gz (1.24 KB)
jail-www.gz (472 bytes)
jail-db.gz (471 bytes)


Debian GNU/Linux
A web server and related services, such as a database server, shall run in jailed environments, so that they have no or only limited access to other processes or filesystem paths of the operating system.

Improving Mailserver Reputation

It is quite easy to set up a mailing system on your own server, but more often than not the recipients of your mails will find them in their spam or junk folder, if at all. The reason is that most mail providers have established procedures to block the unsolicited mail we have to deal with nowadays.

There are a few measures one can take to improve the reputation of a mailserver so that outgoing mails are accepted by most providers. Almost certainly it is not enough to implement only some of them; all of them are required.

  1.  Reverse DNS
    Ensure that a reverse DNS lookup yields the right domain for the server IP. Usually this can be set via the administrative interface of the webspace hoster (not the domain hoster): connect the IP of the webspace your mailserver is running on to the domain the mails are sent from.

  2. Hostname
    Ensure that the hostname command yields the right server name, e.g.,

  3. SPF (Sender Policy Framework)
    Add a TXT record to the domain containing
    v=spf1 a mx ~all
    This usually can be done via the administrative interface of the domain hoster. It ensures that mails claiming to be sent from your domain must originate from an IP matching the A or MX records of your domain.

  4. DKIM (Domain Keys Identified Mail)
    This is dependent on the mail program you are using. We give an example here for exim4 running on a Debian system.
    Generate a private and public key in
      openssl genrsa -out 2048
      openssl rsa -in -out -pubout -outform PEM

    Add a TXT record named <selector>._domainkey to the domain, containing the public key generated above. Choose an arbitrary string for <selector>; it must match the corresponding entry in /etc/exim4/exim4.conf.localmacros (see below). If the domain service refuses the record because of its length, split it into chunks enclosed in quotation marks.
    After that adapt
    /etc/exim4/exim4.conf.localmacros like so
      DKIM_CANON = relaxed
      DKIM_SELECTOR = 20190215
      DKIM_PRIVATE_KEY = /etc/exim4/dkim/

    and run
      service exim4 restart
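To illustrate the key-generation step, here is a self-contained sketch with hypothetical file names; a temporary directory stands in for a location like /etc/exim4/dkim, and the selector 20190215 matches the macro example above:

```shell
dir=$(mktemp -d)   # stand-in for /etc/exim4/dkim
# Generate the private key and derive the public key from it:
openssl genrsa -out "$dir/20190215.private.key" 2048
openssl rsa -in "$dir/20190215.private.key" -pubout -outform PEM \
    -out "$dir/"
```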


Mails sent from your server go into the recipient's spam or junk folder.

Extracting Multiple Subtrees from a Git Repository

Git provides different approaches to solve this task. Here, we use the git-subtree command to create branches from the subtrees to be detached. Then, these branches will be imported from another repository. Sources were the manpage (man git-subtree) and this answer on stackoverflow along with some comments listed there.

Let's start with a Git repository bigrepo.git containing the following directory structure:



Maybe, it turned out that the files below lib comprise a library that is useful not only for this project but could serve general purposes. Thus, this should become a project of its own and the lib subtrees below main and test shall be moved to a separate repository.

For the following steps, first we need a working copy of the repository. So, if bigrepo.git is a bare repository, we create one in the current directory by

    git clone /srv/git/bigrepo.git

(provided that bigrepo.git resides below /srv/git). If we have no bare repository, it is recommended to make a copy nonetheless, to simplify the clean-up at the end.

Now we go into the new repository that we just cloned and create two new branches, one for each subtree to extract, say split-lib-main and split-lib-test, using the subcommand split of git-subtree:

    cd bigrepo

    git subtree split -P src/main/lib -b split-lib-main

    git subtree split -P src/test/lib -b split-lib-test

Each of these new branches contains all files and directories below the path specified by the -P option, including the whole version history; the path specified by -P must not contain any leading or trailing slashes. The option -b specifies the name of the branch, which you can choose arbitrarily.

However, with this procedure the path prefixes of the files in the new branches are lost. This means, for instance, that the files and directories in split-lib-main do not preserve any information about their original parent src/main/lib. The directory structure below that level, however, remains intact.

Keeping this in mind, we continue by creating the new repository, let's call it librepo, that will contain only the extracted subtrees. For simplicity, the new repository will be created at the same directory level where bigrepo resides. Since we are still in bigrepo, we do the following:

    cd ..

    mkdir librepo

    cd librepo

    git init

This gives us a new repository that is able to incorporate the files extracted above. Unfortunately, the command we will use later does not work on an empty repository. Therefore, we just commit some arbitrary file, say one that we will need in the future anyway:

    touch .gitignore

    git add .gitignore

    git commit -m "Ignore file added." .gitignore

Now, we can complete the extraction:

    git subtree add -P src/main/lib ../bigrepo split-lib-main

    git subtree add -P src/test/lib ../bigrepo split-lib-test

With -P we re-add the path root that had been lost; if needed, we could specify any other root instead.

If there are additional files belonging to librepo that cannot be extracted this way, e.g., configuration files for Maven or license or README files from the project's root directory, add them manually. The version history will be lost for those files, but in this case that should not be too problematic.

Our new librepo now contains all files and directories that were stored in bigrepo below src/main/lib and src/test/lib, including all version information. We can create a bare repository from this by the usual Git commands and put it at any location.
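The "usual Git commands" essentially amount to a bare clone; a self-contained sketch (a throwaway repository stands in for librepo):

```shell
cd "$(mktemp -d)"                 # scratch directory for the sketch
git init -q librepo               # stand-in for the librepo built above
git -C librepo -c -c \
    commit -q --allow-empty -m "init"
git clone --bare -q librepo librepo.git   # the bare repository, ready to be moved anywhere
```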

At this point, git-subtree even offers the opportunity to keep the component parts in both repositories (bigrepo and librepo) and work on them concurrently. However, this is beyond the problem definition elaborated here and might cause consistency problems anyhow. Thus, this scenario is not considered here further.

What remains is the clean-up: since we made a copy of the original repository in the beginning, we can just throw that copy away. Alternatively, we can push the split branches to the original repository for reference. In any case, we might want to delete the extracted components from the original repository (after convincing ourselves that everything is in librepo, of course) to avoid concurrent developments drifting apart.






Multiple subtrees of a Git repository shall be extracted and transferred to a new archive while keeping the version history.

Crontab without Continuous Operation

Unix systems, including Linux, have a mechanism that can automatically execute programs at a specified time, for instance daily at a certain time of day, or weekly or monthly on a given day, again at a certain time of day. All of this is triggered by a file named crontab, which usually resides in the directory /etc (i.e., /etc/crontab) and contains the particular entries.

Unix workstations, on which this mechanism was originally introduced, used to operate continuously, 24 hours a day. Accordingly, a crontab entry takes effect only exactly at the specified point in time. If the computer is not running at this time, the entry lapses and the program in question will not be started.

Linux PCs often operate non-continuously. What is needed here is a procedure by which the crontab trigger is re-applied at a later point in time if the PC was off at the configured time: programs whose execution by crontab was missed at the proper time should be started later, as soon as the PC is operating again.

There are alternative solutions for this, like anacron, but apparently these do not fit neatly into current systems' layout. OpenSUSE uses another approach: /etc/crontab contains an entry that runs the program /usr/lib/cron/run-crons at short intervals (every 15 minutes). This in turn looks for scripts in the directories /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly and /etc/cron.monthly, which it starts when at least 60 minutes, 24 hours, 7 days or 30 days, respectively, have passed since the last run.

In this way, one can put programs that shall be executed at regular intervals into the respective /etc/cron.* directories. For the system administrator, though, it might not be the best choice to touch these system directories. For this reason, under OpenSUSE the directory /etc/cron.daily by default contains a script that in turn calls the script /root/bin/cron.daily.local (if that exists). Hence, the administrator is able to define tasks to be executed daily via his own /root/bin directory.

Unfortunately, there are no corresponding scripts for hourly, weekly or monthly intervals by default. However, this mechanism can be enhanced so that it is applicable universally in the sense mentioned. In the attachment below I provide a shell script cron-local that realises such an enhancement.
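The attached script may differ in detail, but a minimal sketch of such a universal script could look like this (assumption: the per-interval scripts live in /root/bin and are named after the /etc/cron.* directory the script is linked into):

```shell
#!/bin/sh
# cron-local (sketch): derive the interval from the directory we were
# started from and run the corresponding script in /root/bin, if present.
interval=$(basename "$(dirname "$0")")    # e.g. "cron.hourly"
script="/root/bin/${interval}.local"      # e.g. "/root/bin/cron.hourly.local"
if [ -x "$script" ]; then
    "$script"
fi
```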

After downloading and unpacking cron-local.gz, just copy or link cron-local into the directories /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly and /etc/cron.monthly (as required); also, don't forget to make the respective file executable. (Either skip /etc/cron.daily or delete the default script, if existent, to avoid multiple executions!)

After that, the administrator can set up scripts cron.hourly.local, cron.daily.local, cron.weekly.local or cron.monthly.local (according to the directories chosen above) in his /root/bin directory, which will then be executed at the appropriate time intervals. Of course, all of these are still optional: if a file is absent, it is simply ignored.

cron-local.gz (458 bytes)



Events triggered by crontab will not happen, if the PC is off-state at the time specified.

AWStats: Taking Over Data from Other Profile

When AWStats data shall be taken over from another, e.g. older, profile, do the following.

First, ensure that during this procedure no unwanted updates of profiles take place by, say, removing all AWStats update commands from crontab.

Copy the old data files into the new profile data directory. The location of that directory is defined in the configuration file by the DirData directive, usually as /var/lib/awstats/<config>, where <config> denotes the profile name. The configuration file itself commonly is found in /etc/awstats as awstats.<config>.conf.

Rename the copied files according to the pattern

awstats<MMYYYY>.<config>.txt

Here, <MMYYYY> stands for a 6-digit code comprising 2 digits for the month and 4 digits for the year (this code should already be present in the old name and should not be altered) and <config> denotes the new profile name.
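For many files, the renaming can be scripted; the sketch below works in a temporary directory with the hypothetical profile names oldsite and newsite (in reality you would work in /var/lib/awstats/<config> with your own profile names):

```shell
dir=$(mktemp -d)       # stand-in for the new profile's data directory
touch "$dir/awstats032018.oldsite.txt" "$dir/awstats042018.oldsite.txt"
cd "$dir"
# awstats<MMYYYY>.oldsite.txt -> awstats<MMYYYY>.newsite.txt,
# leaving the <MMYYYY> part untouched:
for f in awstats??????.oldsite.txt; do
    mv "$f" "${f%.oldsite.txt}.newsite.txt"
done
```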

In case of overlaps (i.e., when there are multiple profile versions of a file, e.g. awstats032018.profile1.txt and awstats032018.profile2.txt), favour the older one, provided that the corresponding newer data are still present in the current log file; the newer data will then be recovered at the next update. If, however, the duplicate data are from before the latest logrotate, the matter is more complicated and not covered by this tutorial.

Then delete


and call

/usr/lib/cgi-bin/ -config=<config> -update

If everything is okay, the AWStats update commands can be reactivated in crontab.




The AWStats profile has changed and older statistics shall be taken over.

Repairing AWStats

The following situation arose (for me for the first time after years of usage): AWStats runs via crontab, but all of a sudden no statistics are generated anymore; an error related to the execution of crontab can be ruled out. A manual call like

/usr/lib/cgi-bin/ -config=<config> -update

where <config> stands for the configuration in question,  results in an error message like

AWStats did not find any valid log lines that match your LogFormat parameter, in the 50th first non commented lines read of your log.
Your log file /var/log/apache2/ssl_access.log must have a bad format or LogFormat parameter setup does not match this format.
Your AWStats LogFormat parameter is:
This means each line in your web server log file need to have "combined log format" like this: - - [10/Jan/2001:02:14:14 +0200] "GET / HTTP/1.1" 200 1234 "" "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)"
And this is an example of records AWStats found in your log file (the record number 50 in your log): - - [31/May/2017:20:45:53 +0200] "-" 408 1447 "-" "-"

Apparently, AWStats evaluates log-file entries with blank fields, such as

- - [31/May/2017:20:45:53 +0200] "-" 408 1447 "-" "-"

(where the first field contains a concrete IP address), as invalid.

Such entries appear from time to time in webserver log files and are normally no problem. However, if more than 50 of this kind happen to occur in a row, AWStats stops working completely from that point on and for the future (precisely: until the log file is rotated).

As a remedy, one can delete the lines in question from the log file (at least enough of them that fewer than 50 consecutive ones remain). Beforehand, you should stop the webserver (e.g., service apache2 stop) and afterwards start it again (service apache2 start). Then AWStats continues its work (if configured as a cron job) and updates the statistics from the formerly offending point on.
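The deletion can be done with grep; the following sketch demonstrates it on a constructed sample file (paths and the exact pattern are examples; on a real system, stop Apache first and operate on the actual log):

```shell
LOG=/tmp/ssl_access.log    # stand-in for /var/log/apache2/ssl_access.log
printf '%s\n' \
  'x.x.x.x - - [31/May/2017:20:45:53 +0200] "-" 408 1447 "-" "-"' \
  'x.x.x.x - - [31/May/2017:20:45:54 +0200] "GET / HTTP/1.1" 200 1234 "-" "Mozilla/5.0"' \
  > "$LOG"
# Keep everything except the empty-request 408 lines:
grep -v '"-" 408 ' "$LOG" > "$LOG.cleaned"
```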


AWStats no longer updates its statistics although it has worked flawlessly so far. The configured cron jobs are still running, and manual calls yield no success either.

Upgrading openSUSE from a Running System

Update: The script discussed in the text has been replaced by a new version with simplified usage. For details see the comment New Version with Simplified Usage.

Upgrading a system is always a potential cause of pain and damage. This is only for experienced administrators who are at least able to do a "normal" upgrade of an openSUSE system and can take appropriate measures if anything goes wrong. So, this is definitely nothing for newbies, unless you are using the steps described below just for learning. Please heed the following warnings.

Warning: Do not use the attached script or the hints provided here if you don't know what you are doing! Check the script and the hints for obeying your needs and your personal system before. Otherwise, it can damage your system!

Regarding the script: if you continue, in particular, after a message like "***Now disabling old repositories. Continue (y/n)?" has been shown and abort the procedure afterwards, all repositories of the old system will be lost.

Moreover, if you continue after a message like "***Finally starting the upgrade. Continue (y/n)?" and abort the procedure afterwards, you will get an incomplete and definitely unusable new system.

So much for the warnings. We will develop a procedure here to upgrade openSUSE (currently to version 12.1) from a running system, without media. It was inspired by a piece found at; unfortunately, I can't find the exact URL anymore, because's documentation section apparently got mixed up eventually. A complete script is attached at the end; here we describe some important portions.

Initially, we define some symbols: the operating system (OS) in question, the server that hosts the files to be loaded, and the version (NEW_VER) to be installed.



Next, we should establish a reliable starting point for the distribution upgrade. This means doing a usual update of the current version; refreshing the current repositories and updating via zypper will do.

zypper refresh
zypper update

Now all old repositories have to be disabled, because the upgraded system cannot be obtained from them but has to come from new repositories. Warning: From this point on we are in upgrading mode and cannot cancel the procedure without problems or even damage.

zypper modifyrepo --all --disable

So, we add the new standard repositories – which are almost mandatory for the new installation. The general format for that is

zypper ar -cf -n <Name> <URI> <Alias>

where ar means addrepo (add repository) and -cf checks the URI and refreshes the repository automatically. Also, you can change the priority of a repository by

zypper mr -p <Priority> <Alias>

The default priority is 99. It is not so obvious what these priorities really do; it seems that a package of a given version is loaded from the repository with the lowest priority number.

To be concrete, the following three repositories are standard and should always be included.

zypper ar -cf -n "${OS}-${NEW_VER} OSS" ${SERVER}/distribution/${NEW_VER}/repo/oss/ ${OS}-${NEW_VER}-oss
zypper ar -cf -n "${OS}-${NEW_VER} Non-OSS" ${SERVER}/distribution/${NEW_VER}/repo/non-oss/ ${OS}-${NEW_VER}-non-oss
zypper ar -cf -n "${OS}-${NEW_VER} Updates" ${SERVER}/update/${NEW_VER}/ ${OS}-${NEW_VER}-update


Optionally, further standard repositories (source, debug) could be added. In this case, we use the format

zypper ar -cdf -n <Name> <URI> <Alias>

where ar means addrepo and -cdf checks the URI but adds the repository in a disabled state, while still refreshing it automatically when appropriate. Drop the 'd' from "-cdf" to activate a repository. These repositories are probably useful only for developers. Also, you can change the priority of a repository as before; the default is 99 again.

Applying this to our setting, we get these source and debug repositories:

zypper ar -cdf -n "${OS}-${NEW_VER} Source" ${SERVER}/source/distribution/${NEW_VER}/repo/oss/ ${OS}-${NEW_VER}-source
zypper ar -cdf -n "${OS}-${NEW_VER} Debug" ${SERVER}/debug/distribution/${NEW_VER}/repo/oss/ ${OS}-${NEW_VER}-debug
zypper ar -cdf -n "${OS}-${NEW_VER} Debug Updates" ${SERVER}/debug/update/${NEW_VER}/ ${OS}-${NEW_VER}-debug-update


The script attached below includes many other repositories, like openSUSE Build Service, Packman, etc. All of these are commented out by default; just remove the '#' sign to add a repository and switch the "-cdf" argument to "-cf" for activation, or vice versa. This is also the right place to add further repositories, or to uncomment or delete existing ones according to your needs.

Let's get a listing of the new repositories:

zypper repos --uri

Before we proceed, the new repositories should be refreshed.

zypper refresh

Now, first update zypper itself to the version of the upgrade target. This is probably a good idea; however, it may already upgrade a considerable portion of the whole new system and consequently take some noticeable time.

zypper update zypper

Finally, start the distribution upgrade (this presumably takes several hours!):

zypper dist-upgrade

When finished, the system should be rebooted as soon as possible.

Since the attached file is only a script, it is provided – despite the statements in the blog – under this simple licence:

This program is distributed WITHOUT ANY WARRANTY, express or implied.

Permission is granted to make and distribute verbatim copies of this document provided that the copyright notice and this permission notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of this document under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.


Finally, note that the script is not intended for batch processing. It will ask many questions, some coming from the script itself, others induced by the zypper program it calls, which you have to answer (usually with yes or no). So you have to sit at your terminal and watch what happens.

suse_upgrade.sh_.gz (1.42 KB)


12.1 - 13.2

The system shall be upgraded to a new openSUSE distribution without using media like DVDs, CDs, USB sticks or the like, instead performing the upgrade from the running system.

Updating a Drupal-6.x System

This howto is a short description for experienced Drupal administrators who want a check list for system updates. Thus, it is assumed that the basic update mechanism, including the server directory structure, is known. Furthermore, it is assumed that all administrative actions on the website are performed with appropriate permissions. Finally, it shall be emphasized that only updates of a Drupal core version 6.x1 to 6.x2, or of custom modules for 6.x, are considered here (minor updates); major updates (e.g., from 5.x1 to 6.x2 or 6.x1 to 7.x2), as well as updates of custom modules between different major core versions, are not covered.

A comprehensive guide for updating especially the core system (within the 6.x series) can be found as part of the Administration Guide on at this Howto. Beginners should read that first.

The steps for updating custom modules are basically a subset of the corresponding steps for updating the core system. Therefore, both are collected here together, where steps that apply only to the core system are marked by colour and the text [core].

Updating a system is always dangerous. It can end in a non-functional system or cause arbitrary damage. Therefore, the following steps are taken at your own risk and without any warranty!

  1. Download the files to be updated and unpack them if appropriate. [core] From the local copy, delete the files .htaccess, robots.txt and the directory sites, unless it has been stated that these were updated. If they were updated, however, you probably have to apply manual changes to them; in this case, you should include the files in question in the backup from the server (see below).
  2. Switch the website into maintenance mode.
  3. [core] Make a backup of the files from the server. Use a suitable FTP program like FileZilla or Konqueror. It is your choice whether to save all files from the server or only the sites/default directory (the latter contains, among others, the settings for the website and uploaded files). It is not an error to apply this step to updates of contributed modules as well, although it is not imperative when proceeding very carefully.
  4. Make a backup of the database. This is unconditionally required, because at each update unforeseen things can happen. You can use, e. g., the backup_migrate module or even phpMyAdmin. It is not a bad idea to use the one or the other method from time to time, so that you have different restore options if the case arises.
  5. [core] Disable all custom modules. Before doing so, record the current state – for instance as a screenshot (e. g., by means of Firefox Shooter) or manually. You will need this for re-enabling the modules!
  6. When updating custom modules, delete the old files concerned from the server. This is recommended, since otherwise non-overwritten files may cause inconsistencies. [core] When updating the core system alone, this is generally not required, as long as it is a minor update within a major series (here: 6.x).
  7. Copy the new files to the server into the dedicated directory, respectively (generally under sites/all/modules or [core] the document root directory).
  8. Call update.php.
  9. [core] Re-enable all previously disabled custom modules and call update.php again.
  10. Switch the website's maintenance mode off.
  11. Check the updated website.
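
The backup steps 3 and 4 above can be sketched in shell. This is only a minimal sketch, not the procedure of this howto: all paths and the database name are placeholders, the file backup is demonstrated on a throw-away directory standing in for the real document root, and the database dump is only shown as a comment.

```shell
#!/bin/sh
# Sketch of backup steps 3 and 4; paths and the database name are placeholders.
WORK=$(mktemp -d)              # scratch area standing in for the server
DOCROOT="$WORK/htdocs"         # replace with your real document root
mkdir -p "$DOCROOT/sites/default"
echo '<?php // dummy settings' > "$DOCROOT/sites/default/settings.php"

STAMP=$(date +%Y%m%d-%H%M%S)

# Step 3: back up the sites/default directory into a timestamped archive.
tar czf "$WORK/drupal-files-$STAMP.tar.gz" -C "$DOCROOT" sites/default

# Step 4: database backup (not executed in this demonstration):
#   mysqldump --single-transaction drupal | gzip > "drupal-db-$STAMP.sql.gz"

tar tzf "$WORK/drupal-files-$STAMP.tar.gz"
```

The timestamp in the archive name lets you keep several backups side by side and identify the right one when restoring.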



A Drupal system shall be updated within the 6.x series – core, custom modules or both.

Problems when Configuring the Login Screen by the KDE4 Login Manager

After updating a KDE3-based Linux system (in my case openSUSE) to a new version based on KDE4, it may happen that the login screen can no longer be changed via the login manager of KDE4 (System Settings|System|Login Manager). Even though the settings made there are stored correctly at the dedicated location


this apparently has no effect. In contrast, configuration via the login manager of KDE3 is still possible; however, after the installation of KDE4 the latter can only be called from the command line by

kcontrol &

(and then System Administration|Login Manager).

To carry out such configurations via the KDE4 login manager, in /etc/sysconfig/displaymanager the entry

DISPLAYMANAGER="kdm4"

has to be set. (If the value is simply "kdm", the system apparently always calls the KDE3 version.) With openSUSE, the simplest way to set this value is to use YaST: choose System|/etc/sysconfig Editor and then Desktop|Display manager|DISPLAYMANAGER (enter kdm4 manually here if it is not offered by the selection list).

With this, the KDE4 login manager should work. However, if you also want to change the design of the login screen, with openSUSE the default entry for DISPLAYMANAGER_KDM_THEME in /etc/sysconfig/displaymanager additionally has to be replaced by

DISPLAYMANAGER_KDM_THEME=""

which again can be done via YaST: System|/etc/sysconfig Editor and then Desktop|Display manager|DISPLAYMANAGER_KDM_THEME (enter an empty value here).
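
If you prefer the command line over YaST, both settings can also be applied with sed. The following is only a sketch: it works on a throw-away copy whose initial values are assumptions; for the real system you would point FILE at /etc/sysconfig/displaymanager and run as root.

```shell
# Set DISPLAYMANAGER to "kdm4" and empty DISPLAYMANAGER_KDM_THEME.
# Demonstrated on a sample copy; the initial values written here are assumptions.
FILE=$(mktemp)                 # use /etc/sysconfig/displaymanager on the real system
printf '%s\n' 'DISPLAYMANAGER="kdm"' 'DISPLAYMANAGER_KDM_THEME="SUSE"' > "$FILE"

sed -i -e 's/^DISPLAYMANAGER=.*/DISPLAYMANAGER="kdm4"/' \
       -e 's/^DISPLAYMANAGER_KDM_THEME=.*/DISPLAYMANAGER_KDM_THEME=""/' "$FILE"

cat "$FILE"
```

Note that the first pattern requires the equals sign directly after DISPLAYMANAGER, so it does not accidentally match the DISPLAYMANAGER_KDM_THEME line.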


KDE 4.3 / KDE 3.5 / KDM
openSUSE 11.2 (x86_64)

Settings made in the KDE4 login manager have no effect on the login screen.

Sound problems after updating KDE

After updating from openSUSE 11.1 to version 11.2, with the incorporated transition from KDE 3.5 to 4.3, on one PC the sound generated by KDE applications (including system notifications) only worked every once in a while: when logging in, it was either there or it stayed absent until the end of the session; this behaviour appeared arbitrary and non-reproducible. At the same time, non-KDE programs did not show such problems – their sound worked flawlessly.

Apparently, there was an initialization problem: something disturbed the KDE sound system during session start, and depending on whether KDE or the other system won, the symptoms described occurred.

The KDE help contained nothing on this problem. Various internet forums describe similar symptoms and offer approaches; I tried many of them without success, until I finally found the crucial point (unfortunately, I no longer know the source): KDE4 and PulseAudio – as of the current versions – do not get along with each other.

Indeed, on my system PulseAudio was activated. This setting was taken over by the update from the former installation; when installing openSUSE 11.2 from scratch, PulseAudio stays uninstalled by default (at least that is how it appeared on another PC), so that the problem in question does not arise in such a case.

Once the cause of the problem is identified, the solution is rather simple:

  1. Uninstall pulseaudio along with the packages depending on it. Take this opportunity to check that phonon and phonon-backend-xine are installed and all dependencies are resolved.
  2. Unfortunately, non-KDE sound now stops working. This is because the system sets two environment variables, SDL_AUDIODRIVER and ALSA_CONFIG_PATH, that still refer to the PulseAudio system. On openSUSE, this takes place in the file /etc/environment, where you should delete or comment out the corresponding entries:

    After a re-login, both KDE and non-KDE applications should provide sound again.
  3. If sound still malfunctions, you could try, for instance, one of the following: uninstall phonon-backend-gstreamer and arts, set the system variable PULSEAUDIO_ENABLE to "no" (in /etc/sysconfig/sound or via the item Hardware/Soundcard in the sysconfig editor), or add the user to the group audio.
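
Step 2 can likewise be sketched with sed. Again, this is only a sketch on a throw-away copy, since the exact entries in /etc/environment may differ (the values written below are assumptions); for the real system, point FILE at /etc/environment and run as root.

```shell
# Comment out the PulseAudio-related entries (step 2 above).
# Demonstrated on a sample copy; the variable values written here are assumptions.
FILE=$(mktemp)                 # use /etc/environment on the real system
printf '%s\n' 'SDL_AUDIODRIVER=pulse' 'ALSA_CONFIG_PATH=/etc/alsa-pulse.conf' > "$FILE"

sed -i -e 's/^\(SDL_AUDIODRIVER=\)/# \1/' \
       -e 's/^\(ALSA_CONFIG_PATH=\)/# \1/' "$FILE"

cat "$FILE"
```

Commenting out rather than deleting makes it easy to restore the entries should you ever reinstall PulseAudio.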



KDE 4.3.1
openSUSE 11.2 (x86_64)

KDE sound only works sporadically, although it works fine with non-KDE programs.

Navigation by Maps on Smartphones with GpsMid

GpsMid is an excellent navigation software for Java smartphones. It is not only free and open-source itself, but also uses the free map data from OpenStreetMap. Software and map data – packed in a JAR file as a MIDlet – can be uploaded from the PC to the mobile phone; as long as one does not reload further data from there, no charges for mobile connections will accrue. Furthermore, the program can also be used without a GPS device – even though the name suggests otherwise; in this case, one has to navigate manually, which is mitigated by the fact that on-location navigation is only a special case of general map use anyway.

So, how do I get the map on the smartphone? The simplest way is to use one of the ready-to-use packages from the GpsMid project page. If you find one that is in accordance with your needs, just download the corresponding JAR and optionally JAD file to your PC, copy them to the phone, install them there, and you are done. (How to install software on a smartphone depends on the model and is not discussed further here.)

But what if your desired region is not there, or the operating system of the mobile phone refuses to install the JAR file? The latter can happen even if the memory card has enough capacity, because there are often limits on the size of a single JAR file to be installed. So, if this way fails, you can create your own MIDlet on your PC, consisting of a JAR and a corresponding JAD file, that contains the appropriate map region; it is also possible to include several regions in one file.

For that, first of all you have to have the following on your PC:

  1. The Java Runtime Environment (JRE), at least version 1.5.

  2. The program Osm2GpsMid from the GpsMid download page (the current version of the file is Osm2GpsMid-0.6.jar at the time of this writing).

  3. An OSM file that contains OpenStreetMap XML data at least for the desired region. The Osm2GpsMid project page refers to some available download sources. I personally have had good experiences with Geofabrik.

  4. A properties file that controls the generation of the resulting map material by Osm2GpsMid. This configuration file is required to have the filename extension .properties. You have to create it manually, possibly starting from a template. How to do that is explained next.

The most important setting in the .properties file is the desired region. You define it by specifying the minimum and maximum values for the latitude and longitude, respectively; for example:

region.1.lat.min = 52.5008
region.1.lat.max = 52.5406
region.1.lon.min = 13.4047
region.1.lon.max = 13.4671

One can specify up to 9 regions serially, one after the other, that will be extracted from the raw data downloaded according to item 3. The best way to obtain the needed values for latitude and longitude is to go to www.openstreetmap.org, choose the desired region there, click on Export, adjust the area manually again if needed, and then look at the area data displayed: bottom and top for the latitude degrees (lat), left and right for the longitude degrees (lon), respectively. Of course, the export itself is not to be performed here; you just need the numbers. In case the JAR file resulting at the end of the following procedure turns out to be too big, as mentioned above, you should choose a smaller region; if nothing else helps, you may divide the region over several MIDlets (see below).

Other settings that may (optionally) be made in the .properties file concern special target systems (app), the specification of a style file (style-file), or the options to switch routing or editing on and off (useRouting and EnableEditing); more on this is described on the Osm2GpsMid project page. Most of these settings influence the size of the resulting JAR file.

A useful option, finally, is the setting midlet.name. By this you can specify a dedicated name for the resulting MIDlet (instead of the default GpsMid). This allows for uploading multiple instances of the program with different map data to the mobile phone, e. g., midlet.name = GpsMidBerlin.

An example of a full .properties file (optimized for small size) is given in the attachment. Using the raw data of berlin.osm.bz2 from the Geofabrik download page for Germany (cf. item 3 above) and following the procedure described below, it produces a file GpsMidBerlin-0.6.jar with a size close to 800 KiB.
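
Such a .properties file can also be created from a script. The following is a minimal sketch with the region values from the Berlin example above; the midlet.name value is an assumption, chosen to match the output name GpsMidBerlin-0.6.jar.

```shell
# Write a minimal .properties file for Osm2GpsMid into a scratch directory.
# midlet.name is an assumption matching the output name mentioned in the text.
cd "$(mktemp -d)"
cat > berlin.properties <<'EOF'
midlet.name = GpsMidBerlin
region.1.lat.min = 52.5008
region.1.lat.max = 52.5406
region.1.lon.min = 13.4047
region.1.lon.max = 13.4671
EOF
cat berlin.properties
```

Such a generated file can then be extended with the optional settings mentioned above before running Osm2GpsMid.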

Now, after you have collected all required prerequisites, copy the files obtained in items 2 to 4 into a directory on your PC. This directory should then contain:

  • The Java archive Osm2GpsMid-0.6.jar (or a newer version of it, if available).
  • An OSM file containing the map region(s) to be generated (for this example berlin.osm.bz2).
  • The configuration file created, e. g., from the attachment (adjusted to your own needs if necessary).

From this directory call the command

java -Xmx1024M -jar Osm2GpsMid-0.6.jar berlin.osm.bz2 berlin

For this example, it starts the program Osm2GpsMid to generate a MIDlet for map navigation from the raw data of berlin.osm.bz2, configured by the settings in; the parameter -Xmx1024M allocates (hopefully) enough memory for the Java machine.

The result will be – for this example again – two files, GpsMidBerlin-0.6.jad and GpsMidBerlin-0.6.jar, that can be uploaded to and installed on the smartphone as described above.

Attachment: berlin.properties (1.28 KB)


GpsMid 0.6
Java ME/MIDP 2.0
(tested on an LG KP500)

GpsMid including map data shall be installed on a Java smartphone. Under some circumstances, the resulting JAR file exceeds a limit set by the operating system.