Customizing your Spamassassin mail filtering

Every day, thousands of mail servers pass millions of messages through Spamassassin, yet most of these installations use only the standard Spamassassin rules and aren’t taking advantage of many of the great features Spamassassin has to offer.

The first thing you need to know about enabling custom rules and setting up Spamassassin in general is this: do your work in your local configuration file (local.cf) or in unique files called from it, and do NOT change the default rule files. If you modify the stock rule files, your changes will be overwritten when updates arrive.

If you open local.cf you will see a default file with some commented-out common features. The first feature we want to enable is Bayesian filtering, which you can do by adding the following to the bottom of the file:

use_bayes 1
#bayes_auto_learn 1
bayes_ignore_header X-Bogosity
bayes_ignore_header X-Spam-Flag
bayes_ignore_header X-Spam-Status

This will enable Bayesian filtering and prevent the Bayesian classifier from relying on headers that may be forged or set by Spamassassin itself in other areas. In addition, we tell the Bayesian filter NOT to automatically classify messages as spam or not spam based on statistical analysis… this is a feature we will want eventually, but not until we have at least a few hundred messages classified as each of spam and ham.

How do we do manual classification of messages? We use the sa-learn program, and it’s very easy. On the server, open your mailbox in mutt (mutt -f /path/to/box/or/Maildir), then save the messages that are SPAM into a spam folder and the messages that are HAM into a ham folder. Once we have done this for a few hundred messages, we run sa-learn against them to classify them.

core:~# sa-learn --showdots --spam /home/testuser/Maildir/spammy
core:~# sa-learn --showdots --ham /home/testuser/Maildir/goodmail

This teaches Spamassassin about the specific messages we are receiving… and about mail it shouldn’t mark as spam. Once we have at least 100 of each, we can turn bayes_auto_learn on by uncommenting it in the file. Once we do this, Spamassassin will mark items as spam and ham automatically when they fall on the extreme ends of the scale. There are some additional useful items to add to the configuration for Bayes filtering as well:

bayes_auto_learn_threshold_spam 10
bayes_auto_learn_threshold_nonspam 0.50

These tell Spamassassin that a message must score above 10 before it is added to the Bayes database as spam, and must score below 0.50 to be auto-classified as ham. These are good starting values that can later be tweaked downward as the system becomes more familiar with both types of messages your network receives.

If your mail server is single-purpose and doesn’t have a wide variety of Perl libraries installed, you should install at least Net::DNS and its prerequisites so Spamassassin’s RBL checks work correctly.

The next item to review is DNS block-lists. Many people feel strongly about these, either positively or negatively. To enable them, simply add the following to your local configuration file:

skip_rbl_checks 0

to disable:

skip_rbl_checks 1

You can further customize which lists you wish to score by setting the actual rule name of the list to a score of zero. For example, if you wish to disable the dynamic IP address check for SORBS, you would find its rule definition, which looks like this:

header RCVD_IN_SORBS_DUL        eval:check_rbl('sorbs-lastexternal', '', '')
describe RCVD_IN_SORBS_DUL      SORBS: sent directly from dynamic IP address
tflags RCVD_IN_SORBS_DUL        net

Which shows the name is RCVD_IN_SORBS_DUL. To disable it in your local configuration file you would add a line like this:

score RCVD_IN_SORBS_DUL 0

while still keeping the rest of the dnsbls intact.

Another thing many people neglect to do is alter the weighting of the rules. This is as easy as determining the rule’s name and applying a score to it in your local configuration file.

score RULE_NAME newscore

For example, if you wanted to alter the weighting for the rules related to some types of viagra spam you might add the following.

score FM_VIAGRA_SPAM1114 4
score DRUG_ED_CAPS 2

The last thing you might want to do is write a filter for a custom type of spam you’re receiving. Let’s assume you’re frequently getting spam for fooglesnaps. The rule for this is very easy to write.

body LOCAL_FOOGLESNAPS_RULE /\bfooglesnaps\b/i
score LOCAL_FOOGLESNAPS_RULE 0.5
describe LOCAL_FOOGLESNAPS_RULE  This is to help catch the fooglesnaps spam.

What this rule says is: any time fooglesnaps appears, regardless of case, with a word break on both sides, this rule applies. The next line sets the base score for the rule, in this case half a point. Always start around 0.1-0.5 and work your way up, to make sure you don’t mark mail incorrectly. The final line is just a human-readable description of the rule. You can use pretty much any Perl regex to filter your spam. The sky is the limit; however, you should be sure you NEED a new rule and can’t just use a black- or white-list to correct your delivery issue, because rules are more expensive in CPU and memory than simple black and white lists.

This covers some of the basic and common setup tasks for a new Spamassassin installation that many users miss or don’t enable. As always, if you need further assistance with this or any other open source application or issue, the experts at Pantek Inc. are available 24/7 at 216-344-1614 and 1-877-LINUX-FIX! We look forward to working with you.

OpenVZ, backups, and you

Virtualization technologies are moving to the forefront of computing. VMWare, OpenVZ (Virtuozzo), VirtualBox, Xen, KVM, Hyper-V… the list grows. Today we’ll take a look at the most basic and critical of functions in any environment: backups. In today’s example we’ll look specifically at backups in OpenVZ. First, let’s talk about what kind of virtualization OpenVZ provides.

OpenVZ is OS-level virtualization, as opposed to the full virtualization you see in VMWare or the paravirtualization options you see with KVM. The system functions as a series of chrooted jails that completely segregate each container from the other containers on the system. Because they share most common files, there is often a significant saving of disk space on a host with many containers. Because there is less emulation of hardware occurring, this also tends to yield good performance compared to paravirtualization. The ability to update the host alone, and with it all the guests relying on the host, has made this an attractive option for many hosting providers and companies with a large infrastructure to maintain.

So let’s dig into backups a bit. OpenVZ provides vzdump for backing up containers. Vzdump takes a snapshot of the configuration and private area of a container and backs it up, as a tarball, to a directory you specify. This backup can later be restored to the same or a different machine with little difficulty.

There are three basic forms that vzdump can operate in. The first is stop the container and back it up, which requires significant downtime. The second is rsync and suspend/resume the container, requiring minimal downtime. The third involves the use of lvm2 shadow snapshots, which requires no downtime.

In its simplest form it consists of the following:

vzdump [container-id-number]

This will simply take the private area and configuration file and dump them as a tarball to the default directory (/vz/dump). It does stop the container to perform this operation, which can take anywhere from five minutes to several hours to complete. For development servers this is an acceptable option, but for most production servers this kind of downtime is unacceptable.

The second way we can perform backups is to suspend the VM just long enough for rsync to capture the differences between the start of the backup and its completion. This still results in downtime, but it is minimal, which makes this option much more desirable.

vzdump --suspend [container-id-number] --dumpdir /backups

The options here are fairly simple. The --suspend flag tells vzdump to use rsync, and the --dumpdir parameter tells vzdump to place the backups outside the default area, in the /backups directory. The short amount of downtime makes this a much more acceptable method of backing up for most companies. Most of us want no downtime, which brings us to our third option.

The third option is to use lvm2 to create a snapshot of the OS. This option isn’t syntactically more difficult than the previous two, but it does offer a full backup with no downtime.

vzdump --snapshot [container-id-number]

Here we take a simple backup and dump it to the /vz/dump directory (the default), and we tell vzdump to make it a snapshot backup so there is no downtime involved in getting the backup. This is by far the best option for most operators, as host downtime can often lead to lost revenue.
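Once you have settled on the snapshot method, it is easy to schedule. A hypothetical cron entry (the container ID 101 and the /backups mount point are assumptions for illustration):

```
# /etc/cron.d/vzdump-nightly (hypothetical): snapshot-dump container 101
# at 02:00 every night, with no downtime, into /backups
0 2 * * * root vzdump --snapshot --dumpdir /backups 101
```

Pair this with log monitoring or a mail alias for root so a failed dump doesn’t go unnoticed.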

So now that you have a backup made, what do you do with it? Well, vzdump is the answer there as well: the tool that makes the backups also restores them, and the syntax is what you would expect.

vzdump --restore /backup/vzdump-containeridnumber.tar [container-id-number]

Some versions of OpenVZ do not support vzdump --restore; if this is the case, you should look at using vzrestore like so:

vzrestore /backup/vzdump-containeridnumber.tar [container-id-number]

This provides exactly the same effect as using vzdump --restore.

If you’re unsure of the ID of your container, you can get a full list of IDs by running vzlist. There is also the option of using --all instead of a specific container ID, which will perform the operation on all containers running on that host.

This covers the basics of using vzdump and how to restore the backups created with it. Virtuozzo offers a lot of flexibility in performing backups, and if the option you prefer isn’t available for your OS, there are a lot of third-party tools available as well; for example, vzbackup from Parallels for Virtuozzo supports backing up to remote sites. You can even schedule simple tar backups, remote rsync backups, or code your backups to fit into your existing backup scheme. Just be sure to correctly back up and test your restores no matter what method you use; any backup is only as good as your ability to restore it!

As always, if you need further assistance with this or any other open source application or issue, the experts at Pantek Inc. are available 24/7 at 216-344-1614 and 877-LINUX-FIX! We look forward to working with you.

Tar, from basic to advanced, a high quality reliable backup tool

Tar is one of the earliest applications used for backups on unix, and it’s still a very functional backup tool. Tar is both the file format the tar application generates and the application itself; for that reason the files generated by tar are generally referred to as tarballs.

Tar stands for Tape ARchiver, and it backs up in a sequential manner, storing permissions, directory structure, and filesystem information as well as special attributes by default. Typically a tarball will have .tar as an extension unless it’s compressed, in which case you’ll typically have .tar.ext; on Windows the three-letter extension convention is typically followed by shortening the names: .tar.gz -> .tgz, .tar.bz2 -> .tb2, etc. The actual structure of a tarball is fairly simple: a concatenation of the files being archived, each preceded by a small header (512 bytes), with each file’s data zero-padded up to the next multiple of 512 bytes, and the archive terminated by 1024 bytes of zeros. This makes the archive divisible without remainder by 512 bytes and delineates the end of the record. Therefore anything backed up in a tar archive will be a minimum of 1536 bytes per file plus necessary header, ownership, and filesystem information; with tar’s default record size this yields a realistic minimum archive size of ~8-12kb. The biggest problem with tar is that it is entirely sequential: there is no metadata relating to where to find each individual piece of information in the tarball (like an index or table of contents), and thus it’s often hard to extract a single piece quickly from a large tarball.
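You can see the 512-byte blocking directly with GNU tar. A quick sketch: the -b 1 flag sets the blocking factor to a single 512-byte record, so the default 10KiB record padding doesn’t hide the arithmetic.

```shell
# A 1-byte file still costs a full 512-byte header plus a 512-byte
# data block, and the archive ends with 1024 bytes of zeros:
# 512 + 512 + 1024 = 2048 bytes total.
tmp=$(mktemp -d)
printf 'x' > "$tmp/tiny"
tar -b 1 -cf "$tmp/tiny.tar" -C "$tmp" tiny
wc -c < "$tmp/tiny.tar"    # 2048
rm -rf "$tmp"
```

Leave off -b 1 and the same archive comes out at 10240 bytes, which is where the "realistic minimum" above comes from.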

So now that we understand the basics of what tar is, let’s talk about how to use it…

The most common options we’ll use with tar for creation of an archive are c (create), f (file), v (verbose), z (gzip), p (preserve permissions), and j (bzip2). Let’s talk about what a few of these do. Any time you create a new tarball you will use the c option, which means create. Most often, unless you are backing up to tape, you will use the f option as well; this means read and write the archive to and from a file. If you want permissions preserved when extracting, use the p option. The z and j options are mutually exclusive. Although less common, you may also see other extensions that denote alternate compression formats (lz, for example); these, however, are not standard and change from version to version of tar.

The most common options for decompression of a tarball are x (extract) and t (list). Extract does exactly what you imagine it would do: it extracts the contents of the tarball to the current location, unless you specify an alternate location with the -C option. List shows you what files are in the archive and where they will extract to.

You will also occasionally see the u (update) option; it cannot be used in conjunction with the x (extract) or c (create) options. It updates the archive with additional files and changes to the existing files at a file level (not block level). So let’s see a few examples of commands you will commonly use with tar.

tar -xvzf tarball.tar.gz

This command will extract a gzip compressed tarball into the current location with the stored path.

tar -cvzf tarball.tar.gz .

This command will store the current directory as a gzip compressed tarball.

tar -tvzf tarball.tar.gz

Will display the contents of the tarball without extracting anything.

Those are the bare-minimum basics of tar, and they cover 90%+ of the times you will use tar… but tar has many more features available.

For instance, you can exclude files by using the --exclude "path/filename" syntax.

tar --exclude "/etc/resolv.conf" --exclude "/dev" -czvf tarball.tar.gz /

One caveat for the --exclude syntax: some versions of tar require the excludes to occur before the path to tar, some require them after. Check the man page for the version of tar you’re using.

You can use -C to extract a tarball to a different directory.

tar -xzvf tarball.tar.gz -C /usr/local/src

You can utilize the -f option to read the archive from standard input (or write it to standard output) by specifying the filename as -

cat /usr/local/src/tarball.tar.gz | tar -xzvf -

This brings us to one nice option you can use with tar: because it correctly handles piping, you can pipe it over remote connections with netcat or ssh:

cd /usr/local/configuration && ssh root@hostname.domain.tld "cat /backups/server1/configuration.tar.gz" | tar -xzvf -


tar -cvzf - /usr/local/configuration | ssh root@hostname.domain.tld "cat > /backups/server1/configuration.tar.gz"
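The same pipe-handling works purely locally too, which is handy for copying a tree between filesystems while preserving structure. A small self-contained sketch with throwaway paths:

```shell
# Pack a directory to stdout and unpack it elsewhere in one pipeline,
# the local equivalent of the ssh examples above.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/conf"
echo "key=value" > "$src/conf/app.ini"
tar -czf - -C "$src" conf | tar -xzf - -C "$dst"
cat "$dst/conf/app.ini"    # key=value
rm -rf "$src" "$dst"
```

Swap either side of the pipe for an ssh or netcat invocation and you have the remote versions shown above.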

The last thing people generally want to do with tar is either update an existing tarball with additional files or create an incremental backup from an existing tarball and directory structure. This is fairly easily done using the following commands. First, to update a tarball…

tar -cvf tarball.tar /path/to/backup

First create an initial tarball (note that update mode only works on uncompressed archives, so we leave off the z option here).

tar -uvf tarball.tar /path/to/backup

Then update it with the changed contents.
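Update mode is easy to sanity-check end-to-end. A self-contained sketch with temporary paths (GNU tar refuses to update compressed archives, so the demo uses a plain .tar):

```shell
# Create an archive, add a file to the source tree, then update:
# only the new/changed entries are appended to the existing tarball.
tmp=$(mktemp -d)
mkdir "$tmp/data"; echo one > "$tmp/data/a"
tar -cf "$tmp/backup.tar" -C "$tmp" data
echo two > "$tmp/data/b"
tar -uf "$tmp/backup.tar" -C "$tmp" data
tar -tf "$tmp/backup.tar" | grep 'data/b'   # the archive now contains data/b
rm -rf "$tmp"
```

Because update appends rather than rewrites, the archive grows over time; it never reclaims space from superseded copies of a file.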

Easy enough, but what if you need previous revisions? That’s as easy as setting up tar to do incremental backups.

tar --listed-incremental=/backups/server1/index.snar -czvf /backups/server1/$(date +%Y%m%d%H%M%S).tar.gz /path/to/backup

The first time this runs it creates a full (level-0) backup and records the state of every file in the index.snar snapshot file.

tar --listed-incremental=/backups/server1/index.snar -czvf /backups/server1/$(date +%Y%m%d%H%M%S).tar.gz /path/to/backup

Each subsequent run of the same command produces a new, timestamped archive containing only the files that changed since the previous run.

The major requirement if you’re using this method is that you extract all of the archives in order, as tar will remove and create files as required; this is fairly trivial with a simple for loop.
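Here is a compact end-to-end sketch, with throwaway paths standing in for /backups/server1: a full backup, an incremental after some changes, then the in-order restore loop. Passing --listed-incremental=/dev/null on extraction tells GNU tar to honor the incremental metadata, including deletions.

```shell
tmp=$(mktemp -d)
mkdir "$tmp/data" "$tmp/bk" "$tmp/restore"
echo 1 > "$tmp/data/a"
# Level-0 (full) backup; creates the snapshot index.
tar --listed-incremental="$tmp/index.snar" -czf "$tmp/bk/01.tar.gz" -C "$tmp" data
echo 2 > "$tmp/data/b"; rm "$tmp/data/a"
# Incremental: records that b was added and a was deleted.
tar --listed-incremental="$tmp/index.snar" -czf "$tmp/bk/02.tar.gz" -C "$tmp" data
# Restore: extract every archive in order; timestamped names sort correctly.
for f in "$tmp"/bk/*.tar.gz; do
    tar --listed-incremental=/dev/null -xzf "$f" -C "$tmp/restore"
done
ls "$tmp/restore/data"    # only b, matching the live tree
rm -rf "$tmp"
```

Because the archive names above embed the timestamp, a plain shell glob already yields chronological order.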

As you can see, tar is a very functional backup tool that exists on almost every unix distribution worldwide. So the next time you’re looking for a quick backup solution that doesn’t require installing additional applications, take a look at tar; it really is a high-quality, tested backup tool that enables you to reliably archive and restore data in a variety of ways.

Remember: backup, backup, backup. Back up before you perform any system changes and you can save yourself dozens of hours of headache.

As always, if you need further assistance with this or any other open source application or issue, the experts at Pantek Inc. are available 24/7 at 216-344-1614 and 877-LINUX-FIX.

FOP2 Installation on Trixbox 2.8

Trixbox 2.8 is a nice program, but the FOP that comes with it is heavily outdated and lacking the visual and functional appeal you would expect in a project as slick as Trixbox. Never fear, however: that can be solved by installing FOP2 from the same author. The only disadvantage of FOP2 over FOP is that it requires a site license. This costs $40 for the entire site, which is a steal considering how much better it functions than the original FOP and how affordable that is compared to other operator panels. If you don’t have more than 15 extensions, queues, conferences and trunks, you can use it in trial mode and see all the features as well. You can download a copy of FOP2 and register it at the FOP2 site. Without further ado, here is the basic installation procedure.

First and foremost before any changes to a system… ensure you have a valid working backup!

Make a temporary directory and download the package and uncompress it.

mkdir -p /usr/local/src/packages/fop2
cd /usr/local/src/packages/fop2
curl > fop2.tgz
tar xfvz fop2.tgz

Then change directory to fop2 and run the installation; this requires make to be installed.

cd fop2
make install

This will copy the necessary files to various locations on the system, mostly under /usr/local/fop2 and /var/www/html/fop2.

Next stop the original FOP and copy the extensions override to the appropriate file in asterisk.

amportal stop_fop
cd /usr/local/fop2
cat extensions_override_freepbx.conf >> /etc/asterisk/extensions_override_freepbx.conf

After that step is complete we need to install the mysql database for the visual phone book.

cd /var/www/html/fop2
mysqladmin -u root -p create fop2
mysql -u root -p fop2 < mysql.db
mysql -u root -p -e "grant all privileges on fop2.* to fop2@'localhost' identified by 'xxxxxxxxxxx'"

Next let’s edit a few files to set credentials and privileges. The first file is config.php in the current directory; it needs the database credentials you just set up.

vi config.php

The asterisk manager should have originate rights, so edit…

vi /etc/asterisk/manager.conf

and add “,originate” to the end of the read and write lines. We need to make sure we generate call events, so add “callevents=yes” to the sip_custom.conf file.

vi /etc/asterisk/sip_custom.conf

For the queues_custom.conf file, you need to add “eventwhencalled=yes” to each queue. As an example:

vi /etc/asterisk/queues_custom.conf

Finally, verify that the auto-config script for users will run correctly by executing it.


If you have extensions that start with the number 0, you will likely need to add 10# to force it to use decimal and not octal numbers. Finally, have FOP2 perform an internal self-test by running…

/usr/local/fop2/fop2_server --test

If all is well at this point we’re ready to move onto the final step to make amportal start FOP2 instead of FOP.

cd /var/lib/asterisk/bin
mv freepbx_engine freepbx_engine.orig

You can either download the updated script or alternately paste it in from here:

#!/usr/bin/env bash

ROOT_UID=0       # root uid is 0
E_NOTROOT=67     # Non-root exit error

# check to see if we are root
if [ "$UID" -ne "$ROOT_UID" ]; then
    echo "Sorry, you must be root to run this script."
    exit $E_NOTROOT
fi

# make sure config file exists
if [ ! -e "/etc/amportal.conf" ]; then      # Check if file exists.
    echo "/etc/amportal.conf does not exist!"
    echo "Have you installed the AMP configuration?"
    exit 1
fi
# Set some defaults which can be re-defined in amportal.conf

. /etc/amportal.conf

if [ "$ASTRUNDIR" = "/var/run" ]; then
    echo "astrundir in /etc/asterisk/asterisk.conf is set to '/var/run' - THIS IS WRONG."
    echo "Please change it to something sensible (eg, '/var/run/asterisk') and re-run"
    echo "install_amp"
    exit 1
fi

if [ ! -d "$ASTRUNDIR" ]; then
    echo "astrundir in /etc/asterisk/asterisk.conf is set to $ASTRUNDIR but the directory"
    echo "does not exist. Attempting to create it with: 'mkdir -p $ASTRUNDIR'"
    mkdir -p $ASTRUNDIR
    RET=$?
    if [ $RET != 0 ]; then
        echo "Attempt to execute 'mkdir -p $ASTRUNDIR' failed with an exit code of $RET"
        echo "You must create this directory and then try again."
        exit 1
    fi
fi

chown_asterisk() {

chmod -R g+w /etc/asterisk
chmod -R g+w $ASTVARLIBDIR
chmod -R g+w $ASTLOGDIR
chmod -R g+w $ASTSPOOLDIR
chmod -R g+w $AMPWEBROOT/admin
chmod -R g+w $FOPWEBROOT
chmod -R g+w $AMPWEBROOT/recordings
chmod u+x,g+x $ASTVARLIBDIR/bin/*

if [ "$ASTAGIDIR" != "" ]; then
chmod u+x $ASTAGIDIR/*
chmod u+x $ASTVARLIBDIR/agi-bin/*
fi

chmod u+x,g+x $AMPBIN/
chmod u+x,g+x $FOPWEBROOT/*.pl
chmod u+x $FOPWEBROOT/safe_opserver
chown $AMPASTERISKUSER /dev/tty9

# Ensure that various hardware devices are owned correctly.
[ -e /dev/zap ] && chown -R $AMPDEVUSER:$AMPDEVGROUP /dev/zap
[ -e /dev/dahdi ] && chown -R $AMPDEVUSER:$AMPDEVGROUP /dev/dahdi
[ -e /dev/capi20 ] && chown -R $AMPDEVUSER:$AMPDEVGROUP /dev/capi20
[ -e /dev/misdn ] && chown -R $AMPDEVUSER:$AMPDEVGROUP /dev/misdn
[ -e /dev/mISDN ] && chown -R $AMPDEVUSER:$AMPDEVGROUP /dev/mISDN
[ -e /dev/dsp ] && chown -R $AMPDEVUSER:$AMPDEVGROUP /dev/dsp

echo Permissions OK
}

check_asterisk() {
# check to see if asterisk is running
# Note, this isn't fool-proof.  If safe_asterisk is constantly restarting a dying asterisk, then there is a chance pidof will return non zero.  We call this twice to reduce chances of this happening
pid_length=`pidof asterisk|awk '{print length($0)}'`
if [ "$pid_length" == "0" -a "$pid_length" != "" ]; then
killall -9 safe_asterisk
killall -9 mpg123 > /dev/null
echo "-----------------------------------------------------"
echo "Asterisk could not start!"
echo "Use 'tail $ASTLOGDIR/full' to find out why."
echo "-----------------------------------------------------"
exit 0
fi
}

run_asterisk() {
# check to see if asterisk is running
pid_length=`pidof asterisk|awk '{print length($0)}'`
if [ "$pid_length" != "0" -a "$pid_length" != "" ]; then
    echo "Asterisk is already running"
else
# su - asterisk -c "export PATH=$PATH:/usr/sbin && export LD_LIBRARY_PATH=/usr/local/lib && /usr/sbin/safe_asterisk"
export LD_LIBRARY_PATH=/usr/local/lib
/usr/sbin/safe_asterisk -U asterisk -G $AMPASTERISKGROUP
sleep 5
sleep 1
echo "Asterisk Started"
fi
}

stop_asterisk() {
pid_length=`pidof asterisk|awk '{print length($0)}'`
if [ "$pid_length" != "0" -a "$pid_length" != "" ]; then
/usr/sbin/asterisk -rx "core stop gracefully" | grep -v "No such command"
/usr/sbin/asterisk -rx "stop gracefully" | grep -v -E "No such command|deprecated"
echo "Asterisk Stopped"
fi
}

check_fop2() {
#check to see if FOP2 is running
pid_length=`pidof -x fop2|awk '{print length($0)}'`
if [ "$pid_length" == "0" -a "$pid_length" != "" ]; then
ps -ef | grep fop2 | grep -v grep | awk '{print $2}' | xargs kill -9
echo "-----------------------------------------------------"
echo "The FOP2's server could not start!"
echo "Please correct this problem"
echo "-----------------------------------------------------"
exit 0
fi
}

run_fop2() {
# check to see if FOP2 is running
pid_length=`pidof -x fop2|awk '{print length($0)}'`
if [ "$pid_length" != "0" -a "$pid_length" != "" ]; then
    echo "FOP2 server is already running"
else
    /etc/init.d/fop2 start
    # Don't really like to run fop2 with root privileges. should be user: AMPASTERISKUSER = asterisk
fi
}

stop_fop2() {
/etc/init.d/fop2 stop
}

kill_amp() {
killall -9 safe_asterisk
killall -9 asterisk
killall -9 mpg123
ps -ef | grep safe_opserver | grep -v grep | awk '{print $2}' | xargs kill -9
killall -9
}

case "$1" in
start)
    if [ -z "$FOPRUN" -o "$FOPRUN" == "true" -o "$FOPRUN" == "TRUE" -o "$FOPRUN" == "True" -o "$FOPRUN" == "yes" -o "$FOPRUN" == "YES" -o "$FOPRUN" == "Yes" ]; then
    if [ -z "$FOPDISABLE" -o "$FOPDISABLE" == "false" -o "$FOPDISABLE" == "FALSE" -o "$FOPDISABLE" == "False" -o "$FOPDISABLE" == "no" -o "$FOPDISABLE" == "NO" -o "$FOPDISABLE" == "No" ]; then
        run_fop2
    fi; fi
    run_asterisk
    ;;
stop)
    stop_asterisk
    sleep 1
    if [ -z "$FOPRUN" -o "$FOPRUN" == "true" -o "$FOPRUN" == "TRUE" -o "$FOPRUN" == "True" -o "$FOPRUN" == "yes" -o "$FOPRUN" == "YES" -o "$FOPRUN" == "Yes" ]; then
    if [ -z "$FOPDISABLE" -o "$FOPDISABLE" == "false" -o "$FOPDISABLE" == "FALSE" -o "$FOPDISABLE" == "False" -o "$FOPDISABLE" == "no" -o "$FOPDISABLE" == "NO" -o "$FOPDISABLE" == "No" ]; then
        stop_fop2
    fi; fi
    ;;
restart) $0 stop; $0 start ;;
start_fop2) run_fop2 ;;
stop_fop2) stop_fop2 ;;
restart_fop2) stop_fop2; run_fop2 ;;
kill) kill_amp ;;
chown) chown_asterisk ;;
*)
    if [ -z "$FOPRUN" -o "$FOPRUN" == "true" -o "$FOPRUN" == "TRUE" -o "$FOPRUN" == "True" -o "$FOPRUN" == "yes" -o "$FOPRUN" == "YES" -o "$FOPRUN" == "Yes" ]; then
    if [ -z "$FOPDISABLE" -o "$FOPDISABLE" == "false" -o "$FOPDISABLE" == "FALSE" -o "$FOPDISABLE" == "False" -o "$FOPDISABLE" == "no" -o "$FOPDISABLE" == "NO" -o "$FOPDISABLE" == "No" ]; then
        FOPUSAGE="start_fop2|stop_fop2|restart_fop2|"
    fi; fi
    echo "-------------FreePBX Control Script-----------------------------------------------"
    echo "Usage:       amportal start|stop|restart|${FOPUSAGE}kill|chown"
    echo "start:       Starts Asterisk and Flash Operator Panel server if enabled"
    echo "stop:        Gracefully stops Asterisk and the FOP server"
    echo "restart:     Stop and Starts"
    echo "start_fop2:   Starts FOP server and Asterisk if not running"
    echo "stop_fop2:    Stops FOP server"
    echo "restart_fop2: Stops FOP server and Starts it and Asterisk if not running"
    echo "kill:        Kills Asterisk and the FOP server"
    echo "chown:       Sets appropriate permissions on files"
    exit 1
    ;;
esac

Last but not least, let’s get permissions set and make the changes to the interface to show FOP2.

chmod 770 freepbx_engine
chown asterisk:asterisk freepbx_engine
cd /var/www/html/admin/views
cp panel.php panel.php.orig
vi panel.php
cd ../../user/templates/modules/04_fop
cp fop.tpl fop.tpl.orig
vi fop.tpl

In panel.php change the src to “../fop2/index.html”, and in fop.tpl change the src= to “../fop2/”. Now restart asterisk:

amportal stop
amportal start

Now you can test your installation by visiting http://host.domain.tld/fop2 and logging in with your extension.

I would also suggest you install the admin module, available on the FOP2 site, for changing passwords and managing what you want on and off the panel.

As always, if you need further assistance with this or any other open source application or issue, the experts at Pantek Inc. are available 24/7 at 216-344-1614 and 877-LINUX-FIX.

Netcat: Network Swiss Army Knife

Netcat is a networking utility which reads and writes data across network connections using the TCP/IP protocol. Because of its vast diversity of function, it has been called the network Swiss army knife.

It has a full suite of port-scanning capabilities and can be utilized in much the same way as nmap for standard security scanning, but unlike nmap it has features that far exceed network scanning. It was designed to be a tool that can be used directly or easily driven by other programs and scripts, and it can create and accept almost any kind of connection you can dream of. Have you ever piped a drive image over the network? With netcat you can; it earns the Swiss army knife name because it does just about everything.

Let’s start with a fairly simple example: a proxy. Let’s say for some reason you need a basic proxy. With netcat this is simple. A uni-directional proxy is easy; just use a pipe like this:

rweaver@core:~$ nc -l -p 5555 | nc www.pantek.com 80

However, this is of limited use in most cases so we need to look at making it bi-directional and the easiest way to do that is with the -c option like this:

rweaver@core:~$ nc -l -p 5555 -c 'nc www.pantek.com 80'

You now have a bi-directional proxy setup between port 5555 on the local machine and port 80 on pantek’s website.

How about some simple port scanning ala nmap? Easy enough…

rweaver@core:~$ nc -z core.domain.tld 1-32768

Need to do some simple bi-directional testing of a tcp port, say when you connect it sends some simple data back to you?

rweaver@core:~$ nc -l -p 5555 -c 'echo "You are the weakest connection, goodbye!"'

If your version of netcat is compiled with ‘GAPING_SECURITY_HOLE’ defined (which allows remote execution and may be a security risk to the system if used incorrectly), it can execute a command like this:

rweaver@core:~$ nc -l -p 5555 -e '/bin/bash'

Giving you the ability to open up a shell on port 5555 that is remotely connectible. Up to this point we’ve been using netcat in single-fire mode: you execute the command, netcat waits for a connection, does what it’s asked, and then terminates. There is a very good reason for this… netcat misused is a gaping security hole, as the previous example shows. You might find a need to perform an operation like that, but you absolutely wouldn’t want to let multiple unrestricted connections occur. If the service you are offering needs to be reconnectable (for example a server that sends a file multiple times), you can use -L instead of -l.

Speaking of the ability to send a file via netcat, how do we go about this operation?

rweaver@core:~$ cat filename | gzip -9 | nc -l -p 5555

Then on the client machine you would execute…

rweaver@mail:~$ nc core.domain.tld 5555 > filename.gz

The reason we compress the output is that netcat does no compression internally on its own. This is not absolutely necessary, but it typically speeds transfer times a notable amount.
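The compression step is just a standard pipe filter, so you can see the effect locally before committing to it over the wire. A sketch with a made-up, highly compressible sample (real savings depend entirely on your data):

```shell
# Compare raw vs gzip -9 sizes for a repetitive stream; log-like text
# compresses extremely well, already-compressed media barely at all.
yes "sample log line" | head -n 10000 > /tmp/sample.txt
wc -c < /tmp/sample.txt            # raw size
gzip -9 -c /tmp/sample.txt | wc -c # compressed size, far smaller here
rm -f /tmp/sample.txt
```

If the compressed size comes out near the raw size for your data, skip gzip and save the CPU.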

What about imaging a hard drive from one host to another? Easy with netcat.

root@core:~# dd if=/dev/sda | gzip -9 | nc -l -p 5555

Then on the client…

root@live-cd:~# nc core.domain.tld 5555 | gzip -d | dd of=/dev/sda

You should consider adding a block size option to the dd operation, as it will drastically reduce the transfer time. The second operation should be performed from a live CD so you’re not dealing with live information being overwritten.

One last trick with netcat before I close. Short a telnet client? Try netcat…

rweaver@mail:~$ nc -t host.domain.tld 23

With a bit of forethought, netcat is an extremely versatile application that can network-enable many operations that would normally be available only locally. It can substitute for a wide variety of applications, making it a great tool for a small distribution where you need to reduce the system footprint. Furthermore, because it handles pipes exceptionally well when mixing standard networking and system-level operations, it can often become an impromptu solution for a problem that might otherwise have required a traditional development effort.

If you need further assistance with netcat or any other open source application, the experts at Pantek Inc. are always available at 216-344-1614 and 877-LINUX-FIX.

ClamAV Segmentation Faults

We started receiving a high volume of calls today regarding Spamassassin, Amavisd, and ClamAV being broken on many mail systems, in many cases with ClamAV segmentation faults. Additional research revealed that the cause of this issue was a new signature added to the ClamAV daily definitions that exceeds the maximum length a signature can be in all versions of ClamAV up to and including version 0.94. You can read the announcement ClamAV made regarding this issue on their website.

Verbatim the announcement was as follows:

All ClamAV releases older than 0.95 are affected by a bug in freshclam which prevents incremental updates from working with signatures longer than 980 bytes.
You can find more details on this issue on our bugzilla (see bug #1395)

This bug affects our ability to distribute complex signatures (e.g. logical signatures) with incremental updates.

So far we haven’t released any signatures which exceed this limit.
Before we do we want as many users as possible to upgrade to the latest version of ClamAV.

Starting from 15 April 2010 our CVD will contain a special signature which disables all clamd installations older than 0.95 – that is to say older than 1 year.

This move is needed to push more people to upgrade to 0.95 .
We would like to keep on supporting all old versions of our engine, but unfortunately this is no longer possible without causing a disservice to people running a recent release of ClamAV.
The traffic generated by a full CVD download, as opposed to an incremental update, cannot be sustained by our mirrors.

We plan to start releasing signatures which exceed the 980 bytes limit on May 2010.

We recommend that you always run the latest version of ClamAV to get optimal protection, reliability and performance.

Thanks for your cooperation!

If your ClamAV segmentation faults when it attempts to start and it didn’t yesterday you’re likely experiencing the bug described here. When you attempt to start the daemon you should see an error that looks like this displayed on the console or in your logs:

LibClamAV Error: cli_hex2str(): Malformed hexstring: This ClamAV version has reached End of Life! Please upgrade to version 0.95 or later. For more information see and (length: 169)
LibClamAV Error: Problem parsing signature at line 742
LibClamAV Error: Problem parsing database at line 742
LibClamAV Error: Can’t load /tmp/clamav-cb68c98144521c30/daily.ndb: Malformed database
/etc/init.d/clamav-daemon: line 57: 2803 Segmentation fault start-stop-daemon --oknodo -S -x $DAEMON

You will also likely see some log entries like this if you’re running spamassassin or amavis:

Apr 16 09:53:29 mailserver amavis[2293]: (02293-01-7) Clam Antivirus-clamd: Can’t send to socket /var/run/clamav/clamd.ctl: Transport endpoint is not connected, retrying (1)
Apr 16 09:53:30 mailserver amavis[2320]: (02320-01-7) Clam Antivirus-clamd: Can’t connect to UNIX socket /var/run/clamav/clamd.ctl: No such file or directory, retrying (2)

First of all, "DON'T PANIC": there are two simple fixes for this problem that will get your mail flowing again IMMEDIATELY.

To implement either of these, you first need to disable freshclam and make sure ClamAV isn't running. Once you've done that, you can either remove the offending signature from the ClamAV daily.cvd or remove the daily.cvd file entirely. The first option is, of course, the better one, as you will keep all the other updates released up to today. The file should be located in one of these locations depending on your distribution and mail-server setup:


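As an illustration of the first option, the disabling record can be filtered out of an extracted daily database with grep. This is only a sketch: the file below is a stand-in with hypothetical contents, and on a real system you would operate on the extracted database in ClamAV's data directory:

```shell
# Stand-in daily.ndb with one good record and a hypothetical EOL record.
printf 'Good.Sig-1:0:*:aabbccdd\n' > daily.ndb
printf 'Eol.Marker:0:*:This ClamAV version has reached End of Life!\n' >> daily.ndb

# Keep every signature except the End-of-Life marker, then swap the file back in.
grep -v 'reached End of Life' daily.ndb > daily.ndb.clean
mv daily.ndb.clean daily.ndb
```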
After you have either edited the offending entries out of the file or deleted it entirely, you can restart ClamAV and your mail should start flowing again. Do not start freshclam again, as you will likely re-download the offending signature (or additional long records) and break mail again. If your mail has been down for several hours you may want to tell your mailer daemon to attempt redelivery of the queued-up mail:

For Postfix:

postqueue -f

For Sendmail:

sendmail -OTimeout.hoststatus=0m -q -v

For Exim:

exim -q -v

For QMail:

kill -14 (pid of qmail)

or if you have supervised qmail:

svc -a /service/qmail

This, however, is a short-term solution with two distinct disadvantages. The first is that you are no longer receiving virus updates. The second is that if you deleted the daily.cvd your virus definitions are not current, meaning you will be missing some newly identified viruses.

The real solution is to upgrade ClamAV (recompiling if necessary) and re-integrate it into your mail-scanner setup, and this can be a complex and daunting operation for many admins, especially on aged systems or heavily customized mail setups (qmail installations frequently fall into this category). There is no simple, universal recipe we can give you; every setup is somewhat unique. If you find this task is beyond your ability, or you need additional assistance getting your mail back into a production-ready clean state, give us a call: we've already helped a large number of people get their mail servers updated and back online, and we'll gladly help you do the same! As always, give us a call if you need further assistance with this or any other open source software; you can contact Pantek at 877-LINUX-FIX or 216-344-1614.

Emacs: The Digital Swiss Army Knife


“The purpose of a windowing system is to put some amusing fluff around your one almighty emacs window.” —

That quote does not begin to do justice to the power the Emacs editor offers its users. From integration with source-code management tools right inside the editor, to a large number of special editing modes for any number of languages, to a wide variety of other tools and utilities, Emacs takes the "kitchen sink" philosophy seriously. In this article we'll plumb the depths of this versatile, powerful editor and get you on your way to becoming an Emacs guru.

A bit of history is in order first, of course. Emacs began as a project by Richard Stallman, who was responsible for launching the GNU Project and the Free Software Foundation. Emacs development began in the 1970s and is still actively contributed to today despite the many modern IDEs that exist. GNU Emacs, which is the version I'll be referring to here, is currently at version 23.1. There was a fork of GNU Emacs in the early '90s which led to the development of XEmacs, but for all practical purposes XEmacs and GNU Emacs are quite similar. There are, of course, other versions of Emacs out there as well, and you're likely to find a lot of overlap in the extensions available for each one.

"How do I get Emacs?", you might ask. Most modern Linux distributions ship with Vi(m) but might not come with Emacs pre-installed. Depending on your Linux distribution, you may be able to run any of the following commands:

sudo emerge emacs

sudo apt-get install emacs23

sudo yum install emacs

sudo pacman -S emacs

I’ll wait while you install….


Ready? Great! Now that Emacs is installed you can fire it up by simply typing 'emacs' at your terminal. Depending on your configuration this may take a few seconds; of course, since you just installed it, you really haven't customized Emacs yet. When you start Emacs the first thing you are probably going to see is a few lines that begin with ";;". These are comments in elisp, the scripting language Emacs is written in (we'll get more into elisp later). Emacs has the notion of buffers: any time you load a file, create a new file, or get output from any of Emacs' utilities, the results go in a buffer. The buffer you see when you first start Emacs is known as the *scratch* buffer. It is provided for you to write down anything that you don't want to save (though you can save it to a file if you want) and as a convenient place to evaluate simple elisp expressions.

Of course the *scratch* buffer isn't going to be where you do all your work. So how do we open or create a file? Well, before we delve into that, it's time to discuss the command structure of Emacs. Unlike 'other' editors, you don't have to hit a key before you can start typing, which means there must be some other way to interact with the hundreds of Emacs commands. I introduce to you two keys known as modifier keys in Emacs, "Ctrl" and "Meta". These two keys will become your friends during your Emacs career. I'm sure you know where the "Ctrl" key is on your keyboard, but "Where is Meta?" you might ask. Well, on many systems 'Meta' translates into 'Alt'; you can also achieve 'Meta' by typing ESC. All commands will be referenced by a specific key sequence such as C-x C-f or M-x. This translates into Ctrl-x Ctrl-f (or, to think of it another way, hold down Control then type x f) and Alt-x. Emacs takes practice; there are many different key combinations built in, plus you have the option of defining your own. I highly recommend you keep a cheatsheet of common sequences until you are more comfortable with the editor.

Now we move on to files. I've already introduced the command to create or open a file, C-x C-f. If you enter that command in Emacs you'll notice at the bottom of your window the following line: "Find File: (CURDIR)", where CURDIR is the working directory of the current buffer (which, in this case, is the directory you started emacs from). You can now navigate to your file by typing its path. Emacs features autocompletion, so if you're not sure exactly what directory a file is in but have some idea, you can hit the <TAB> key to have Emacs complete the path for you or show you exactly what files and directories exist in the current path. When you've found your file, simply hit Enter and Emacs will load it in a new buffer, at which point you can start editing. Of course, chances are you're going to want to save your file. The command sequence for this is C-x C-s. If you enter this command you'll notice at the bottom that Emacs tells you it wrote the file, or that no changes needed saving.

When you're finished editing and want to quit Emacs, simply enter C-x C-c. Emacs will either exit if your files are up to date, or prompt you about whether or not you want to save any changed files. You can enter 'y/n' here for "Yes, save" or "No, don't save", 'q' to give up on all saves, C-g to cancel the command, or, most important of all, C-h for help.



You are now officially an Emacs user! Of course, this is only the tip of the iceberg as far as Emacs goes, so our next topic of discussion will be some basic navigation. Emacs offers a number of ways of navigating your document quickly and efficiently. Let's present some use-cases for you to consider:

  • You want to navigate to the previous line or next line?
    • C-p navigates to the previous line.
    • C-n navigates to the next line.
  • You want to navigate one character at a time?
    • C-b navigates back one character.
    • C-f navigates forward one character.
  • You want to navigate one word at a time?
    • M-f navigates forward one word.
    • M-b navigates back one word.
  • How about a sentence at a time?
    • M-a moves back one sentence.
    • M-e moves forward one sentence.
  • You want to move to beginning/end of the line?
    • C-a moves to the beginning of the line.
    • C-e moves to the end of the line.

There are other navigational commands as well. What if you know what line you need to go to? There’s a command for that! M-x goto-line will allow you to type in and navigate to any line in the current buffer. Now, I know what you’re saying. “But, I have to type all that just to go to a single line?” You’d be right, of course, that is a little much. I’ll address this in a future post!

What if you want to search for specific text? Emacs offers you multiple choices in this category as well. C-s lets you search forward through the document for particular text, while C-r searches backwards. Once you've typed in your query after hitting C-s or C-r, you can press either command repeatedly to keep searching if more than one instance of your query is present. You also have the option of doing a regular-expression search, which can be accomplished using C-u C-s. Let's say I wanted to search this document in Emacs for the following: "X one Y", where X is either forward or back and Y is character, word, or sentence. I can enter C-u C-s, type
\(forward\|back\) one \(word\|character\|sentence\), and use C-s and C-r to navigate all the matches!

"But, can I replace text?" It's Emacs, of course you can! There are multiple ways of accomplishing this task depending on your need. Let's say I wanted to replace all instances of Vim with Emacs. Pretty straightforward, since we have a simple string to replace: we can go ahead and type M-x replace-string, enter Vim at the "Replace string" prompt and Emacs at the "Replace string ... with" prompt, and voila! All instances of Vim are now Emacs. You may also want to perform more complicated replacements using regular expressions. Let's say we have a document that references Word, Nano, Joe, and Vim and we want to replace all those text editors with a single one: Emacs. Well then, we can fire up M-x replace-regexp and enter \(Word\|Nano\|Joe\|Vim\)
for the first prompt, which is the regular-expression pattern we want to match against. Enter Emacs for the second prompt, which is what each regex match will be replaced with, and in the words of Emeril Lagasse, "BAM!", we've replaced all the matches. Emacs also provides two additional commands which can be entered via M-x, query-replace and query-replace-regexp, which function nearly the same way as the commands above except that they prompt you at each match for how you want to proceed.

Deleting Text

Emacs also offers commands that make it easy to delete text. You can use C-d to delete a single character or M-d to delete an entire word. These perform deletions forward (meaning the cursor must be before the word or character you want to delete). You can, of course, use [DEL] to perform a forward character delete and [BACKSPACE] to delete backwards; M-[DEL] performs a backwards word delete. If you need to delete an entire line, C-k will delete everything on the current line after the cursor, so a C-a C-k combination will easily erase the current line by moving to the beginning and performing the delete.

Copy and Paste

Copying and pasting is a little more involved. The main point to keep in mind is that Emacs uses the notion of a "mark", which indicates where a region begins. If you want to copy or cut text, you first mark the beginning of the region, then navigate to the end of the region and perform the actual cut/copy operation. You can set the mark with C-[SPACE]. Once you've navigated to the end of the region you want to modify, you can use C-w to 'cut' or 'kill' the marked region. This removes it from your buffer and makes it available for 'yanking', or pasting, back into the document elsewhere, which you can accomplish with C-y. Of course, if you want to simply copy text rather than cutting it from the document, follow the same procedure but use M-w instead.

I wish I could say that was everything, but we're only getting started. You're on your way to becoming an Emacs guru, so I welcome you to the growing list of Emacs users. Before you know it you'll be dreaming in elisp and out-typing Vim users. Stay tuned for future articles, which will cover more advanced features of Emacs. If you need further assistance, Pantek Inc. is always available at 216-344-1614 and 877-LINUX-FIX.

Using nmap for basic troubleshooting and security auditing

One of the most basic and most useful tools in the network and security toolkit is nmap, a tool designed to scan a remote host and tell you what services it has running, what operating system constructed the TCP packets on the host, and what ports are being filtered by a firewall.

In its simplest form, you can invoke nmap like this (note that I am running it via sudo because many nmap scans require root-level access):

rweaver@core:~$ sudo nmap localhost

Starting Nmap 4.62 ( ) at 2010-03-18 10:48 EDT
Interesting ports on localhost.localdomain (
Not shown: 1708 closed ports
21/tcp   open  ftp
22/tcp   open  ssh
25/tcp   open  smtp
80/tcp   open  http
443/tcp  open  http
783/tcp  open  spamassassin

Nmap done: 1 IP address (1 host up) scanned in 5.680 seconds

What this does is scan the host using nmap's defaults, which are pretty good. If you want additional information, you can invoke nmap like this:

rweaver@core:~$ sudo nmap -O -sV localhost

Starting Nmap 4.62 ( ) at 2010-03-18 10:48 EDT
Interesting ports on localhost.localdomain (
Not shown: 1708 closed ports
21/tcp   open  ftp          ProFTPD 1.3.1
22/tcp   open  ssh           (protocol 2.0)
25/tcp   open  smtp         Exim smtpd 4.69
80/tcp   open  http         Apache httpd 2.2.9 ((Debian) mod_ssl/2.2.9 OpenSSL/0.9.8g)
443/tcp  open  http         Apache httpd 2.2.9 ((Debian) mod_ssl/2.2.9 OpenSSL/0.9.8g)
783/tcp  open  spamassassin SpamAssassin spamd

Device type: general purpose
Running: Linux 2.6.X
OS details: Linux 2.6.17 - 2.6.23
Uptime: 461.941 days (since Thu Oct 11 12:13:58 2008)
Network Distance: 0 hops
Service Info: Host: core.domain.tld; OS: Unix

OS and Service detection performed. Please report any incorrect results at .
Nmap done: 1 IP address (1 host up) scanned in 11.680 seconds

This provides additional information on the versions of each of the services and attempts to the best of its ability to identify the operating system in use. This is useful for determining if you may be running an exploitable version of an application or if a service has been restarted after an update.

Now let's take a look at a device that denies this kind of scanning (note: I altered the IP):

rweaver@core:~$ sudo nmap -sV -O

Starting Nmap 4.62 ( ) at 2010-03-18 12:10 EDT
Note: Host seems down. If it is really up, but blocking our ping probes, try -PN
Nmap done: 1 IP address (0 hosts up) scanned in 3.118 seconds

So let's re-invoke nmap with the options to work around the router blocking our probes and see what we get:

rweaver@core:~$ sudo nmap -sV -O -PN

Starting Nmap 4.62 ( ) at 2010-03-18 12:10 EDT
Interesting ports on (
Not shown: 1713 filtered ports
44/tcp   open  ssh     Dropbear sshd 0.51 (protocol 2.0)
8080/tcp open  http    Linksys wireless-G WAP http config (Name aker)
Warning: OSScan results may be unreliable because we could not find at least 1 open and 1 closed port
Device type: WAP
Running: Asus embedded, Linux 2.4.X
OS details: Asus WL-500gP wireless broadband router, Buffalo WHR-HP-G54 WAP or Linksys WRT54GL WAP running DD-WRT Linux 2.4.20 - 2.4.34
Uptime: 68.916 days (since Fri Jan  8 13:13:52 2010)
Service Info: Device: WAP

OS and Service detection performed. Please report any incorrect results at .
Nmap done: 1 IP address (1 host up) scanned in 110.097 seconds

Nmap did, in fact, accurately identify the device even while the device was attempting to prevent us from gathering information; furthermore, it also showed the only open ports running on it.

If you want to limit nmap to certain ports, or include ports it's not scanning by default, you can do so with the -p (port number or range) option. If you want to scan multiple hosts you can specify them in CIDR notation or as a list, and excluding a specific host is as easy as naming it with --exclude.
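A few hypothetical invocations illustrating those options (the ports, hostnames, and networks here are placeholders, and these are sketches rather than commands to paste verbatim):

```shell
sudo nmap -p 22,80,443 host.domain.tld          # scan only these three ports
sudo nmap -p 1-1024 192.168.1.0/24              # a port range, across a CIDR block
sudo nmap 192.168.1.5 192.168.1.9 192.168.1.12  # an explicit list of hosts
sudo nmap 192.168.1.0/24 --exclude 192.168.1.1  # scan the block, skip the router
```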

There are also several front ends to nmap, typically included in the repositories for your distribution (CentOS/Fedora and Debian/Ubuntu at least). The commonly used ones are 'nmap-frontend' or 'nmap-fe', and if you're running KDE there is also knmap. One of the spectacular features of nmap-fe is that it shows you the command-line version of the command you're about to run, which is often useful long term in teaching you the more advanced features of the CLI.

As always, if you need further assistance with this or any other open source software, contact Pantek at 877-LINUX-FIX or 216-344-1614.

Learning the Basics of Vi & Vim

When you say 'vi', one of two things occurs with users: a look of utter panic, because they once got into the program and couldn't get out without killing the terminal session and couldn't figure out how to make changes to files; or, alternately, a grin as they remember taking on some job that looked overwhelming and finding it much easier to finish because of vi. There are a couple of strong cases for using vi as your primary editor. The first and most obvious is that some revision of vi is included in almost every *nix system under the sun. From BSD, to Solaris, to Linux, to SCO, to whatever... if it's a *nix, it's likely got a copy of vi stashed on the drive for editing configuration files. Even Mac OS X, which is based on a heavily modified BSD core, has a version of vi installed (vim). The functions of each version are amazingly similar, but they do contain subtle differences in feature sets.

The first thing to understand about vi and all of its descendants is that they are 'modal editors': there is a distinct insert mode and a command mode, and the editor can be manipulated through a variety of means, most of which don't involve removing your hands from the keyboard. For an experienced vi user who is also a touch typist, this greatly increases editing speed compared to a traditional non-modal editor, since your fingers rarely need to leave the home row.

For our purposes we're going to assume that if you're going to seriously use vi as your daily editor, you're going to want portability, configurability, extensibility, and syntax highlighting. The choices include both a GUI version and a command-line version, which brings us to the big brother of the vi world: Vim and GVim (Vi IMproved and Graphical Vi IMproved). They're the default on a large number of distributions and installable on all distributions of Linux. Vim is also the default on the Mac and is available on Windows as well, in both CLI and graphical modes.

Let's start with the very basics: opening and closing files. Opening a file is as easy as calling vim with an argument, like: vim filename.txt. Once you have vim open you'll be looking at a screen that is blank (if the file was empty) or contains the file you specified. From this point, if you wish to quit the editor without saving, it's very easy to do: simply type :q. If you've made changes to the document while having it open, you will need to either write the changes out before exiting or tell vim that you're sure you want to quit without saving. To write the changes before quitting you can execute :w, or to write and immediately quit: :wq. If you wish to tell vim to quit without saving your changes it's as easy as :q!. Let's say you want to open a file while you're in vim. This is trivial: :e filename.txt. If you want to save your modifications to a new file you can do so with the w command, like so: :w

Navigation in vim is a bit strange for new users, but once you learn the basics it's considerably easier. While in command mode you can use either the arrow keys or the following:

h (left)
j (down)
k (up)
l (right)

If that movement isn't rapid enough for you, you can jump by half or full pages using:

[CTRL]d (down half a page)
[CTRL]u (up half a page)
[CTRL]f (forward one full page)
[CTRL]b (back one full page)

You can also skip to the beginning and end of a line using the ^ and $ commands, although in the case of ^ it actually jumps to the first non-whitespace character on the line. If you want to move to the actual first character of the line you use 0. Another useful way to move through a file, especially when you're working with code, is by line number; to do that you can enter :linenumber. For example, :303 will take you to line 303. To move to the start of a file you can use the command 1G, and moving to the end is simply G. G can also be used to jump to a specific line: 202G, for instance, will take you directly to line 202.

Now let's look at actually editing the contents of a file. To enter insert mode at your current cursor location you simply press i, and from that point anything you type will be entered into the document until you hit [ESC] to go back to command mode. If you want to delete the character under the cursor you can hit x in command mode. If you want to delete a line it's as easy as dd, and deleting 10 lines would be d10d. To delete from your current line to the end of the file is as easy as dG, and from the current line to the beginning of the file it would be d1G. To delete from the cursor to the end of the current word you can use de. If you wish to change a single character you can hit r and type the new character, and if you want to replace all remaining characters starting at the current one you can use R. Need to change the case of a character? Try hitting ~ while on it.

Search and Replace
That brings us to our final important piece of editing. Vim supports regular expressions so from inside the editor you can execute just about any regex without needing to rely on an external utility. A good example would be: :%s/domain.tld/, which says search for domain.tld and replace it with for all occurrences. If you want to search for something downward in the file you can type: /whattofind and it will show you the next occurrence of that expression in the file, while: ?whattofind will show you the previous occurrence. While these examples just skim the surface of regular expressions and search and replace features in vim, the subject is broad enough on regex alone to write an entire book, and I won’t delve into that in this article.

Locales and Encodings
One thing to be very careful of when mixing environments is watching the locales and character sets you're using, or you can end up with whitespace-character errors that cause inexplicable issues that are hard to diagnose.

Vim will use the operating system's locale by default. You can change this with: :set encoding (what vim uses internally while editing), :set fileencoding (what the currently open file was encoded as, and what vim will save it as), :set fileencodings (a list of encodings vim will try when opening a file), and :set tenc (the terminal encoding that will be used for display).

So when creating a new file, vim will use the server’s locale by default when editing the file, and will save the file to that encoding. When opening a file already created, vim will still use the encoding setting to edit internally, but will try to automatically determine the existing file’s encoding by trying the encodings listed in the fileencodings setting, in order until it finds one that encapsulates the document. All of these should be checked to ensure that your documents are being encoded and read the way that you need them to be. These settings can also be included in your .vimrc file to load every time.

Despite what you have set the encodings to, you will not actually see the correct encoding when viewing the file if the encoding does not either match the environment you are viewing in, or does not match the translation that putty is expecting (if you are using putty to connect). It does not matter what your encodings are stored and edited as, as long as you set the :set tenc value to what the viewer is expecting. Vim will handle translating from one to the other. I recommend setting Putty to UTF8, which is set under Windows/Translation. The font should also be set to a font that includes the unicode set, for which I recommend Courier New.

So if your operating system's locale is UTF-8 (try running the command locale to find out), you can set your .vimrc to have encoding=utf-8, tenc=cp1252 (or whatever your putty is set to), and fileencodings=utf-8,latin1 (the default), and you will be opening files correctly, seeing them translated to your putty encoding, and saving them as they were read. An important note here: even if your terminal (putty) is receiving the data in the encoding it expects (cp1252, for example), and the data was converted from utf-8 by vim, if the font you have selected cannot display the full encoding (cp1252), then you will not see what you are expecting.

There is a problem with this setup. Although it is convenient (since you do not have to change putty back and forth) and you can just set the .vimrc with a proper tenc setting, the encoding you are using on the terminal will likely be a subset of utf-8, so you cannot guarantee you will see all of the characters properly. It is therefore best to set putty to Window/Translation UTF-8, leave tenc unset in vim, ensure that the Linux locale is UTF-8 and that files are being read and written (or created) as utf-8 by checking the :set fileencoding and :set encoding settings, choose a font in putty that has a full set of characters, such as Courier New, and, while not in an active session, change Connection/Data/Terminal Details to linux instead of xterm. This should also correctly display UTF-8 line-drawing characters as well as text.
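To make the recommended setup stick, the settings can go in your .vimrc. A minimal sketch, assuming the PuTTY-to-UTF-8 arrangement described above (merge this with any existing settings rather than appending blindly):

```shell
# Append the encoding settings discussed above to ~/.vimrc.
cat >> ~/.vimrc <<'EOF'
set encoding=utf-8
set fileencodings=utf-8,latin1
" tenc deliberately left unset: the terminal (PuTTY, Window/Translation = UTF-8)
" is expected to handle the display encoding itself.
EOF
```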

Further Reading
A resource I can't recommend enough is the vi/vim book from O'Reilly: "Learning the vi and Vim Editors" by Arnold Robbins, Elbert Hannah, and Linda Lamb. It's a spectacular resource and an excellent learning tool when it comes to vi/vim.
As always, if you need further assistance with this or any other open source software, contact Pantek at 877-LINUX-FIX or 216-344-1614.

Flexible configurations using the .htaccess file

The .htaccess file is a distributed configuration file for Apache that provides a way to make changes on a per-directory basis. When placed in a specific directory, the changes dictated by the file apply only to that directory and its sub-directories, enabling users, rather than administrators, to configure the behavior of Apache when allowed.

This is especially useful in a shared hosting environment where the average user doesn't have access to the actual configuration files. If you do have access to the configuration files, any change made in the .htaccess file can be made in the main configuration files instead, which is preferable as it provides better performance.

The main configuration file of Apache is the ultimate arbiter of what is and is not allowable for the .htaccess file to control, and an .htaccess file in a deeper directory overrides settings inherited from the directories above it. If you wish for the .htaccess file to be able to change almost anything, use the directive "AllowOverride All" in the Apache configuration. You can also grant only specific rights, like "AuthConfig", to allow the user to password-protect specific areas.
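For example, to grant only the authentication rights mentioned above, the main configuration might contain something like this (a sketch; the directory path is a placeholder):

```apache
<Directory /var/www/domain.tld/htdocs>
    AllowOverride AuthConfig
</Directory>
```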

One of the most basic features most people look for is the ability to password-protect a particular directory, and doing this is relatively easy. Change to the web-accessible directory you want to protect and create an .htaccess file with the following information in it:

AuthType Basic
AuthName "Protected Location, Credentials Please?"
AuthUserFile /var/www/domain.tld/htdocs/protected/.htpasswd
Require valid-user

Then execute the following command:

htpasswd -c /var/www/domain.tld/htdocs/protected/.htpasswd testuser

At this point it will ask you to type the password for the user and will create an entry in the .htpasswd file. You only need the -c flag when initially creating the file; after the first time you can simply specify the file name and the user to add to it.
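If you'd rather add users non-interactively (from a script, say), htpasswd's -b flag accepts the password on the command line; and where htpasswd isn't installed at all, an equivalent Apache-MD5 entry can be generated with openssl. A sketch, using a hypothetical user and password and writing the .htpasswd to the current directory (point the path at your real AuthUserFile):

```shell
# Append an Apache-MD5 ($apr1$) entry for user "testuser2" to .htpasswd.
# The user name and password here are placeholders.
printf 'testuser2:%s\n' "$(openssl passwd -apr1 secret)" >> .htpasswd
```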

If you attempt to access your site now you will be prompted for a user name and password. Enter the ones you created in the previous step and you will be allowed access; you will not be asked again this session unless you clear your browser's cached credentials.

Having authentication on the fly is useful, but there are even more useful variations that can provide open access from known locations and password-protected access from unknown ones. For example, we'll start with our previous example and add the ability for anyone on the local LAN to access it without a password:

AuthType Basic
AuthName "Protected Location, Credentials Please?"
AuthUserFile /var/www/domain.tld/htdocs/protected/.htpasswd
Require valid-user
Order deny,allow
Deny from all
Allow from 192.168.
Satisfy any
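Note that Order, Deny, Allow, and Satisfy belong to the Apache 2.2 access-control model; in Apache 2.4 they are deprecated (kept working only by mod_access_compat). On a 2.4 server the equivalent, as a sketch, uses Require directives instead:

```apache
AuthType Basic
AuthName "Protected Location, Credentials Please?"
AuthUserFile /var/www/domain.tld/htdocs/protected/.htpasswd

# Grant access if EITHER condition matches: local subnet OR valid login
<RequireAny>
    Require ip 192.168
    Require valid-user
</RequireAny>
```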

Now anyone attempting to access the protected content will have to meet one of two criteria: either A) be on the 192.168.x.x subnet, or B) know a correct user name and password.

The .htaccess file is not limited to authentication changes, however; you can also rewrite URLs and redirect traffic if you wish:

Redirect 301 /old http://www.domain.tld/new

What this code says is: permanently (301) redirect requests for /old to http://www.domain.tld/new. This is often useful if you have changed the management software on the website or made significant changes to the layout. That said, you can rarely use something that simple, because requests frequently carry query strings (something following the base URL, like ?app=4&site=domain.tld&ref=9), and redirecting those is a bit more complex:

RewriteEngine on
RewriteBase /
RewriteCond %{QUERY_STRING}    ^article=0001$
RewriteRule ^main.php$ /article1 [R=301,L]

What this says is: redirect any request for http://www.domain.tld/main.php?article=0001 to http://www.domain.tld/article1. It’s fairly easy to extend this to capture part of the query string into the resulting URL, allowing you to redirect uniform URLs en masse to the correct location.
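As a sketch of that extension (the /article… URL scheme is just this article’s example), the following captures the article number from the query string and reuses it in the target path; the trailing ? on the substitution strips the old query string from the redirect:

```apache
RewriteEngine on
RewriteBase /
# Capture the numeric article id from the query string...
RewriteCond %{QUERY_STRING}    ^article=([0-9]+)$
# ...and reuse it via %1; the trailing ? drops the original query string
RewriteRule ^main.php$ /article%1? [R=301,L]
```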

I’ll share one more tidbit regarding the .htaccess file: you can also use it to prevent people from hot-linking your images and stealing your bandwidth.

This doesn’t always work, because you need to make exceptions for browsers and proxies that strip out the referer header (the misspelling is intentional; it was misspelled in the RFC and has remained the same since). The code to do this is pretty easy:

RewriteEngine On
RewriteCond %{HTTP_REFERER} !^http://(.+\.)?domain\.tld/ [NC]
RewriteRule .*\.(jpe?g|gif|bmp|png)$ - [F]

What this rule says is: for any image request whose referrer doesn’t originate from your domain, return a Forbidden response. You could further modify this to rewrite the requested URL and serve an image of your choice instead. You could also allow blank referrers to access the site resources, or specifically deny a particular site you don’t want hot-linking you (for instance, Facebook or a competitor’s site).
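As a sketch of those variations (the /images/hotlink.png path is a hypothetical placeholder image you would create yourself), this version lets empty referrers through and serves a substitute image instead of a 403:

```apache
RewriteEngine On
# Let requests with no referrer through (some browsers/proxies strip it)
RewriteCond %{HTTP_REFERER} !^$
# Block anything not coming from our own domain
RewriteCond %{HTTP_REFERER} !^http://(.+\.)?domain\.tld/ [NC]
# Don't match the placeholder itself, or we'd loop forever
RewriteCond %{REQUEST_URI} !hotlink\.png$
RewriteRule .*\.(jpe?g|gif|bmp|png)$ /images/hotlink.png [L]
```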

The flexibility the .htaccess file offers is massive: it will allow you to build better sites and, after a reorganization, optimize your existing sites to make better use of incoming links. The sky’s the limit, and with a bit of care and work you can create just about any rule set you can imagine.