Simple Linux HA web Cluster with minimal resources required

All you need is a single extra server in front of your web servers to provide a High Availability Network Cluster.

This example will provide Apache HA without anything complex like session handling, and will focus on the RHEL flavor of Linux High Availability.

The packages required:

You will need a load balancer machine to run Linux HA. In this example we will use Piranha and set the load balancer up in a weighted least-connection configuration.

On the load balancer server, install the piranha and ipvsadm packages.
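On RHEL and its derivatives these can be installed with yum; this is a minimal sketch assuming the packages are available in your configured repositories:

```shell
# Install the Piranha load-balancer tools and the IPVS administration utility
yum install piranha ipvsadm
```

If you want the load balancer to come back after a reboot, you can also enable the pulse service with chkconfig.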

The required lines for the configuration of the load balancer are in the file: /etc/sysconfig/ha/

primary = ip.address.of.this.server
service = lvs
keepalive = 6
deadtime = 18
network = direct
debug_level = 0
virtual balanced_www {
     address = public.ip.address eth0:0
     vip_nmask = netmask.of.public.ip.address
     active = 1
     port = 80
     send = "GET / HTTP/1.0\r\n\r\n"
     expect = "HTTP"
     load_monitor = none
     scheduler = wlc
     protocol = tcp
     timeout = 5
     reentry = 10
     server webserver1 {
          address = ip.address.of.webserver1
          active = 1
          weight = 1
     }
     server webserver2 {
          address = ip.address.of.webserver2
          active = 1
          weight = 1
     }
}

Beware: the file above will fail if it contains any additional whitespace or any comments.
Also, do not worry about the eth0:0 IP address; lvs will bring it up and down as required when you start the daemon.

On both webservers you need the following kernel settings (in /etc/sysctl.conf, so they persist across reboots):

net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.eth0.arp_announce = 2

Now run sysctl -p; this will activate the new settings, which will also be reloaded automatically on reboot.
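You can read the values back to confirm they took effect; each should report the value set above (1 or 2):

```shell
# Verify the ARP-related kernel parameters are now active
sysctl net.ipv4.conf.all.arp_ignore
sysctl net.ipv4.conf.all.arp_announce
sysctl net.ipv4.conf.eth0.arp_ignore
sysctl net.ipv4.conf.eth0.arp_announce
```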

You will also need the following arptables_jf rules, which you can add before the ifconfig line in the /etc/rc.d/rc.local file.

On webserver1

arptables_jf -A IN -d ip.address.of.webserver1 -j DROP
arptables_jf -A OUT -d public.ip.address.of.webserver -j mangle --mangle-ip-s ip.address.of.webserver1

On webserver2

arptables_jf -A IN -d ip.address.of.webserver2 -j DROP
arptables_jf -A OUT -d public.ip.address.of.webserver -j mangle --mangle-ip-s ip.address.of.webserver2

On each webserver you need to assign an alias to the Ethernet device that ip.address.of.webserver is on, and give that alias the public.ip.address.of.webserver. You can do this by adding the following line to /etc/rc.d/rc.local:

ifconfig eth0:0 public.ip.address.of.webserver netmask.of.ip.address up

Don’t worry about having the same IP address on three servers; the only one recognized on the network is the one on the load balancer.
Once this is done, start the load balancer by running pulse on it:

service pulse start
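To confirm the virtual service came up, you can inspect the IPVS routing table directly; both real servers should be listed under the virtual service:

```shell
# List the current LVS virtual services and their real servers,
# using numeric addresses and showing connection counts
ipvsadm -L -n
```

With direct routing, each real server entry should show a forwarding method of Route.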

Now you have a webserver that is load balanced and highly available.

To test this, watch the logs on the load balancer:

tail -f /var/log/messages

Now stop one of the webservers, or reboot it. Nanny will detect the connection failure and remove that webserver from the cluster.

This configuration works the same for up to 256 webservers; you just need to replicate the server blocks in the configuration file and set up the additional servers the same way you set up the two above.

Pantek Achieves Green Plus Certification

The Institute for Sustainable Development announced on November 11th that Pantek was one of four organizations to achieve full Green Plus™ Certification in the third quarter of 2010. The Institute for Sustainable Development’s Green Plus™ program recognizes smaller enterprises’ dedication to triple bottom line sustainability by measuring their business, environmental and community practices. Green Plus™ Certification is the Institute’s highest level of recognition.

“Going through Green Plus certification really opened our eyes to sustainability issues and to the environmental cost of doing business. Now that we know, we’re taking the necessary steps to minimize that cost — because it’s the right thing to do,” concluded Barry Zack, president of Pantek.

Since moving to a new building in 2008, Pantek has shown a strong commitment to sustainability and to improving its green practices. In choosing a location for its new offices, a major consideration was the amount of available natural light; all offices and cubicles have sources of sunlight. Notably, in order to reduce environmental strain from commuting, Pantek has also developed a program in which employees who demonstrate excellence in their work may work from home one or more days per week. As an IT company, computers are essential to Pantek’s business; to fill this need it utilizes power-efficient computers with rated energy-saving power supplies.

“We’re proud of the progress we’ve made in reducing our environmental impact, and we’ll work diligently to expand our efforts,” said Linda Zack, V.P. of Marketing. “We’ve cut our carbon footprint significantly by making improvements in our office and our hosting center, by reducing employee commutes through work-at-home programs, and made a corporate commitment to improving the local environment through tree planting.”

About the Institute for Sustainable Development:

The Institute for Sustainable Development is a North Carolina Research Triangle non-profit initiative of universities and chambers of commerce to help small businesses and their communities benefit from sustainable practices and to foster a new generation of sustainability leaders. The Institute’s North Carolina university partners include Duke University’s Nicholas School of the Environment, Durham Technical Community College, Elon University’s Martha and Spencer Love School of Business, North Carolina State University and the University of North Carolina at Chapel Hill.

The Institute partners with the American Chamber of Commerce Executives, a network of 1,400 chambers of commerce with 1.3 million business members, to offer Green Plus™ nationwide, and with seventeen regional chambers of commerce to provide sustainable business education in twelve states. It also runs a national Green Plus™ Sustainability Fellows program, training students from graduate schools around the country to help smaller enterprises understand and benefit from the triple bottom line.


Customizing your Spamassassin mail filtering

Thousands of mail servers pass millions of messages through Spamassassin every day, yet most of these installations use only the standard Spamassassin rules and aren’t taking advantage of many of the great features Spamassassin has to offer.

The first thing you need to know about enabling custom rules and setting up Spamassassin in general is this: do your work in your local configuration file, or in unique files called from it, and do NOT change the default rules files. If you modify the stock rules, your changes will be overwritten when there are updates.

If you open your local configuration file you will see a default file with some common features commented out. The first feature we want to enable is Bayesian filtering. You can do this by adding the following to the bottom of the file:

use_bayes 1
#bayes_auto_learn 1
bayes_ignore_header X-Bogosity
bayes_ignore_header X-Spam-Flag
bayes_ignore_header X-Spam-Status

This will enable Bayesian filtering and prevent it from relying on headers that may be forged or set by Spamassassin itself in other areas. In addition, we tell the Bayesian filter NOT to auto-classify messages as spam or not spam based on statistical analysis… this is a feature we will want eventually, but not until we have at least a few hundred messages classified as each spam and ham.

How do we do manual classification of messages? We use the sa-learn program… it’s very easy. On the server, open your mailbox in mutt (mutt -f /path/to/box/or/Maildir), then save the messages that are SPAM into a spam folder and the messages that are HAM into a ham folder. Once we have done this for a few hundred messages, we run sa-learn against them to classify them.

core:~# sa-learn --showdots --spam /home/testuser/Maildir/spammy
core:~# sa-learn --showdots --ham /home/testuser/Maildir/goodmail

This will teach Spamassassin about the specific messages we are receiving… and teach it about mail it shouldn’t mark as spam. Once we have at least 100 of each, we can turn bayes_auto_learn on by uncommenting it in the file. Once we do this, Spamassassin will mark items as spam and ham automatically when they fall at the extreme ends of the scale. There are some additional useful items to add for bayes filtering as well:

bayes_auto_learn_threshold_spam 10
bayes_auto_learn_threshold_nonspam 0.50

These tell Spamassassin that a message must score at least 10 before being added to the bayes database as spam, and must score lower than 0.50 to be auto-classified as ham. These are good starting values that can later be tweaked downward as the system becomes more familiar with both types of messages your network receives.
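You can check how many messages the Bayes database has learned so far with sa-learn; the nspam and nham counts it reports tell you when you have enough classified mail to enable auto-learning:

```shell
# Show Bayes database statistics, including learned spam/ham message counts
sa-learn --dump magic
```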

If your mail server is single-purpose and doesn’t have a wide variety of perl libraries, you should have at least Net::DNS and its prerequisites installed to correctly utilize Spamassassin’s RBL checks.

The next item to review is DNS block-lists. Many people feel strongly about these, either positively or negatively. To enable them, simply add the following to your configuration file:

skip_rbl_checks 0

to disable:

skip_rbl_checks 1

You can further customize which lists you wish to set scores for by giving the actual rule name of the list a score of zero. For example, if you wish to disable the dynamic IP address check for SORBS, you would find it in the rules files:

header RCVD_IN_SORBS_DUL        eval:check_rbl('sorbs-lastexternal', '', '')
describe RCVD_IN_SORBS_DUL      SORBS: sent directly from dynamic IP address
tflags RCVD_IN_SORBS_DUL        net

This shows the rule name is RCVD_IN_SORBS_DUL. To disable it in your configuration, you would add a line like this:

score RCVD_IN_SORBS_DUL 0
while still keeping the rest of the dnsbls intact.

A further thing which many people neglect to do is alter the weighting of the rules. This is as easy as determining the rule’s name and applying a score to it in your configuration file:

score RULE_NAME newscore

For example, if you wanted to alter the weighting for the rules related to some types of viagra spam, you might add the following:

score FM_VIAGRA_SPAM1114 4
score DRUG_ED_CAPS 2

The last thing you might want to do is write a filter for a custom type of spam you’re receiving. Let’s assume you’re frequently getting spam for fooglesnaps. The rule for this is very easy to write.

body LOCAL_FOOGLESNAPS_RULE /\bfooglesnaps\b/i
score LOCAL_FOOGLESNAPS_RULE 0.5
describe LOCAL_FOOGLESNAPS_RULE  This is to help catch the fooglesnaps spam.

What this rule says is: any time you see fooglesnaps, regardless of case (case insensitive), with a word break on both sides, this rule applies. The next line sets the base score for the rule, in this case half a point. Always start around 0.1-0.5 and work your way up to make sure you don’t mark mail incorrectly. The final line is just a human-readable description of the rule. You can use pretty much any perl regex to filter your spam. The sky is the limit; however, you should be sure you NEED a new rule and can’t just use a black- or white-list to correct your delivery issue, because rules are more expensive in CPU and memory than simple black and white lists.
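Whenever you add or change rules, it is worth checking your configuration for syntax errors before putting it into production, and running a saved message through the filter to see which rules fire. The message path below is a placeholder:

```shell
# Check all configuration and rule files for syntax errors
spamassassin --lint

# Run a saved test message through the filter in test mode,
# which appends a report showing the rules that matched
spamassassin -t < /path/to/sample-message
```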

This covers some of the basic and common setup tasks for a new Spamassassin installation that many users miss or don’t enable. As always, if you need further assistance with this or any other open source application or issue, the experts at Pantek Inc. are available 24/7 at 216-344-1614 and 1-877-LINUX-FIX! We look forward to working with you.