Add a GoDaddy SSL Certificate to Tomcat 6

Here are some steps that should work for most installations on recent Tomcat builds.

This procedure can easily be adapted to any other certificate authority; only the names and number of certs to import may change.

These steps will assume that you are using the default .keystore file located in tomcat’s home directory (/usr/local/tomcat6/ for example).

If this keystore exists, I would recommend removing it.

Also, you need to ensure that you are using the keytool binary that belongs to the version of Java that Tomcat is referencing.

When starting or stopping Tomcat (/usr/local/tomcat6/bin/catalina.sh stop/start), it will output the current JRE_HOME settings. This will tell you the path you should use to point to keytool. For example /usr/java/jdk1.6.0_14/bin/keytool

To see the available binaries, try:

locate keytool

The one used by just typing keytool may not be the correct binary.

Once you have determined this, become the tomcat user (or whatever user runs Tomcat/Catalina)

su - tomcat

Using the full path to keytool, create the keystore and server key.

/usr/java/jdk1.6.0_14/bin/keytool -genkey -alias server -keyalg RSA

Answer the questions. First and last name are the CN, or website Fully Qualified Domain Name. In the case of a wildcard cert request, this would be *.domain.name

Enter the same password each time you are prompted.

Create the CSR:

/usr/java/jdk1.6.0_14/bin/keytool -certreq -alias server -file csr.txt

This file, csr.txt, now contains the text you need to copy and paste into whatever form is provided for submitting your Certificate Signing Request.

Once this is submitted, you will be given a set of files. Specifically, a CA root cert which, for most certificate authorities and distributions, should not be needed, as it is usually already installed on the system – though to be safe, you can import it. From GoDaddy, you will also receive an intermediate and a cross cert, plus the wildcard cert for your domain(s).

Import the root certificate:

/usr/java/jdk1.6.0_14/jre/bin/keytool -import -alias root -trustcacerts -file valicert_class2_root.cer

Import the cross certificate:

/usr/java/jdk1.6.0_14/jre/bin/keytool -import -alias cross -trustcacerts -file gd_cross_intermediate.cer

Import the intermediate certificate:

/usr/java/jdk1.6.0_14/jre/bin/keytool -import -alias intermed -trustcacerts -file gd_intermediate.cer

Import the wildcard certificate:

/usr/java/jdk1.6.0_14/jre/bin/keytool -import -alias server -trustcacerts -file _.yourdomain.com.crt
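With all four certificates imported, it's worth a sanity check before touching Tomcat. A sketch, assuming the default .keystore in the tomcat user's home directory and the same keytool path used above:

```shell
# List the keystore contents; the root, cross, intermed, and server
# aliases imported above should all appear.
/usr/java/jdk1.6.0_14/jre/bin/keytool -list -keystore ~/.keystore
```

The server alias should show up as a key entry, while the others appear as trusted certificate entries.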

You need to configure the Tomcat server.xml file to provide an SSL connector on port 443 (or whatever port you want).

You can uncomment the included section for this, or you can use this example below:

<Connector protocol="org.apache.coyote.http11.Http11Protocol"
port="443" minSpareThreads="5" maxSpareThreads="75"
enableLookups="true" disableUploadTimeout="true"
acceptCount="100" maxThreads="200"
scheme="https" secure="true" SSLEnabled="true"
keystoreFile="/usr/local/tomcat6/.keystore" keystorePass="whatever"
clientAuth="false" sslProtocol="TLS" />

Route my routes without access to routes

Sometimes, resolving network issues just requires a bit of creativity. Years ago, I had an issue with a T1 connected to a Linux box through a very inexpensive ISP. Via the same ISP, I had another Linux box – with another T1 connected. To prevent confusion, we’ll call these systems Florida and New York.

One morning, Florida complained that they “could not access the internet”. This is a common report, usually remedied by turning on the computer. But, being skeptical, I opened an ssh connection to their firewall – which responded just fine, so the connection itself was up. They said that no pages were loading, so I logged into another system behind the firewall and indeed – internet access was in fact degraded.

I noticed that the problem appeared to be limited only to DNS. But, after a few tests I learned something interesting – Florida just could not contact New York at all. Everything else worked – just not the connections over the same ISP. But why? I immediately dialed the number to get to the bottom of this.

Unfortunately, the tech support there wasn’t as great as Pantek, and I spent a lot of time sitting idle on the phone waiting for action. Since this was during the start of the business day – I couldn’t possibly wait for them, so I opted to figure out my own resolution.

Since I’m in Cleveland, I couldn’t just walk over there and re-configure the workstations (particularly since there were about 75 of them – not to mention that every change would need to be reverted afterward, effectively doubling my work). Instead, I opted to use the resources I had on hand to solve the problem.

Prior to making changes, it was necessary to identify the real business problem: people couldn’t access the internet, web pages in particular. Since the symptoms started with DNS failures, I started there.

On the Florida firewall, I installed BIND and downloaded all of our zone files to it, so that it could act as master for our domain. Then, once it was started, I counted on iptables to redirect outbound port 53 traffic to the local host – so that the router/firewall combo could answer all DNS queries.
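The redirect itself is a one-liner per protocol. A sketch, assuming eth1 is the LAN-facing interface (the original rules weren't recorded):

```shell
# Redirect all outbound DNS traffic arriving from the LAN to the
# local BIND instance running on this router/firewall.
iptables -t nat -A PREROUTING -i eth1 -p udp --dport 53 -j REDIRECT --to-ports 53
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 53 -j REDIRECT --to-ports 53
```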

Once this was accomplished (and people were once again able to access their GMail), it was time to allow people to access our domain’s email and web pages (since 97% of all business functions run through the domain website and mail servers).

Unfortunately, you can’t just copy your mail server configuration over to a new system and magically have email arrive. So, I counted on a combination of two packages – rinetd and, of course, iptables. On a server in Cleveland, I installed rinetd to listen on the web and mail ports (including SSL) and immediately forward those requests over to the original servers. Then, I configured iptables on the Cleveland server to block access from all systems except Florida.

rinetd is a great utility – it performs TCP-based port forwarding by accepting a connection, opening a new connection, and forwarding everything received on either connection to the other. Once I confirmed that the setup was working, I configured rinetd on the Florida firewall as well – this time forwarding all connections to the Cleveland system. Then, I tested by connecting to localhost on one of the aforementioned ports.
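A rinetd configuration for this kind of relay is just one line per forwarded port. A sketch with hypothetical addresses – the real ones aren't in the original:

```
# /etc/rinetd.conf
# bindaddress bindport connectaddress connectport
# SMTP relayed to the original New York mail server:
0.0.0.0 25 192.0.2.10 25
# HTTP and HTTPS relayed to the original web server:
0.0.0.0 80 192.0.2.11 80
0.0.0.0 443 192.0.2.11 443
```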

What did this do? It essentially forwarded all inbound connections on the Florida firewall over to the Cleveland network, and from there back to the New York network. Responses traveled the same channel. Granted, it was super slow (as expected) – but it was working. After adding a few iptables redirect commands to the nat table, people were starting to rejoice that they were able to access their content again.

But what about their original problem?

Turns out, when you would issue a ping from Florida to New York, the New York side would see it – and respond to it, but the response would never get to the system in Florida. Why not? Turns out, the wonderful ISP in question had screwed up some of their BGP rules, causing none of the return traffic to make it. For some reason, they didn’t believe me while on the phone with them for 6 hours that the problem was related to BGP – but that’s another complaint for another day.

So, what’s the moral of this story? Sometimes, the answer to the business problem is not the technically accurate answer – and a little creativity is all that’s needed to sit back at the end of the day with a smile on your face knowing you solved a problem that originally seemed outside of your control.

There’s more than one way to s//

Being absolutely obsessed with perl, I tend to pipe everything through it – even when sed or bash will do the job just the same. Just because something is a habit doesn’t necessarily mean it’s the best way. For instance – getting the username from a directory, assuming the directory ‘/home/john/test’ translates to user ‘john’.

Out of habit, I will find myself doing something like this:

  username=`echo -n "/home/john/test" | perl -pe s'/\/home\/([^\/]+)\/?.*/$1/;'`

This will set username to ‘john’. Since that’s the desired output, I would typically just move on without even thinking twice.

But what if thinking twice could yield the same result – without relying on perl? Sure, this hypothetical shell script is probably preparing the environment to execute a perl script – but since we’re speaking hypothetically, let’s say the shell script is really just preparing the environment for a FORTH application. How can we accomplish the same feat?

Well, the simple answer – sed to the rescue. If you’re using perl, you should be familiar with sed and awk (and if you aren’t, you skipped a step in the evolution of OSS – shame on your teacher). In sed, you can obtain the same information:

username=`echo -n "/home/john/test" | sed -e 's/\/home\/\(.*\)\/.*/\1/;'`

Looks easy enough, right? But what if, on this custom linux system, you don’t have sed available at all (but for some reason find yourself running Bash 3)? Well, since there’s always more than one way to s//, you can just use the builtin bash regular expression engine (note that the pattern should be left unquoted – as of Bash 3.2, a quoted pattern is matched as a literal string):

  [[ "/home/john/test" =~ /home/([^/]+)/?.* ]] && \
  username="${BASH_REMATCH[1]}"

The same principle applies here – capture the grouped result and store it in ‘username’. Each of these one-liners reports ‘john’ as the result, which effectively gives you more than one way to s//. So now, your life is nearly complete…
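For good measure, awk can do the same job with plain field splitting. This variant is a sketch and is not part of the timings below:

```shell
# Split the path on '/' and print the third field: for "/home/john/test",
# $1 is empty (the string starts with '/'), $2 is "home", $3 is "john".
username=$(echo "/home/john/test" | awk -F/ '{print $3}')
echo "$username"   # prints: john
```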

Of course, by this time you’re asking yourself “which is the best to use?” Assuming your system is the exact same make/model/speed as mine, including memory and CPU power, using perl’s wonderful ‘Benchmark’ module, we have the following report:

  [root@delta s-test]# ./timethem.pl
  Benchmark: timing 1000 iterations of just_bash, with_perl, with_sed...
   just_bash: 9.50427 wallclock secs ( 0.13 usr  0.79 sys +  3.54 cusr  5.22 csys =  9.68 CPU) @ 1086.96/s (n=1000)
   with_perl: 30.0591 wallclock secs ( 0.17 usr  0.99 sys + 13.50 cusr 15.92 csys = 30.58 CPU) @ 862.07/s (n=1000)
   with_sed: 19.2811 wallclock secs ( 0.21 usr  0.75 sys +  5.74 cusr 13.03 csys = 19.73 CPU) @ 1041.67/s (n=1000)
  [root@delta s-test]#

According to this report, bash by itself consumes less CPU (by a large factor) and is much faster at delivering the output to the system. Of course, bash avoids the overhead of executing an additional application, and my server is not running entirely from memory – but I suspect those factors wouldn’t change the relative results. (Anyone have a 1T RAMDISK hanging around?)

For reference, here are the scripts that I used in my timings:

[root@delta s-test]# cat timethem.pl
#!/usr/bin/perl
use Time::HiRes ();
use Benchmark ':hireswallclock';
use warnings;
use strict;

&Benchmark::timethese( 1000, {
                   with_perl => sub { `./with-perl.sh`; },
                   with_sed => sub { `./with-sed.sh`; },
                   just_bash => sub { `./just-bash.sh`; } }
                 );
[root@delta s-test]# cat with-perl.sh
#!/bin/bash
username=`echo -n "/home/john/test" | perl -pe s'/\/home\/([^\/]+)\/?.*/$1/;'`
[root@delta s-test]# cat with-sed.sh
#!/bin/bash
username=`echo -n "/home/john/test" | sed -e 's/\/home\/\(.*\)\/.*/\1/;'`
[root@delta s-test]# cat just-bash.sh
#!/bin/bash
[[ "/home/john/test" =~ "/home/([^/]+)/?.*" ]] && username="${BASH_REMATCH[1]}"
[root@delta s-test]#

Happy BASHing!

Optimizing Linux

Linux distributions are adding more and more features and not all of them are useful for every desktop or server application. This document helps by suggesting ways to make your Linux distribution optimized for its intended role. Many distributions have features enabled which will never be used and should be disabled or removed from the system. Some of the methods in this document use values that can be tailored to suit individual roles of the machine being optimized. The value given is a generalized value and should work for most systems.
I will not address individual distributions in this document as it will cover all distributions in a general manner.

Disable IPV6

This is a next-generation protocol; if you do not use it, disable it.
Performance of IPV4 DNS queries is much better without IPV6 enabled.
Most distributions enable IPV6 with a module which you can disable in the file /etc/modprobe.conf or /etc/modules.conf by adding the line:

alias net-pf-10 off

In some cases you may need to also add the line:

alias ipv6 off

Then edit /etc/sysconfig/network, changing NETWORKING_IPV6=yes to NETWORKING_IPV6=no.
Also make sure no IPV6 firewall starts up, by disabling any ip6tables rules the server may have, and check the /etc/hosts file, where you can delete or comment out the entry:

::1 localhost.localdomain localhost

A reboot is required to disable IPV6.

Disable services which are not required

Many services are enabled by default for convenience.
You can save memory and cpu cycles, and gain added security, by disabling all services that are not being used.
Check which services are enabled, then decide which are required for the machine's role.
Disable extra console gettys by editing /etc/inittab then comment out all but one or two gettys.

# Run gettys in standard runlevels
1:2345:respawn:/sbin/mingetty tty1
#2:2345:respawn:/sbin/mingetty tty2
#3:2345:respawn:/sbin/mingetty tty3
#4:2345:respawn:/sbin/mingetty tty4
#5:2345:respawn:/sbin/mingetty tty5
#6:2345:respawn:/sbin/mingetty tty6

It’s important to know what runlevel the machine starts in. To verify this, type:

grep ":initdefault" /etc/inittab | cut -d":" -f2

or simply run the runlevel command.

This example will cover runlevel 3; the examples will work for runlevel 5 if you substitute 5 for 3.
Type

ls /etc/rc3.d | grep S

These are the services that automatically start when the machine boots, and the numbers next to the S signify the order in which the services start. Some systems may have different orders, so the numbers are only for reference.
The best way to disable a service is to uninstall its package. You can also delete the symlink in the runlevel directory. Here is an example of deleting S06cpuspeed from runlevel 3:

rm -f /etc/rc3.d/S06cpuspeed

I’d suggest removing the following services from most systems:

S05kudzu – useful when you add or remove hardware, but otherwise should be deleted
S06cpuspeed – this can slow the cpu down and make the system less responsive
S08ip6tables – you should already have IPV6 disabled, so you do not need this
S09isdn – unless you live in the dark ages and require ISDN, remove this also
S13irqbalance – this uses a lot of cpu cycles and should be disabled; a great performance gain
S13mcstrans – this is for selinux, which you may have already disabled or will disable
S13named – if this is not a name server, delete this
S13portmap – if you are not using NFS, NIS, or rpc connections, delete this one
S14nfslock – delete unless you use nfs
S15mdmonitor – unless you are using software raid, there is no reason to have this running
S18rpcidmapd – if you don’t use nfs, delete this
S19rpcgssd – again, if you are not doing nfs, delete this
S25netfs – if you don’t have any remote filesystems mounted locally, delete this
S25bluetooth – sure, it could be useful in a laptop; otherwise remove this also
S25pcscd – if you have smart cards or crypto tokens, leave this; otherwise delete
S26apmd – sure, it’s nice to have the machine use less power, but then you can’t use it; delete this
S26hidd – you do not need a Bluetooth keyboard or mouse, so delete this also
S28autofs – you can start and stop this if you need it, but left enabled it’s a security risk and wastes cpu and memory
S44acpid – unless you use a laptop, delete this
S80sendmail – if you are running a sendmail mail server, leave this; otherwise delete
S89rdisc – you can delete this one also
S90xfs – if you use graphical login, leave this; otherwise delete
S95anacron – I don’t think I know anyone who ever used this, so delete it
S95atd – here is another one you can delete
S97yum-updatesd – automatic updates in linux are lame; delete this
S98avahi-daemon – this is for netzeroconf, which should also be disabled and will be covered later
S98haldaemon – you can delete this one also

To have the changes take effect, you can stop all the services manually, reboot, or in some cases type:

init 3; init q

That is if you are in runlevel 3 of course.
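On distributions that ship chkconfig (Red Hat and friends), you can get the same result without deleting symlinks by hand – a sketch, with cpuspeed as the example service:

```shell
# Show everything enabled in runlevel 3, then disable a service in
# all runlevels; chkconfig maintains the S/K symlinks for you.
chkconfig --list | grep "3:on"
chkconfig cpuspeed off
```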

Disable netzeroconf or zero conf

This is for service discovery and should be disabled unless you want your computer scanning the network.
Most distributions enable it in /etc/sysconfig/network.
Set NOZEROCONF= to anything – the init scripts only check whether the variable is non-empty, so either "no" or "yes" disables it.
On other distros you can remove the avahi-autoipd package.

Turn off filesystem counters

By default, the kernel updates the last-accessed time of every file and directory it touches within a file system. To disable these updates, edit /etc/fstab and add the options noatime,nodiratime:

/dev/hda1 /boot ext3 defaults,noatime,nodiratime 1 2

Then remount each file system you altered in /etc/fstab:

mount -o remount /boot

You can type mount to show the mounted file systems and verify the change:

/dev/hda1 on /boot type ext3 (rw,noatime,nodiratime)

Optimize the TCP/IP stack

Some of these modifications will increase throughput and decrease cpu usage.
Put the following commands in /etc/rc.d/rc.local (or wherever rc.local lives):

#lets not leave sockets in fin wait for too long just close them in 30
echo 30 >/proc/sys/net/ipv4/tcp_fin_timeout
#don't keep idle connections alive longer than 30 minutes
echo 1800 >/proc/sys/net/ipv4/tcp_keepalive_time
#if you expect windows larger than 64k leave this set at 1
echo 0 >/proc/sys/net/ipv4/tcp_window_scaling
#selective acknowledgement is good only when receiving out of order packets
#and is bad for the cpu.
echo 0 >/proc/sys/net/ipv4/tcp_sack
#forward acknowledgement is also good to disable
echo 0 >/proc/sys/net/ipv4/tcp_fack
#calculation of RTT is cpu overhead
echo 0 >/proc/sys/net/ipv4/tcp_timestamps
#lower the number of syn retries
echo 2 >/proc/sys/net/ipv4/tcp_syn_retries
You could also add the settings below to /etc/sysctl.conf and then run sysctl -p:
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_window_scaling = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_fack = 0
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_syn_retries = 2

Reduce kernel swapping

The kernel swaps to disk when it thinks it needs to, which may be too often on a server where performance matters, so we reduce the swappiness.
Put the following command in /etc/rc.d/rc.local (or wherever rc.local lives):

echo 10 > /proc/sys/vm/swappiness

You could also add the setting to /etc/sysctl.conf:

vm.swappiness = 10

then run sysctl -p.

You can use an even lower value if you still notice swapping.

The Jaunty Jitters – Will it work?

My firewall / desktop machine at home has been running Ubuntu Intrepid happily for some time now. The recent release of Jaunty piqued my interest, so I went ahead and upgraded. For system changes like this I usually prefer a command line solution, so I used do-release-upgrade, which requires a package to be installed first:

sudo apt-get install update-manager-core

Then:

sudo do-release-upgrade

It will fetch what’s needed and ask for confirmation before it starts downloading packages.

I run shorewall, and to connect to my home network I use an atheros wireless card. This card doesn’t like the ath5k module that is the default atheros handler in Ubuntu. This machine acts as the AP, so I also run hostapd. Because of these custom tweaks, I copied some configs to be sure I wouldn’t lose anything, including /etc/shorewall, /usr/share/shorewall, /etc/hostapd, and the whole of /etc/modprobe.d.

This would prove to be a good thing.

After the upgrade and reboot, the ethernet to the world worked fine, but wireless not so much. Looking at the output of

sudo lsmod|grep ath

and

sudo iwconfig

showed that the ath0 interface was not alive, thus the madwifi driver ath_pci had never been loaded. As I said earlier, Ubuntu likes to use the ath5k driver by default.

The ath_pci madwifi driver is normally ‘enabled’ using the restricted drivers manager, now just the Hardware Drivers manager (Administration/Hardware Drivers). The easy way out might have been to make sure the driver was enabled there first, but my fix was to edit the offending blacklist entry that had been added to /etc/modprobe.d as the file /etc/modprobe.d/blacklist-ath_pci.conf.
I edited this file and commented out the line

#blacklist ath_pci

I already have a file called madwifi.conf in /etc/modprobe.d that blacklists the ath5k module.

This took care of wireless, and a reboot brought things back to normal.

Another issue I ran into was numerous pop-up dialogs from the tracker applet complaining that my index was corrupted. Asking it to reindex just caused another failure. Obviously, things were more corrupted than it could handle.

This issue was fixed by installing

sudo apt-get install tracker-utils

and then running, as the user:

tracker-processes -r
tracker-applet &
/usr/lib/tracker/trackerd &

The last two commands could be skipped if you just reboot the machine.

Now I just need to decide if I want to go through the fun of disabling the pulseaudio mess, or give it another day in court.

Pluto, why won’t you eroute and just popen already?

I was recently working on an OpenS/WAN to OpenS/WAN IPSEC configuration, when I was perturbed at the fact that the tunnel just wasn’t working.

Like many others, I pounded my head over the configurations – just looking for the slightest problem. I naturally confirmed that both left and right were identical on both nodes, secrets were correct, iptables rules weren’t botched in any manner, rp_filter – everything, it was all correct just like the other several hundred nodes I’ve configured.

When initiating the tunnel from the other side, I saw an interesting error:

117 "tun2128" #149016: STATE_QUICK_I1: initiate 
003 "tun2128" #149016: unable to popen up-client command 
032 "tun2128" #149016: STATE_QUICK_I1: internal error

Unfortunately, the only reference to this error message was with regard to memory issues. Since this server was fine on memory (and all other active tunnels were “fine”), I directed my attention to the hordes of messages scrolling across my screen when executing ipsec whack --name tun2128 --debug-all. Eventually, I found a few more errors that meant absolutely nothing to me – STF_INTERNAL_ERROR, STF_SUSPEND – and then finally this:

May  4 10:08:56 localhost pluto[25544]: | ******parse ISAKMP Oakley attribute:
May  4 10:08:56 localhost pluto[25544]: |    af+type: OAKLEY_GROUP_DESCRIPTION
May  4 10:08:56 localhost pluto[25544]: |    length/value: 5
May  4 10:08:56 localhost pluto[25544]: |    [5 is OAKLEY_GROUP_MODP1536]
May  4 10:08:56 localhost pluto[25544]: | Oakley Transform 0 accepted
May  4 10:08:56 localhost pluto[25544]: | sender checking NAT-t: 0 and 0
May  4 10:08:56 localhost pluto[25544]: | 0: w->pcw_dead: 0 w->pcw_work: 0 cnt: 1
May  4 10:08:56 localhost pluto[25544]: | asking helper 0 to do build_kenonce op on seq: 135267
May  4 10:08:56 localhost pluto[25544]: | inserting event EVENT_CRYPTO_FAILED, timeout in 300 seconds for #148978
May  4 10:08:56 localhost pluto[25544]: | complete state transition with STF_SUSPEND

A lot of crazy messages, sure, but the most important string of all: “asking helper …”. It reminded me of a series of posts I came across while debugging a ridiculous ISAKMP issue with a Cisco router. The posts (whose original context I can no longer remember) all pointed to setting nhelpers=0 in /etc/ipsec.conf (under ‘config setup’, of course)…

After making the change and calling ipsec setup --restart, all was dandy in magical OpenS/WAN world.

But what does it actually mean? From the PlutoHelper file from OpenS/WAN 2.6.19:

Pluto helpers are started by pluto to do cryptographic operations.

Pluto will start n-1 of them, where n is the number of CPUs that you have
(including hypher threaded CPUs). If you have fewer than 2 CPUs, you will
always get at least one helper.

You can tell pluto never to start any helpers with the command line option
--nhelpers. A value of 0 forces pluto to do all operations in the main
process. A value of -1 tells pluto to perform the above calculation. Any
other value forces the number to that amount.

In one translation or another, it means that pluto will handle its own encryption. If, for some reason, your ipsec tunnels using pre-shared keys start magically complaining about some obscure error that doesn’t give you an answer you like, try setting nhelpers=0 and see what happens…
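For reference, the change amounts to this fragment of /etc/ipsec.conf:

```
config setup
    # do all cryptographic operations in the main pluto process
    # rather than farming them out to helper processes
    nhelpers=0
```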

Flushing the DNS Resolver Cache

Sometimes, a DNS change you made needs to be picked up by the remote end very quickly – maybe because you're testing your web server and you just changed remote IP addresses. When you configure your DNS server with long TTL values, waiting for the timeout just takes too long. Here are a few commands on various platforms that will help clear the local resolver cache:

Microsoft Windows
C:\> ipconfig /flushdns

Linux
# rndc flush || ndc restart

Mac OS X
# lookupd -flushcache || dscacheutil -flushcache

Now, if only there were a command that would remind the user that they added the entry in /etc/hosts or %windir%\system32\drivers\etc\hosts, so that instead of looking at this article, they remember that the host was configured there…