Critical Vulnerability Alert: “Heartbleed Bug”

A major vulnerability in OpenSSL, nicknamed the “Heartbleed Bug” by security researchers, has been identified. It is especially severe because it allows anyone on the Internet to read the memory of systems protected by vulnerable versions of the OpenSSL software used to implement SSL/TLS (and therefore HTTPS). This can expose the secret keys used to encrypt the traffic, the names and passwords of users, and the actual content of communications. It allows attackers to eavesdrop on communications, steal data directly from services and users, and impersonate services and users.

You can find out more details of this vulnerability here: http://heartbleed.com

To determine if your server is vulnerable, you will need to check which version of OpenSSL is installed on your server. All OpenSSL versions 1.0.1 through 1.0.1f are vulnerable, but the following versions are not affected (no further action is required):

OpenSSL 1.0.1g is NOT vulnerable
OpenSSL 1.0.0 branch is NOT vulnerable
OpenSSL 0.9.8 branch is NOT vulnerable
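
You can check the installed version with the command openssl version. The range check can also be scripted; here is a small sketch (the is_vulnerable helper is illustrative, not a standard tool). Note that distributions often backport the fix without changing the version string, so treat a match as “possibly vulnerable” and confirm against your vendor’s advisory:

```shell
# Classify an OpenSSL version string against the vulnerable 1.0.1 - 1.0.1f range.
is_vulnerable() {
  case "$1" in
    1.0.1|1.0.1[a-f]) echo "possibly vulnerable" ;;
    *)                echo "not vulnerable" ;;
  esac
}

# Check the locally installed library (second field of "openssl version")
is_vulnerable "$(openssl version 2>/dev/null | awk '{ print $2 }')"
```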

If your server is vulnerable, in order to fix this vulnerability you will need to both (a) upgrade your version of OpenSSL and (b) completely re-issue and re-install all of your SSL certificates, since the private keys behind the old certificates may have been compromised.

All Pantek Support Engineers have been advised of this issue, and trained in the appropriate response procedure. If you would like our assistance to determine if your server(s) are indeed vulnerable, or to fix the vulnerability, please contact our support team using any of the normal methods. For fastest response, we recommend opening a Support Ticket via the Pantek Portal: https://portal.pantek.com

Pantek Clients who have purchased a Managed Service Plan will receive a separate notification, as management of these third-party security issues occurs without extra charges. You can find more details on our Managed Service Plans here:

https://www.pantek.com/managed

Install Owncloud 6 on Ubuntu

Are you interested in putting your data in the cloud so you can access it from your desktop, laptop and smart phone? Uneasy about handing your data over to a faceless company with pages of cryptic Terms of Service that cannot guarantee your privacy or even the integrity of your data?

Would you like the best of both worlds?

Owncloud is an open-source alternative to commercial offerings like Dropbox and Google Drive. You can install owncloud quickly on your own hardware and use it to access, sync and share data with other computers and mobile devices.

To install Owncloud 6 on Ubuntu, follow these steps: (NOTE: all steps below should be performed as superuser or with ‘sudo’)

Add the Owncloud repository to your system:

Visit Owncloud Packages for Ubuntu and select the instructions for your version of Ubuntu. Here we’re installing on Ubuntu 12.04:

# sh -c "echo 'deb http://download.opensuse.org/repositories/isv:/ownCloud:/community/xUbuntu_12.04/ /' >> /etc/apt/sources.list.d/owncloud.list"

# wget http://download.opensuse.org/repositories/isv:ownCloud:community/xUbuntu_12.04/Release.key

# apt-key add - < Release.key

# apt-get update

Install Owncloud, Apache, MySQL and PHP5

Note: you will be asked to provide a root password for the database server if this is the first time you are installing MySQL.

# apt-get install owncloud apache2 mysql-server php5 php5-mysql

Modify configuration files:

With your favorite editor, open the /etc/apache2/owncloud.conf file, and make sure AllowOverride All is set:

<Directory /var/www/owncloud>
   AllowOverride All
</Directory>

Enable mod_rewrite in Apache

# cd /etc/apache2/mods-available

# a2enmod rewrite

Enabling module rewrite
To activate the new configuration, you need to run:
  service apache2 restart

# service apache2 restart

Create database and user for owncloud in MySQL

We will create a database named owncloud and a mysql user named ownclouduser.

Log in to the MySQL server, create the user and database, and then grant the user privileges on the database.

Start MySQL (if not already running)

# service mysql restart

Log into MySQL

# mysql -u root -p

The server will prompt for the root password. After logging in you will get the mysql prompt:

mysql >

Create a user and set the password. (Use a strong password!)

mysql> CREATE USER 'ownclouduser'@'localhost' IDENTIFIED BY 'Password';

Create a database named owncloud

mysql> create database owncloud;

Grant ownclouduser privileges to owncloud database

mysql> GRANT ALL ON owncloud.* TO 'ownclouduser'@'localhost';

mysql> exit
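
If you prefer to script this, the same three statements can be written to a file and reviewed before feeding them to MySQL (a sketch; substitute a strong password of your own):

```shell
# Write the setup statements to a file for review, then run them with:
#   mysql -u root -p < owncloud-setup.sql
cat > owncloud-setup.sql <<'SQL'
CREATE USER 'ownclouduser'@'localhost' IDENTIFIED BY 'Password';
CREATE DATABASE owncloud;
GRANT ALL ON owncloud.* TO 'ownclouduser'@'localhost';
SQL
```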

Open your owncloud!

Open a web browser and visit your Owncloud installation. It will be available at:
http://<address_of_server>/owncloud

Create an admin user and give a strong password

Click on the Advanced button

(Data Folder will be selected by-default)

Under configure the database, select MySQL and fill in the blanks:

  • MySQL user: ownclouduser
  • Password: ownclouduser password
  • Database Name: owncloud
  • Database Host: localhost

Voila!

It may take a few seconds to complete, and then you will be presented with the Owncloud 6 dashboard.

Installation is now complete.

You can begin uploading, downloading and sharing your data. With the admin user you can create additional users and set space quotas.

Owncloud comes with a photo viewer, calendar and contact manager built in, as well as dozens of user-contributed extensions.

Like the sound of owncloud but still unsure if you have the right hardware? Want to install but would like a “helping-hand”? Do you have an ownCloud instance that’s gone rogue and need some help getting things back in order? Give Pantek a call and we’ll be happy to work with you to get your cloud installed and operating correctly.

Monitoring Calls in Asterisk/FreePBX

The ability to monitor (listen in on) phone calls can be useful for purposes such as training or the monitoring of employees by company management. Using ChanSpy is one way to accomplish call monitoring with Asterisk. The ChanSpy feature has been included in Asterisk for years and is also supported by FreePBX. Later versions of Asterisk also allow the listener to optionally speak to just the local side of a call, greatly enhancing its usefulness and flexibility in training environments.

ChanSpy’s syntax is quite simple: ChanSpy([<chanprefix>][|<options>]). The list of available options varies somewhat between Asterisk versions. Trying ChanSpy is as simple as adding

exten => 555,1,ChanSpy('all'|qb)

to your dialplan. Then simply dial 555 and use the # key to scan through and monitor any available channels.

To enable ChanSpy system-wide without restriction may be unwise however, because then anyone who is aware of the feature would have the ability to monitor any call and possibly obtain sensitive personal information such as credit card numbers. One easy way to solve this problem is to require a password:

exten => 555,1,Authenticate(1234)
exten => 555,n,ChanSpy('all'|qb)

If you now dial 555, the system will ask for a password (1234 in this example), which must be entered correctly before the ability to listen is granted. Many versions of FreePBX come with ChanSpy enabled by default on extension 555 with no authentication needed! If you use FreePBX, check now by dialing 555.
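
The channel-prefix argument gives finer control over which calls may be monitored. For example, to let a supervisor listen only to one agent’s SIP channel (extension 556 and SIP/101 are hypothetical values):

```
exten => 556,1,Authenticate(1234)
exten => 556,n,ChanSpy(SIP/101|qb)
```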

Detailed explanations of all of ChanSpy’s features are readily available online, for example at http://www.voip-info.org/wiki/view/Asterisk+cmd+ChanSpy. By combining ChanSpy’s features with Asterisk’s extensive dialplan capabilities, an Asterisk administrator can easily make use of this great feature while maintaining tight control over which calls may be monitored and by which users.

Building Your Own RPM

Knowing how to build an RPM package can be useful in a number of situations. Maybe there is an important bug fix you need that will not make it into the official repositories for some time. Or maybe you want to apply your own custom patch, or rebuild the package with the latest version of the software. Compiling directly from source can leave a messy system or cause conflicts with other distro-provided packages.

Depending on how complex the package is, it may be a good idea to set up a separate system specifically for building packages and possibly hosting your own repositories for easier deployment of custom packages. Building packages can cause a lot of build dependencies to be installed which can clutter up the system.

Let’s explore three examples: making a package change, applying a custom patch, and packaging a new software version.

Making a Package Change

To begin with you will need some RPM tools installed.

yum install rpm-build

You should never build RPMs as the root user, for security reasons; use a regular user for building RPMs.

For this example we will be using the hello package. We can use wget to get the srpm.

wget http://pkgs.repoforge.org/hello/hello-2.3-1.rf.src.rpm

Unpack the srpm.

rpm -i hello-2.3-1.rf.src.rpm

This will create a directory called rpmbuild with the contents of the srpm. For this example we will be making a simple change. We will be fixing some bad grammar in the description. The RPM .spec file contains information about the RPM and the build commands to build the package. Open the hello.spec file with your preferred editor.

vim rpmbuild/SPECS/hello.spec

First we need to bump the release number from 1%{?dist} to 2%{?dist}. Then find the %description section; we are going to delete the line that says “GNU hello supports many a lot native languages.”.

At the bottom we need to add this change to the changelog section.

%changelog
* Tue Feb 11 2014 fname lname <email@domain.tld> - 2.3-2
- Fixed bad grammar in the description

* Wed Aug 20 2008 Dag Wieers <dag@wieers.com> - 2.3-1 - 7981/dag
- Initial package.

Now we are ready to build the package. Run:

rpmbuild -ba rpmbuild/SPECS/hello.spec

-ba will make rpmbuild build a binary package as well as an srpm. Depending on your installation, you may get an error about gcc missing. You will need to install gcc.

yum install gcc

Depending on the complexity of the package you may need to repeat this step for more missing dependencies.

Rerun the rpmbuild command from above. The program should compile fine but it will fail at creating the RPM because of unpackaged files. We need to have these files included in the rpm. Open hello.spec again and find the %files section. Add the following to the end of that section.

%{_infodir}/dir

Rerun the rpmbuild command again and it should complete successfully. You will find the resulting binary RPM in rpmbuild/RPMS/i386 or rpmbuild/RPMS/x86_64 depending on your architecture. You can then use yum or RPM to install it.

yum install rpmbuild/RPMS/i386/hello-2.3-2.el6.i386.rpm
rpm -ivh rpmbuild/RPMS/i386/hello-2.3-2.el6.i386.rpm

Adding a Custom Patch

Let’s say you have a custom patch you want to apply. This assumes you already have the patch. Near the top of the spec file, add a line that says:

Patch0: hello-2.3-earth.patch

You can add more patches by listing them: Patch1:, Patch2:, Patch3:, and so on. In the %prep section, after the %setup line, apply the patch:

%patch0 -p1 -b .earth

See the patch man page for information on the -p option, which specifies how many leading path components to strip.
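
As a quick, throwaway illustration of what -p1 strips (unrelated to the hello package itself):

```shell
# Create a tiny tree and a unified diff whose paths carry the conventional
# a/ and b/ prefixes, as upstream patches usually do
mkdir -p a b work
echo "Hello, world." > a/greet.txt
echo "Hello, earth." > b/greet.txt
diff -u a/greet.txt b/greet.txt > greet.patch || true   # diff exits 1 on differences

# -p1 strips one leading path component (the a/ or b/), so the patch
# applies to plain greet.txt inside work/
cp a/greet.txt work/greet.txt
( cd work && patch -p1 < ../greet.patch )
cat work/greet.txt
```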

Now place the patch in rpmbuild/SOURCES. When you run rpmbuild it will apply the patch when it builds. It is best practice to apply modifications at build time through a patch rather than to modify the upstream source directly.

Packaging a New Software Version

If you want to rebuild a package with the latest version of the software, download the latest source archive and place it in rpmbuild/SOURCES/. Edit the spec file to update the version and release information, then follow the instructions above to build the package. Sometimes with newer versions, files may have been moved or deleted, so you may have to adjust the %files section of the spec file to fix any errors that occur.

Part 2 in a Series of blogs about Cloud Computing: Cloud Services

There are three basic Cloud Service categories:

  • Infrastructure as a Service (IaaS)
  • Software as a Service (SaaS)
  • Platform as a Service (PaaS)

IaaS is the basic virtualized computing services available in the Cloud, including processing, memory and storage capabilities.

SaaS is the use of application software, which is hosted in a cloud environment. Additionally the software is structured so many different users can use it concurrently. Examples include hosted email, and Customer Relationship Management (CRM) software.

PaaS is the idea of providing a software development, application or industry specific “framework” for users to develop their own software. Similar to SaaS, these frameworks are hosted in the Cloud and are multi-user. Examples include: Google App Engine and Microsoft Windows Azure.

In the marketplace today, these services are at different levels of growth and maturity. SaaS is the most mature, partially because the delivery of application software over the Internet, at scale to thousands of customers, has been occurring for many years. IaaS is the next most mature, pioneered by Amazon and now delivered through multiple technologies by a number of companies. PaaS is the least mature, mostly because of the nature of the software being hosted (i.e. good application and/or industry frameworks are, in and of themselves, a more difficult undertaking).

The next blog will be a discussion of IaaS technologies.

Part 1 in a Series of blogs about Cloud Computing: What is Cloud Computing?

Definitions abound, from “the delivery of services over the Internet” to the National Institute of Standards and Technology definition “cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” and everything in-between.

The first is business focused, but is so broad that it covers anything done via the Internet. NIST’s definition largely focuses on technical issues, but seems very cumbersome. Let’s briefly explore Cloud Computing from both a business and technical perspective.

From a business perspective, Cloud Computing is about the delivery of Services over the Internet. However, not all services. For example, communications (Internet phone, chat, IM, etc.) are usually not considered the province of Cloud Computing. Hardware resources (e.g. cpu, disk), applications (e.g. hosted email and software applications, like CRM) and frameworks (e.g. hosted development environments) are what is generally included.

From a technical perspective, Cloud Computing uses modern server virtualization technology to share resources among users and scale resources to needs. The ability to rapidly provision these resources (with minimal effort) is also a key feature. What about measured service and the ability to pay by use? These features are available in the marketplace today, but I don’t think they are a fundamental component of the technology; it’s more a business decision about how to measure and charge for resource use.

In summary then, Cloud Computing is the delivery of services over the Internet, based on the use of virtualization technology, providing the ability for resource utilization to be rapidly provisioned and changed according to user needs.

The next blog will be a discussion of Cloud Services.

The eCommerce Community Gathers For “Free Shipping Day” Today!

Don’t miss out on Free Shipping Day. The fourth-annual event will include more than 2,300 merchants. Additionally, anyone who takes advantage of a free shipping deal will also be guaranteed delivery of their purchases by Christmas Eve.

FreeShipping.org, the site responsible for organizing the promotion, reports that online sales during last year’s Free Shipping Day totaled $942 million.

You can learn more by visiting FreeShipping.org

Simple Server Security Through Obscurity

There are many security procedures and policies required to maintain a secure Linux server. However, many overlook simple things that can increase the security of your servers and networks in the production environment. This blog article will focus on some Security Through Obscurity measures that are easy to implement.

Security Through Obscurity.

  1. Don’t use descriptive DNS entries. Many organizations use descriptive DNS entries (e.g. mail.domain.com, router.domain.com or firewall.domain.com). However, there is no DNS requirement to do this, and descriptive DNS entries can direct an attacker to the most critical elements of your network. When you pick names for these and other servers, keep in mind that automated attacks may single out descriptive names like smtp.domain.tld. Consider simple changes such as 01router.domain.tld or routera.domain.tld. You get the idea.
  2. Don’t run ssh, ftp, telnet, or webmin on a standard port. You run the risk of being the victim of an automated attack. Any service that listens can run on a non-standard port. Of course, www and smtp usually must use the standard ports, but most everything else can be changed to a different port.
    • Changing the port that sshd listens on:
      • Edit the /etc/ssh/sshd_config file, change the port to something other than 22, and make sure only protocol 2 is enabled.
      • Consider setting “PermitRootLogin no” to deny direct root logins; this requires users to either su to root or use sudo for superuser commands.
    • To change the port for Webmin:
      • Log on to Webmin.
      • Click on the Port and Address icon on the module’s main page.
      • Change the port number by entering a number into the Listen on port field.
      • Hit the Save button to use the new settings.
  3. Change the default URL on web applications. To help prevent automated attacks, change the default URL to anything but the default. A good example is http://www.domain.tld/mail for a webmail interface; even using mail1 will save you from an automated attack. Other examples are /stats, /awstats, /webstats, /forum and /cart. Even changing /cgi-bin to something like /cgi-bin1 can be a bit of work modifying code or config files, but it’s well worth it.
  4. Change the default names of standard scripts. To help prevent automated attacks, change the default names of standard scripts. A good example is FormMail.pl: rename it to sendmemail.pl or similar so an automated attack can’t find it. This also works for things like /awstats; simply edit /etc/httpd/conf.d/awstats.conf and add a 1 or similar, and again an automated attack can’t find it.
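
For reference, the sshd changes described above amount to three lines in /etc/ssh/sshd_config (2222 is an arbitrary example port; restart sshd after editing):

```
Port 2222
Protocol 2
PermitRootLogin no
```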

In and of themselves, these suggestions will not provide a secure production environment. Utilizing these simple Security Through Obscurity techniques is recommended in conjunction with other security practices to enhance the security of your overall environment. Contact Pantek today for assistance securing your Linux servers.

SNMP Monitoring with Cacti

One of the easiest and most efficient methods of obtaining server or network statistics is to install and configure Cacti on an RPM/yum-based system such as Red Hat, CentOS, or Fedora Core. This post covers the basics of getting Cacti working on such a system and monitoring the stats that are available on it. I also cover the source method of installing Cacti for advanced users at the bottom of the basic install/config.

Yum install method

You need to install Cacti, snmpd, httpd, php, php-mysql, php-snmp, MySQL and MySQL Server. If you already have any of these installed, you can omit them from the command below.

Run the command:

yum install httpd cacti net-snmp php php-mysql php-snmp mysql mysql-server

Next, modify the Allow from line in /etc/httpd/conf.d/cacti.conf to allow your local network. For example, if your network is 192.168.0.0/24:

Allow from 192.168.0.

Restart apache:

service httpd restart

Make sure apache, mysqld, and snmpd start at boot.

chkconfig httpd on
chkconfig mysqld on
chkconfig snmpd on

Make sure that snmpd accepts “public” as the community string from localhost.

You can use the command

snmpconf -g basic_setup

to generate an snmpd.conf file.
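
Alternatively, a minimal /etc/snmp/snmpd.conf that answers read-only queries for the public community from localhost only can be as short as this (a sketch; restart snmpd after editing):

```
rocommunity public 127.0.0.1
```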

Restart snmpd:

service snmpd restart

Yum should have set up the cacti database; if it did not, then do the following:

mysqladmin create cacti
mysql cacti

Create mysql user for cacti (use a decent password, not just password):

mysql> GRANT ALL ON cacti.* TO 'cactiuser'@'localhost' IDENTIFIED BY 'password';

Flush privileges so the new grants take effect:

mysql> flush privileges;

If you are running a yum based distribution then you should be able to go to:
http://serverip/cacti

You will be presented with the Cacti Installation Guide, click Next >>

Since this is a New Install you will click Next >> again.

Be sure that the values are correct and all the fields have [FOUND] above them; if they do not, then correct them and click Finish.

The default User Name is admin and the default Password is admin; you will be forced to change this, so pick a secure password.

Now you should be in the console. Click Devices, then localhost, then Create Graphs for this Host. Check all of the check boxes at the left in the Graph Templates and Data Query sections, then click Create. Click Create again to confirm.

Now click the Graphs tab and you should see the beginnings of the graphs being created for the localhost. These take a bit of time to populate, so expect to see data in about 10 minutes.

Source Method

To install via the source method, make sure you have the following required software installed and configured:
Apache
RRDTool 1.0.49 or 1.2.x or greater
MySQL 4.1.x or 5.x or greater
PHP 4.3.6 or greater, 5.x greater highly recommended for advanced features

Unlike the yum method, installing from source does not set up the cacti database for you, so do the following:

mysqladmin create cacti
mysql cacti

Create mysql user for cacti (use a decent password, not just password):

mysql> GRANT ALL ON cacti.* TO 'cactiuser'@'localhost' IDENTIFIED BY 'password';

Flush privileges so the new grants take effect:

mysql> flush privileges;

Download cacti-xxxxx.tar.gz from sourceforge to /var/www/html or wherever your webserver DocumentRoot is located.

Untar gzip the file:

tar -zxf cacti-xxxxx.tar.gz

Rename the directory this creates

mv cacti-xxxxx cacti

Add a cacti user:

adduser -d /var/www/html/cacti cactiuser

Change the owner of the files to cactiuser:

chown -R cactiuser:cactiuser /var/www/html/cacti

Populate the cacti database (use your values for user and password)

mysql cacti -u cactiuser -ppassword < /var/www/html/cacti/cacti.sql

Edit the /var/www/html/cacti/include/config.php file to specify the database user and password. Then add the Cacti poller job to the cactiuser crontab:

crontab -u cactiuser -e

Add the following line:

*/5 * * * * php /var/www/html/cacti/poller.php > /dev/null 2>&1

Now you can go to http://serverip/cacti

From here the web installer is identical to the yum method above: follow the same Installation Guide steps, change the default admin/admin login, and create graphs for the localhost device. As before, the graphs will take about 10 minutes to begin populating.

As always if you need further assistance with this or any other open source application or issue, the experts at Pantek Inc. are available 24/7 at info@pantek.com, 216-344-1614, and 1-877-LINUX-FIX! We look forward to working with you.

Simple Linux HA web Cluster with minimal resources required

All you need is a single additional server, acting as the load balancer, to front your High Availability web cluster.

This example will provide for Apache HA without anything complex like session handling, and will focus on the RHEL version of Linux High Availability.

The packages required:
piranha
ipvsadm
arptables_jf

You will need a load balancer machine to run Linux HA. In this example we will use Piranha and set the load balancer up in a weighted least connection configuration.
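
The wlc (weighted least-connection) scheduler directs each new connection to the real server with the lowest ratio of active connections to weight. A tiny illustrative sketch of that decision (our own helper, not part of LVS):

```shell
# Pick the server with the smallest activeconns/weight ratio.
# Each argument is a string of the form: "name activeconns weight".
pick_wlc() {
  printf '%s\n' "$@" | awk '
    { r = $2 / $3
      if (NR == 1 || r < best) { best = r; pick = $1 } }
    END { print pick }'
}

pick_wlc "webserver1 10 1" "webserver2 4 1"   # fewer active connections wins
pick_wlc "webserver1 2 1" "webserver2 6 4"    # a higher weight makes connections "cheaper"
```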

On the load balancer server you need the following:
Install piranha, and ipvsadm

The required lines for the configuration of the load balancer are in the file: /etc/sysconfig/ha/lvs.cf

primary = ip.address.of.this.server
service = lvs
keepalive = 6
deadtime = 18
network = direct
debug_level = 0
virtual balanced_www {
     address = public.ip.address eth0:0
     vip_nmask = 255.255.255.0
     active = 1
     port = 80
     send = "GET / HTTP/1.0\r\n\r\n"
     expect = "HTTP"
     load_monitor = none
     scheduler = wlc
     protocol = tcp
     timeout = 5
     reentry = 10
     server webserver1 {
          address = ip.address.of.webserver1
          active = 1
          weight = 1
     }
     server webserver2 {
          address = ip.address.of.webserver2
          active = 1
          weight = 1
     }
}

Beware: the file above will fail to parse if there is any additional whitespace or if there are any comments in it.
Also, do not worry about the eth0:0 IP address; lvs will bring it up and down as required when you start the daemon.

On both webservers you need the following lines in /etc/sysctl.conf:

net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.eth0.arp_announce = 2

Now run sysctl -p; this will activate the new settings, which will also be applied automatically on reboot.

You will also need the following arptables_jf rules which you can add prior to the ifconfig line in the /etc/rc.d/rc.local file

On webserver1

arptables_jf -A IN -d public.ip.address.of.webserver -j DROP
arptables_jf -A OUT -d public.ip.address.of.webserver -j mangle --mangle-ip-s ip.address.of.webserver1

On webserver2

arptables_jf -A IN -d public.ip.address.of.webserver -j DROP
arptables_jf -A OUT -d public.ip.address.of.webserver -j mangle --mangle-ip-s ip.address.of.webserver2

On each webserver you need to assign an alias to the ethernet device that holds ip.address.of.webserver and give it the public.ip.address.of.webserver, which you can do by adding the following line to /etc/rc.d/rc.local:

ifconfig eth0:0 public.ip.address.of.webserver netmask netmask.of.ip.address up

Don’t worry about having the same IP address on three servers; the only one recognized by the rest of the network is the one on the load balancer.
Once this is done, start the load balancer by running pulse on it:

service pulse start

Now you have a webserver that is load balanced and has high availability.

To test this watch the logs on the load balancer:

tail -f /var/log/messages

Now stop one of the webservers, or reboot it. Nanny will detect the connection failure and remove that webserver from the cluster.

This configuration works the same way for up to 256 webservers; you just need to replicate the server blocks in lvs.cf and set up the additional servers the same way you set up the two above.