How To Install the nVidia X Windows Drivers Manually

Got a shiny new desktop with an nVidia card and your distribution doesn’t have built-in support for automatically adding the drivers? Have no fear: building the drivers is not a complex process, and it is fairly standard across most distributions.

First, go to the drivers section of the nVidia website and download the driver for your model of card and platform (32-bit or 64-bit). I suggest you put it in a location such as /usr/local/src.
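
For example, assuming a 64-bit system and a driver package named something like NVIDIA-Linux-x86_64-<version>.run (the exact filename and download URL depend on your card and the driver version you pick), the download step might look like this:

    cd /usr/local/src
    # Hypothetical filename and URL; use the link from the nVidia driver download page
    wget <driver-download-URL>/NVIDIA-Linux-x86_64-<version>.run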

Next, ensure you have a complete kit for building the required items. On Debian this is as easy as “apt-get install build-essential linux-headers-`uname -r`”; on CentOS you will need to do a “yum install gcc make kernel-devel”. (md5sum is part of coreutils and will already be installed on both, so there is no separate package to add.)

Then, as root, simply execute the downloaded installer by typing “sh filename”. Note that the installer will not run while the X server is active, so stop your display manager first.
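
A minimal sketch of that step, assuming the 64-bit package above was saved to /usr/local/src (the filename is an example, so substitute whatever you downloaded):

    # Stop the X server first; the installer refuses to run while X is active.
    # The init script name depends on your display manager (gdm, kdm, xdm, ...)
    /etc/init.d/gdm stop
    cd /usr/local/src
    # Run the installer as root; it builds and installs the kernel module
    sh NVIDIA-Linux-x86_64-<version>.run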

Follow the on-screen directions and the rest of the process is automated. When it completes, you will need to restart the X server for the change in drivers to take effect.
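
How you restart X depends on your display manager; a rough example for a system running GDM (adjust the script name for KDM, XDM, or whatever your distribution uses):

    /etc/init.d/gdm start
    # Or simply reboot if that is easier
    reboot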

Keep in mind, however, that it is increasingly rare not to be able to install the driver for nVidia cards directly from a repository. It may not be in the main repository on some versions of Linux, but it is usually available in one of the commonly used auxiliary repositories, and there are definite benefits to installing the driver via a repository, such as automatic updates.
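
As a rough illustration, and assuming the relevant non-free or third-party repository is already enabled (package names vary by distribution and release, so treat these as examples only):

    # Debian/Ubuntu style
    apt-get install nvidia-glx
    # CentOS/Fedora style, e.g. from a third-party repository such as RPM Fusion
    yum install kmod-nvidia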

Another option for Debian-based distributions like Ubuntu, Mint, and of course Debian itself is a program called “Envy”. Envy automatically determines the version of drivers your nVidia card requires and installs it for you. It has a GUI version, if your GUI is at all functional, and a text-mode version if it is not. You can also force it to install a specific version if you disagree with the one it attempts to install.

When it comes to testing graphics drivers after installation, one way to go about it is the glxinfo program, which displays various information about your OpenGL setup and the capabilities your card and driver currently provide. The alternate, more fun method of testing the video driver is to install something like OpenArena, a Quake III Arena clone that is both actively developed and fun to play with friends who use Windows, Linux, or Mac; you can find more information on their wiki.
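
For example, a quick sanity check that the new driver is actually in use (glxinfo and glxgears are provided by the mesa-utils package on Debian-based systems):

    glxinfo | grep "direct rendering"
    glxinfo | grep "OpenGL renderer"
    # glxgears gives a simple spinning-gears smoke test with an FPS readout
    glxgears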

If you’ve tried all of these solutions and you’re not having any luck getting your card and drivers to work, give us a call here at Pantek and one of our experts will have you up and running in no time.

How important are backups in today’s industry?

The first thing I would like to point out is that all file storage methods degrade over time.

Daily differential and weekly full backups are the only sure way to prevent data loss.
In this post I won’t go into any acronyms or real definitions, because you can always Google that info; I’m going to cover the heart of the matter only.
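
As one minimal sketch of that weekly-full plus daily-differential schedule using GNU tar (the paths and filenames are placeholders you would adapt and drive from cron):

    # Weekly full backup (e.g. Sunday night); start a fresh snapshot file so this is a level-0 dump
    rm -f /var/backups/full.snar
    tar --listed-incremental=/var/backups/full.snar -czf /backups/full-$(date +%F).tar.gz /home /etc

    # Daily differential: work from a copy of the full's snapshot so each run captures
    # everything changed since the last FULL, not since the previous daily
    cp /var/backups/full.snar /tmp/diff.snar
    tar --listed-incremental=/tmp/diff.snar -czf /backups/diff-$(date +%F).tar.gz /home /etc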

Many times I get calls from people who have lost files or entire file systems and are trying to recover their data from the failed medium it was stored on. Sometimes that is a NAS; a SCSI, IDE, SATA, or SAS hard drive; a drive array; or a partition within one of those storage media.

Tape drives are always a good idea, and offsite tape management makes this an even better solution which I’d recommend as the first line of defense against disasters.

The drawback of offsite tape storage is turnaround time: if you require a restore, you may need to drive to get a tape, pay to have it returned, or download the data over a network connection, any of which could cost you significant downtime.

It’s common in today’s industry that people are not tracking hardware against its mean time between failures (MTBF). This is a serious mistake, because you may prevent data loss with a simple hardware maintenance schedule that replaces hard drives every three or four years.
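
One simple way to keep an eye on how long a drive has been in service, assuming smartmontools is installed and the drive reports SMART data (the attribute name below is typical for ATA drives):

    # Power_On_Hours gives a rough idea of the drive's age in service
    smartctl -a /dev/sda | grep -i power_on_hours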

It is a good idea to always have a backup of any mission-critical server, application, device, or whatever. Now that disks are cheaper than tape, there is no reason not to have a backup or a completely fault-tolerant file system.

Let’s review some of the common methods people use for fault tolerance or redundancy:

Raid on disk:
RAID should never be relied on as backup, as RAID controllers also break down, making the disks inaccessible.

Many times people use hardware or software RAID to provide a level of redundancy for their data; in some cases this is the best way to achieve redundancy, and in others it’s not. Here are some examples:

Raid 5:
For fault tolerance, RAID 5 is OK if you have one or more spare drives in the array that are offline and ready to automatically replace a failed drive. Where that is not the case, there may be a total loss of data, and a hardware failure such as a dead controller could also cause a total loss. People often feel safe with this configuration, but I advise against relying on it, as it’s not the safest way to store data.
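
For instance, a Linux software RAID 5 array with a hot spare can be created with mdadm roughly like this (device names are examples only):

    # Three active disks in RAID 5 plus one hot spare that rebuilds automatically on failure
    mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1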

Raid 1:
This is the best option for fault tolerance and should be used in most cases where data is critical and you want to ensure the highest level of RAID redundancy. There are other, non-standard RAID levels that are similar to RAID 1, provide different features, and would also give full redundancy.
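
And a minimal software RAID 1 mirror with mdadm, again with example device names:

    # Two-disk mirror; either disk can fail without data loss
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1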

I’ll repeat: the only real way to be sure your data is safe is to have a full backup that is not stored on the same device you are backing up. Even that could still leave you without data, but it’s your best hope.
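
For example, one common way to keep a copy off the device being backed up is to rsync it to another machine over SSH (the hostname and paths here are placeholders):

    rsync -az /home/ backupuser@backuphost:/backups/home/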