Archive for August, 2008

Recover root Password

August 29, 2008 3 comments

You forgot your root password… Nice work. Now you’ll just have to reinstall the entire operating system. Sadly, I’ve seen more than a few people do exactly that. But it’s surprisingly easy to get into the machine and change the password. This doesn’t work in every case (for example, if you set a GRUB password and forgot that too), but here’s how you do it in a normal situation.

  1. Boot your computer until the GRUB screen shows up.
  2. Press any key other than Enter so that the countdown stops and you stay on the GRUB menu instead of proceeding all the way to a normal boot.
  3. Use the arrow keys to select the kernel entry that you want to boot and press E to edit it.
  4. Use the arrow keys again to highlight the line that begins with kernel, and press E to edit the kernel parameters.
  5. Append the number 1 (one) to the end of the line to request single-user mode.
  6. Press B to boot the system.
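
For reference, after step 5 the edited kernel line would look something like this (the kernel version and root device here are illustrative; yours will differ):

```
kernel /vmlinuz-2.6.18-92.el5 ro root=/dev/VolGroup00/LogVol00 1
```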

The system will boot up in single-user mode and drop you at a root shell prompt. Once there you can run the passwd command and change the password to whatever you like.

sh-3.01# passwd root
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully

Now you can reboot, and the machine will boot up with your new password.

Categories: Security Tags:

Linux Restricted Shell

August 17, 2008 1 comment

This is a typical situation: you created users who were meant to stay in their /home environment, yet they seem to have a knack for poking around all your server directories.

A restricted shell is a Unix shell that has been modified to let the user do fewer things than a normal shell would allow. Restricted shells let you control the user’s environment, permitting only specific admin-approved commands.

rssh is one such restricted shell: instead of giving the user a full login shell, it allows only the following commands (you choose which of them to enable):

scp – secure copy
sftp – secure FTP
cvs – Concurrent Versions System
rsync – remote file synchronization
rdist – remote file distribution

rssh is available through yum in Fedora and apt-get in Debian; you can also get a fresh copy from the official website.

In Fedora:
# yum install rssh

In Debian:
# apt-get install rssh

Now that rssh is installed, note that by default its configuration locks down everything, including any sort of access, so we need to set up the configuration file. The default file is located at /etc/rssh.conf.

For example, say I want to allow only scp and sftp on my server. I’m also leaving some commented lines in place for future use, just in case.
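
A minimal /etc/rssh.conf for that policy might look like this (the allow* directives are rssh’s own; the commented lines are kept for later):

```
# /etc/rssh.conf -- allow only scp and sftp
allowscp
allowsftp
#allowcvs
#allowrsync
#allowrdist
```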


There is no rssh service and the configuration is read on the fly.

The next logical step is to add some users.

# useradd -m -d /home/sara -s /usr/bin/rssh sara

Or, if the user already exists, use usermod to assign the restricted shell.

# usermod -s /usr/bin/rssh sara

Now, let’s say sara tries to connect to the server with ssh or telnet; a message like the following will appear.

This account is restricted by rssh.
Allowed commands: scp sftp

If you believe this is in error, please contact your system administrator.

Connection to localhost closed.

rssh is a simple way to add security to your server, but it is not an unbreakable measure; it is just the start of forging a secure server. Ideally you would combine it with a Unix chroot jail or a custom restriction script written in your favorite programming language.

Just remember to never underestimate the ingenuity of your users.

Good luck!

Categories: Shells Tags:

Linux Out Of Space

August 12, 2008 2 comments

You know the scene: your system is just too slow, processes are queuing up, and you can’t even create a new text file because your computer ran out of disk space.

This is not a system failure or an operating system bug, and yet here you are, desperately trying to find those unused and less important files to delete.

The most obvious action is to start jumping from directory to directory removing files you don’t need, but after a while this becomes a painful task.

Before you start removing files, you need to know how much free space is left and, in case you have a partitioned system, which filesystems are affected. It’s always easier to break a problem into smaller pieces and attack only the significant ones.

Fortunately this is performed very easily in Linux with the df command.

Open a terminal and type df -h to show the disk space usage report. The -h option tells df to format the values in human-readable form.
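
The report looks something like this (devices and sizes are illustrative, not from a real run):

```
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda1             9.7G  2.1G  7.1G  23% /
/dev/hda5             4.9G  4.9G     0 100% /usr
tmpfs                 506M     0  506M   0% /dev/shm
```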

As you can see, the problem lies in the /usr filesystem.

Now let’s have a look at the directories’ used space. What we want to know is exactly which folders are using the most disk space; the du command will help us a lot in this situation.

$ cd /usr
$ du -Sm | sort -n

du shows the estimated disk usage under the current folder. After executing the command you will see a list of the directories ordered by disk space usage.

In the first column you will see the directory size (in megabytes, thanks to -m) followed by the folder name. Because of -S, each directory is counted separately, without its subdirectories, and sort -n puts the biggest offenders at the bottom of the list.
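
As a quick sanity check, you can reproduce the behavior on a scratch directory with files of known size (the paths here are throwaway ones under /tmp):

```shell
# Build a scratch tree: one directory holding ~5 MB, one holding ~1 KB.
mkdir -p /tmp/du_demo/big /tmp/du_demo/small
dd if=/dev/zero of=/tmp/du_demo/big/file bs=1M count=5 2>/dev/null
dd if=/dev/zero of=/tmp/du_demo/small/file bs=1k count=1 2>/dev/null

# Same pipeline as above: per-directory sizes in MB, biggest last.
du -Sm /tmp/du_demo | sort -n

# Clean up the scratch tree.
rm -rf /tmp/du_demo
```

The big directory ends up on the last line of the listing, which is exactly how you spot the offenders on a real filesystem.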

Now that we have narrowed the problem down to just a few folders, the cleanup should cause you no trouble.

Finally, think outside the box: storage media is cheap nowadays, and sometimes there is nothing left to do but go buy another hard drive.

ab – Apache Benchmark

August 8, 2008 4 comments

ab (Apache Benchmark) is a tool for benchmarking your Apache Hypertext Transfer Protocol (HTTP) server. It provides a quick and easy way to evaluate your HTTP serving capabilities: ab floods the server with HTTP requests and measures the time it takes to serve them all.

The benchmark is intended for all available versions of Apache through 2.x.

A very common question is how to install ab and where to get it. The truth is that ab ships with your Apache installation; it is just a companion command of the Apache Web Server. So if you have already installed Apache, you should also have the ab benchmarking tool.

To get ab working, all we need is to type the command followed by the URL we want to test. The command is issued as follows:

# ab -n100 -c10 http://localhost:8080/index.jsp

Let’s see an example:
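
For a run like the one above, ab prints a summary along these lines (the figures are illustrative, not from a real run):

```
Concurrency Level:      10
Time taken for tests:   1.512 seconds
Complete requests:      100
Failed requests:        0
Requests per second:    66.14 [#/sec] (mean)
Time per request:       151.200 [ms] (mean)
Transfer rate:          16.08 [Kbytes/sec] received
```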

The -n parameter tells ab the total number of requests to send to the server; in this case we are sending 100. The -c parameter sets the number of concurrent requests, here 10. The -k option enables the KeepAlive feature, and -t sets a time limit (in seconds) for ab to spend benchmarking.

The number of requests is the most important parameter. The first few times, set it to a prudent level and try different values, increasing until you are satisfied with the benchmark. An ab test consumes the server’s RAM, bandwidth and processor, so if you throw a brutally high number of requests at an underpowered server, it may run out of resources.
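
One cautious way to ramp up is to generate an escalating series of runs. This sketch only prints the ab commands (reusing the example URL from above) so you can review them first and pipe the output to sh when you are ready:

```shell
# Print (not run) ab invocations with an increasing request count.
for n in 100 500 1000 5000; do
    echo ab -n "$n" -c 10 -k "http://localhost:8080/index.jsp"
done
```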

ab is a very simple and useful command-line application, easy to understand and run against almost any kind of Apache server.

As a matter of fact, the ab testing mechanism is a good example of what a Denial of Service (DoS) attack looks like. Of course, ab is not as dangerous or stealthy, and it is very unlikely to actually damage a server, but the principle is the same.

For further information, you can always check the official webpage.

Categories: Apache Tags: ,

Apachetop Monitor

August 5, 2008 2 comments

ApacheTop is a Linux tool designed to monitor Apache Web Server real-time connections and requests. It makes use of the Apache access logs to show meaningful process information.

Monitoring Apache can be tiresome; there is no easy way to get an overview of what your Apache server is really doing. This is where the ApacheTop utility comes in.

You can get the needed files from the livna yum repository, via apt-get on Debian-based systems, or download them directly from its official (and unmaintained) site.


To verify the installation just type apachetop on the command line.

After a successful installation, you can pass parameters to ApacheTop and start the monitoring tool. By default ApacheTop reads the log at /var/log/apache/access.log, but for this particular example I’m not using the default file.

# apachetop -f /etc/http/logs/access_log

This will drop you into the full-screen monitor.


Most of the columns are self-explanatory, but in case they are not, here is what they mean:

REQS – number of requests made to the specified URL.
REQ/S – number of requests per second served by Apache for the specified URL.
KB – kilobytes of data sent to the clients.
KB/S – data transfer rate.

By default, data is kept for 30 seconds; you can change this value with the proper parameter.

Use -H to specify the maximum number of hits to keep in the stats:

# apachetop -H 100 -f /etc/http/logs/access_log

Or use -T to specify how long (in seconds) data is kept:

# apachetop -T 120 -f /etc/http/logs/access_log

ApacheTop also gives you some simple filters: URLs, REFERRERS and HOSTS.

Much like its cousin top, you enter a command by hitting the appropriate key. From within the monitor screen, hit f to see the available options, then hit a to add a filter.

You may have already noticed the little asterisk in the ApacheTop interface. Move it up and down with the arrow keys to the desired line, then hit the right arrow key to see the details for that request; these include the referrers and the IPs of the clients making it. To go back, use the left arrow key.

As you can see, ApacheTop is very simple, and although as of October 2005 it is no longer maintained by its original developer, it is still a very useful application for log analysis. Give it a try.

Categories: Apache Tags:

Setting Up a CVS Repository

August 4, 2008 1 comment

I use Concurrent Versions System (CVS) for almost every important source code file. It acts as a backup copy for valuable information and is also very useful when multiple developers are working on the same project.

A concurrent versions system is used to manage source code changes over time and across multiple developers. These days CVS is one of the most widely used source code management systems for software development.

Although there are many version control applications that do the job well, CVS and Subversion are the ones preferred by most people.

The next steps will get you started creating a CVS repository, with the plus that all the information will travel through a secure tunnel: SSH.

The requirements are fairly simple: a computer with SSH access and cvs installed. For this example, let’s suppose we know the IP address of our server.

As always, yum and apt-get will help you get the needed files (if you don’t already have them).

On Debian based systems:
$ apt-get install cvs

On Fedora based systems:
$ yum install cvs

You can check the installation with the following command:

$ cvs -v

It is strongly recommended to use CVS version 1.11 or higher. Previous versions contain bugs on some Intel architectures.

First of all you need to create a default cvs group and user.

$ groupadd cvs
$ useradd -g cvs cvsadmin
$ passwd cvsadmin
$ su - cvsadmin

Now create the folder where you are going to put the source files. It is best to choose a location with plenty of space for the backups.

# mkdir /opt/cvsroot

It is time to define the CVS repository pointing to the newly created folder.

# cvs -d /opt/cvsroot init

This command will create a folder named CVSROOT inside /opt/cvsroot and fill it with the repository’s administrative files. It is a useful practice to take a look at their content, but there is a 99% chance you will never have to modify them.

Your CVS repository is up and ready, but nobody is using it yet. Your fellows John, Jane and Peter are desperately asking you to back up their source code.

So let’s add some users. Remember to assign them to the cvs group; otherwise they will get a “Permission denied” error.

$ useradd -g cvs john
$ useradd -g cvs jane
$ useradd -g cvs peter

$ passwd john
$ passwd jane
$ passwd peter

After this you will need to call John, Jane and Peter and tell them to run the following commands on their machines (CVS_RSH=ssh makes the :ext connection go over SSH):

For John:
$ export CVS_RSH=ssh
$ export CVSROOT=:ext:john@

For Jane:
$ export CVS_RSH=ssh
$ export CVSROOT=:ext:jane@

For Peter:
$ export CVS_RSH=ssh
$ export CVSROOT=:ext:peter@

Note the IP address pointing to the CVS server. Every CVS user has the ability to add new data to the repository.

That’s it, you have set up a CVS server! But how do you use it?

Here is a brief example. Suppose John wants to upload the stable version of the project /home/SourceCode/:

# cd /home/SourceCode/
# cvs import -m "This is the stable version" SourceCode start version1_0

The template is the following:

# cvs import -m "log message" <module> <vendortag> <releasetag>

But then Jane and Peter want to get a copy of what John uploaded to the CVS.

# cd /home/
# cvs checkout SourceCode

Checking out a project in CVS creates a working copy. There is also a shortcut for the command cvs checkout: just type cvs co.

Then Jane finds a bug and edits the source file index.html. To upload her changes she just has to issue the following commands from her working copy:

# cd /home/SourceCode/
# cvs commit -m "Fixed a bug" index.html

And someday Jane calls you saying she screwed up her local copy of index.html and now the code is a mess. Do not worry: to throw away her local changes and recover the last committed version from CVS, she just has to update the file with the -C (clean copy) flag:

# cd /home/SourceCode/
# cvs update -C index.html

This has been a fun little example. You can delete all the /opt/cvsroot/ stuff if you like now. Try putting something real into CVS, just to get yourself using it regularly.

CVS has fully compatible versions for Unix, Linux, Windows and MacOS. There are also simpler, web-based interfaces. When you’re ready, here are some other links:

CVS Official Web
CVS Official Manual

Categories: CVS Tags:

HTTP Load Balancing

August 2, 2008 Leave a comment

The following information is intended as a simple and basic tutorial on how to implement a load balancing system.

Load balancing is a technique to spread work between two or more computers, network links, CPUs, hard drives, or other resources, in order to get optimal resource utilization, throughput, or response time. – Wikipedia.

With load balancing we can get the power of a high-availability cluster out of cheap machines. It also becomes easier to scale the system horizontally: just add more servers as your needs increase.

Of course, there exist hardware solutions specifically designed to take care of load balancing, Citrix and F5 being the most popular among datacenters, but these are expensive and mostly targeted at the enterprise market, not the home user.

Fortunately Linux provides us with a powerful tool to develop load balancing: IPVS (Internet Protocol Virtual Server).

The setup procedure is fairly easy. First you need to define the machine that is going to act as the main server and “father” of the cluster. Ideally this should be the machine with the most processing power.

Linux ships with IPVS support by default, but in case you don’t have it, just yum or apt-get the ipvsadm tool.

In this example we are going to set up a cluster with one main server and four member machines:

Main cluster server:
cluster member A:
cluster member B:
cluster member C:
cluster member D:

Open a terminal on the machine that has IPVS installed, then create the virtual server and assign the child machines as follows:

$ ipvsadm -A -t -s rr -p5200
$ ipvsadm -a -t -r -m
$ ipvsadm -a -t -r -m
$ ipvsadm -a -t -r -m
$ ipvsadm -a -t -r -m

A little explanation of the used parameters:

A – registers a new virtual service; this is the main server.
t – use the TCP protocol, pointing at the IP address and port of the virtual service.
s – select the scheduling algorithm used to distribute connections.
rr – Round Robin scheduling (distributes the load equally among all servers).
p – make TCP connections persistent for the specified number of seconds; if empty, 300 is assigned by default.
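
To make the flags concrete, here is what a complete session might look like with hypothetical addresses (virtual IP 192.168.0.10, members 192.168.0.11 through 192.168.0.14, all on port 80; -m puts the members behind NAT):

```
$ ipvsadm -A -t 192.168.0.10:80 -s rr -p 5200
$ ipvsadm -a -t 192.168.0.10:80 -r 192.168.0.11:80 -m
$ ipvsadm -a -t 192.168.0.10:80 -r 192.168.0.12:80 -m
$ ipvsadm -a -t 192.168.0.10:80 -r 192.168.0.13:80 -m
$ ipvsadm -a -t 192.168.0.10:80 -r 192.168.0.14:80 -m
```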

That’s it, we have a fully functional mini cluster on port 80 (HTTP).

But what if, someday, one of your machines burns out? Don’t panic.

To delete a “child” machine, just invoke your godlike root powers and type the following command:

$ ipvsadm -d -t

Or if you want to completely remove the whole cluster, then type the following command:

$ ipvsadm -D -t

The lowercase d deletes the specified real server, while the uppercase D deletes the virtual service along with all the servers assigned to it.

Categories: Clusters Tags: