When a List of Files is Too Long for a Typical “rm” Command

I was on a client's reporting server and noticed that an "ls" of their report logs took about 10 minutes. The directory had a log for every report run since June 2010, which is around 1.3 million files!

Here's a transcript of the error:

[root@morpheus log]# pwd
/home/morpheus/tools/birt-runtime-2_0_1/Report Engine/log
You have new mail in /var/spool/mail/root
[root@morpheus log]# rm *
-bash: /bin/rm: Argument list too long
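The shell expands * into an argument list longer than the kernel allows, which is why rm itself never even runs. One common workaround is to let find feed the names to rm in batches; here is a self-contained sketch against a throwaway directory (a thousand files standing in for the 1.3 million):

```shell
#!/bin/bash
# Reproduce the situation in miniature with a throwaway directory,
# then clear it with find, which never builds one giant argument list.
demo_dir=$(mktemp -d)
for i in $(seq 1 1000); do touch "$demo_dir/report_$i.log"; done

# xargs batches the names into as many rm invocations as needed;
# on GNU/BSD find, "-delete" does the same job with no rm at all.
find "$demo_dir" -type f -name '*.log' -print0 | xargs -0 rm --

echo "files left: $(find "$demo_dir" -type f | wc -l)"   # → files left: 0
```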


Using MySQL Queries to Dump/Format Data to Delimited Files

It is often useful to use MySQL queries to export data from tables and write it to files. MySQL lets you customize many of the details of these exports, which should cover most situations.

In this example, we are selecting data from a table, applying some formatting, and writing it to a file. Note that INTO OUTFILE writes the file on the database server host and requires the FILE privilege.

SELECT code, name, REPLACE(REPLACE(REPLACE(comment,"\r\n\r\n",'||'),"\r\n",'||'),"\n",'||') 
INTO OUTFILE '/tmp/20111222_practice_comments.csv' 
FIELDS TERMINATED BY ','  
OPTIONALLY ENCLOSED BY '"' 
ESCAPED BY '\\' 
LINES TERMINATED BY '\n' 
FROM practice_table;


RSYNC a File Through a Remote Firewall

One of my recent tasks was to set up a new automatic backup script, which dumps the MySQL database on a remote host at a regular time; later, the dump is rsync'd from the backup server through a remote firewall. I must say I was a little surprised to discover that the finished script and the configuration that goes along with it were actually quite simple and easily repeatable. I was able to replicate the process for three sites very quickly and will easily be able to scale it to many more when necessary.

SSH Tunneling and SSH Keys

To perform a process on a remote firewalled host, you first need to set up keys that allow the trusted backup server to access the intermediate host. You must also set up a key that allows the intermediate host to access the firewalled host.

First, let's generate a public key on the backup server, if we don't already have one. Be sure to use an empty passphrase, since this is an unattended script.

[backup@lexx log]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/backup/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /backup/.ssh/id_dsa.
Your public key has been saved in /backup/.ssh/id_dsa.pub.
The key fingerprint is:
3d:48:9c:0f:46:dc:da:c3:a6:19:82:63:b1:18:91:62 backup@lexx

By default, the key will be located in ~/.ssh/id_dsa.pub. Copy the contents of this file to the clipboard; you will need it to get the remote server to trust the backup server.

Log on to the remote external server via ssh. On this server we will configure it to trust the backup server.

[backup@lexx ~]# ssh user@remotehost.com
user@remotehost.com's password: 
Last login: Thu Jul 14 22:57:58 2011 from 69.73.94.214
[user@remotehost ~]# ls -al .ssh
total 28
drwx------  2 user user 4096 2011-07-14 22:05 .
drwxr-x--- 12 user user 4096 2011-07-14 21:54 ..
-rw-------  1 user user 3024 2011-07-14 21:57 authorized_keys2
-rw-------  1 user user  668 2010-10-27 23:52 id_dsa
-rw-r--r--  1 user user  605 2010-10-27 23:52 id_dsa.pub
-rw-r--r--  1 user user 5169 2010-10-21 13:01 known_hosts

If the authorized_keys2 (or similarly named) file does not yet exist, create it. Open the file in your text editor of choice and paste in the key you copied from the id_dsa.pub file on the backup server.

To make the remote server recognize the newly added key, run the following:

[user@remotehost ~]# ssh-agent sh -c 'ssh-add < /dev/null && bash'

Now we can make sure that the key works as intended by running the following command, which will ssh into the server and execute the uptime command:

[backup@lexx ~]$ ssh user@remotehost.com uptime
 23:57:17 up 47 days,  4:11,  1 user,  load average: 0.54, 0.14, 0.04

Since we got the output of the uptime command without a password prompt, the key is working as intended.

Now we repeat the ssh key process, this time between the remotehost server and the firewalled server.

[user@remotehost ~]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/user/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /user/.ssh/id_dsa.
Your public key has been saved in /user/.ssh/id_dsa.pub.
The key fingerprint is:
3d:48:9c:0f:46:dd:df:c3:a6:19:82:63:b1:18:91:62 user@remotehost

Copy the contents of .ssh/id_dsa.pub on the remote external server to the firewalled server, add it to the authorized_keys file there, and run:

[user@firewalledserver ~]# ssh-agent sh -c 'ssh-add < /dev/null && bash'

Now you should be able to pass the rsync command from the backup server all the way through the remote firewall to the firewalled server.

This can be tested with the following command, which tunnels through the firewall and executes the uptime command on the internal server:

[backup@lexx ~]$ ssh user@remotehost.com ssh user@firewalledserver uptime
 23:52:17 up 41 days,  4:12,  1 user,  load average: 0.50, 0.13, 0.03

RSYNC The Data From the Backup Server, Through The Firewall

Now that we've got all of our keys set up, most of the work has been done. I'm assuming you have a cron job on the internal server which dumps the MySQL database at a specific time. Schedule your rsync command late enough that the cron job has had time to dump the database.

Here is the rsync command which reaches through the firewall to download the remote MySQL database dump. The -z flag enables compression, which can significantly speed up the process.

[backup@lexx ~]$ rsync -avz -e "ssh user@remotehost.com ssh" user@firewalledserver:/home/user/rsync-backup/mysqldump.sql /home/backup/

This will create a hidden file named something like .mysqldump.sql.NvD8D, which stores the data until the sync is complete. After the sync completes, you will see a file named mysqldump.sql in the /home/backup/ folder.

Just set up the necessary cron scripts to make sure everything happens at the right time, possibly add some logging so you can see what has happened, and you're done!

Here's an example of what I did on the backup server to call the backup script. It appends both STDOUT and STDERR to the /var/log/remote_backuplog file each time it runs. It also runs the script as the backup user, so the files it generates have the correct permissions for the backup user to access.

01 6 * * * backup /home/backup/run_backups.sh >> /var/log/remote_backuplog 2>&1

Here is what my rsync script run_backups.sh looks like.

#!/bin/bash
 
echo "running backups"
# print the date into the logfile
date
 
# backup server 1
echo "backing up server1"
ssh user@externalserver1 ssh user@internalserver1 ls -l /home/user/rsync-backup/mysqldump.sql
/usr/bin/rsync -avz -e "ssh user@externalserver1 ssh" user@internalserver1:/home/user/rsync-backup/mysqldump.sql /home/backup/server1/
 
# backup server 2
echo "backing up server2"
ssh user@externalserver2 ssh user@internalserver2 ls -l /home/user/rsync-backup/mysqldump.sql
/usr/bin/rsync -avz -e "ssh user@externalserver2 ssh" user@internalserver2:/home/user/rsync-backup/mysqldump.sql /home/backup/server2/
 
# backup server 3
echo "backing up server3"
ssh user@externalserver3 ssh user@internalserver3 ls -l /home/user/rsync-backup/mysqldump.sql
/usr/bin/rsync -avz -e "ssh user@externalserver3 ssh" user@internalserver3:/home/user/rsync-backup/mysqldump.sql /home/backup/server3/
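Since the three stanzas differ only in the site number, they could be collapsed into a loop. Here is a dry-run sketch using the same placeholder hostnames, which prints each rsync command instead of executing it:

```shell
#!/bin/bash
# Dry-run rewrite of run_backups.sh: one loop instead of three copied stanzas.
# Hostnames and paths are the same placeholders as above; remove the "echo"
# in front of rsync to run the backups for real.
plan_file=$(mktemp)
for i in 1 2 3; do
    echo "backing up server$i"
    echo /usr/bin/rsync -avz -e "ssh user@externalserver$i ssh" \
        "user@internalserver$i:/home/user/rsync-backup/mysqldump.sql" \
        "/home/backup/server$i/"
done | tee "$plan_file"
```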

Quick and Easy Regular Expression Command/Script to Run on Files in the Bash Shell

I often find it necessary to run regular expressions not just on one file, but on a whole range of files. There are perhaps dozens of ways this can be done, with varying levels of understanding required.

The simplest way I have encountered uses the following syntax:

perl -pi -e "s/<find string>/<replace with string>/g" <files to replace in>

Here is an example where I replace the IP address in a range of report templates with a different IP address:

perl -pi -e "s/mysql:\/\/192.168.2.110/mysql:\/\/192.168.2.111/g" $reportTemplateLocation/*.rpt*

Basically, I am looking for lines which contain mysql://192.168.2.110 and replacing that with mysql://192.168.2.111.

Here is an example of a bash script I call changeReportTemplateDatabase.sh, which I wrap around that command to accomplish the same task with more elegance:

#!/bin/bash
#
# @(#)$Id$
#
# Point the report templates to a different database IP address.
reportTemplateLocation="/home/apphome/jboss-4.0.2/server/default/appResources/reportTemplates";

arg0=$(basename "$0")

error()
{
    echo "$arg0: $*" 1>&2
    exit 1
}
usage()
{
    echo "Usage: $0 -o <old-ip-address> -n <new-ip-address>";
}

vflag=0
oldip=
newip=
while getopts hvVo:n: flag
do
    case "$flag" in
    (h) usage; exit 0;;
    (V) echo "$arg0: version 0.1 8/28/2010"; exit 0;;
    (v) vflag=1;;
    (o) oldip="$OPTARG";;
    (n) newip="$OPTARG";;
    (*) usage; exit 1;;
    esac
done
shift $((OPTIND - 1))

if [ "$oldip" = "" ]; then
    usage;
    exit 1;
fi
if [ "$newip" = "" ]; then
    usage;
    exit 1;
fi

echo "$0: Changing report templates to use the database at $newip from $oldip";
perl -pi -e "s/mysql:\/\/$oldip/mysql:\/\/$newip/g" "$reportTemplateLocation"/*.rpt*

Usage of the script is as simple as the command below. It will change every database reference in the report templates in the directory referenced by the variable reportTemplateLocation to the new value.

./changeReportTemplateDatabase.sh  -o 192.168.2.110 -n 192.168.2.111

A further improvement, which may be useful to some, would be to make the directory a command-line flag as well.

Something You Should Know About Batteries

Batteries can swell and deform over time. This is a lesson I learned by chance just yesterday, when taking my laptop in for a repair.

I had been having problems with my MacBook Pro for the past week or so. The touchpad was clicking on just about everything, even as I was typing, causing the text cursor to jump to wherever the mouse was; while moving the pointer it would click on everything in its path. As you might imagine, this made virtually everything impossible to get done without becoming quite frustrated.

I took it to the Apple store for diagnostics, and the first thing the technician checked was the battery. He noticed that the battery door was not tightly closed. He pulled the battery out and set it on the counter, discovering that it wasn't flat, as it should be; there was actually a bulge in it.

The deformed battery put pressure on the touchpad, causing it to misfire clicks almost constantly. According to the technician, 90% of the MacBook Pros he sees with touchpad problems are due to battery deformation.

Replacing the battery fixed the problem completely. This can happen to anything with a battery. It has even happened to larger battery backup systems, causing batteries to become stuck in racks.

The technician told me a story about a 911 emergency center with a battery backup array on a rack. Over time these batteries enlarged; when it came time to replace them, they realized they couldn't even get them out of the racks anymore! They had to pay professionals a high price to safely remove the batteries.

So the lesson from this is to be aware that not only do batteries have a limited lifespan and eventually fail to hold a good charge, but they may also become disfigured and damage or disable equipment. If left unchecked for long enough, they might even break open and leak chemicals.

How to Get Started Freelancing on the Web

If you are a creative, self-motivated, critical, detail-oriented individual who wants to learn how to make a living designing and developing websites and/or web applications, then this video is a good starting place for you! You don't need a whole lot of money to get started, and you don't necessarily need a university degree. What you need comes from the drive in your heart to succeed and persevere; these are qualities nobody can instill in you besides yourself.

Starting out in the big Wide Web

Speaker: Anna Debenham
Conference: Heart and Sole



As somebody who had a good university education, I can honestly say that while it often helps, it is not absolutely essential. Aside from the basic programming, math, and writing skills I learned at university, most of the highly specialized skills I have were figured out either on the job or on my own time. Almost all of the skills I've gained have been due to my own perseverance and motivation.

Just going to university isn't enough to make you successful, though it can often land you one of those government jobs where you sit in meetings all day; but who really wants that? Oftentimes, going to university can leave you with a load of debt, which can lead to a lifetime of interest payments.

Whatever your inclination, if you do wish to get into professional web design and development, it would be wise to consider freelancing early on. The more real-world experience you have, the more valuable you will be to your customers.

If you come out of school without any real-world experience, you may be surprised to find that you have a long way to go before you are ready for non-academic projects.

Syncing a Forked git Repository With a Master Repository’s Changes

One task which is virtually impossible to do properly through the GitHub web interface is syncing a forked repository. Fortunately, there is a fairly straightforward way to merge in the changes via the command-line interface.

Let's say, for example, that a few days back we created a fork called chriscase/friendika off of the main branch of the repository friendika/friendika, using the GitHub web interface.

Since we created the chriscase/friendika fork, commits have been made on the main branch we forked from. We need to get everything back in sync now, so we'll be working on the latest code.

To do this synchronization, let's log into the server we're going to be doing our work on and clone the fork locally, so we have a local copy to work with. I'm going to be doing my work in ~/friendikademo.openmindspace.org/, which I use as my test/dev area.

This is how we pull in the files from the GitHub repository:

[bunda]$ cd friendikademo.openmindspace.org
[bunda]$ git clone https://chriscase@github.com/chriscase/friendika.git .
Cloning into ....
Password:
remote: Counting objects: 9907, done.
remote: Compressing objects: 100% (4047/4047), done.
remote: Total 9907 (delta 6324), reused 8563 (delta 5320)
Receiving objects: 100% (9907/9907), 5.20 MiB | 1.00 MiB/s, done.
Resolving deltas: 100% (6324/6324), done.

Now we're going to link our local repository with the master friendika/friendika repository we want to pull changes from. Then we will fetch the code from the master repository.

[bunda]$ git remote add upstream git://github.com/friendika/friendika.git
[bunda]$ git fetch upstream
remote: Counting objects: 207, done.
remote: Compressing objects: 100% (151/151), done.
remote: Total 157 (delta 123), reused 0 (delta 0)
Receiving objects: 100% (157/157), 21.15 KiB, done.
Resolving deltas: 100% (123/123), completed with 45 local objects.
From git://github.com/friendika/friendika
* [new branch]      2.1-branch -> upstream/2.1-branch
* [new branch]      master     -> upstream/master
From git://github.com/friendika/friendika
* [new tag]         2.1-stable -> 2.1-stable

Now that we've got a local copy of the chriscase/friendika fork, linked it to the master repository at friendika/friendika, and fetched the code from the master repository, we need to sync our chriscase/friendika fork with the master repository.

We do the actual merge by running the merge command:

[bunda]$ git merge upstream/master
Updating a05b2b4..5e02519
Fast-forward
addon/facebook/LICENSE                             |  662 --------------------
addon/facebook/facebook.php                        |  213 ++++++-
addon/statusnet/statusnet.php                      |   27 +-
addon/twitter/twitter.php                          |   89 ++-
boot.php                                           |   36 +-
database.sql                                       |    4 +-
include/acl_selectors.php                          |   22 +
include/bbcode.php                                 |   10 +-
include/html2bbcode.php                            |   53 ++-
include/items.php                                  |    6 +-
include/notifier.php                               |   18 +-
include/oembed.php                                 |    5 +-
index.php                                          |    9 +-
mod/cb.php                                         |   24 +
mod/follow.php                                     |   27 +-
mod/item.php                                       |   26 +-
mod/network.php                                    |    5 +-
mod/profile.php                                    |    6 +-
mod/pubsub.php                                     |    4 +-
mod/salmon.php                                     |    2 +-
.../tiny_mce/plugins/bbcode/editor_plugin_src.js   |    2 +-
update.php                                         |    7 +
util/strings.php                                   |   44 +-
util/typo.php                                      |    2 +-
view/de/jot-header.tpl                             |    6 +-
view/de/jot.tpl                                    |    6 +-
view/en/jot-header.tpl                             |    6 +-
view/en/jot.tpl                                    |    6 +-
view/fr/jot-header.tpl                             |    7 +-
view/fr/jot.tpl                                    |    7 +-
view/it/jot-header.tpl                             |    6 +-
view/it/jot.tpl                                    |    7 +-
view/theme/duepuntozero/ff-16.jpg                  |  Bin 0 -> 644 bytes
view/theme/duepuntozero/lock.cur                   |  Bin 0 -> 4286 bytes
view/theme/duepuntozero/login-bg.gif               |  Bin 0 -> 237 bytes
view/theme/duepuntozero/style.css                  |   17 +-
view/theme/loozah/style.css                        |   11 +
37 files changed, 587 insertions(+), 795 deletions(-)
delete mode 100644 addon/facebook/LICENSE
create mode 100644 mod/cb.php
create mode 100644 view/theme/duepuntozero/ff-16.jpg
create mode 100755 view/theme/duepuntozero/lock.cur
create mode 100644 view/theme/duepuntozero/login-bg.gif

If there were no conflicts, and there shouldn't be since we haven't checked in any changes yet, the merge should take place without incident.

Next we need to push the merged version of our code back to our chriscase/friendika fork. This is done with the following command:

[bunda]$ git push origin master
Password:
Total 0 (delta 0), reused 0 (delta 0)
To https://chriscase@github.com/chriscase/friendika.git
a05b2b4..5e02519  master -> master

Now that this has been done, you're ready to start coding on the project!
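The whole flow can be rehearsed locally with two throwaway repositories, where upstream stands in for friendika/friendika and fork for chriscase/friendika; all names and paths below are illustrative:

```shell
#!/bin/bash
# Self-contained rehearsal of the fork-sync flow using two local repositories.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Create the master repository with one commit.
git init -q -b master upstream
git -C upstream config user.email demo@example.com
git -C upstream config user.name demo
echo "version 1" > upstream/boot.php
git -C upstream add boot.php
git -C upstream commit -qm "initial import"

# "Fork" it by cloning, as the GitHub fork button would.
git clone -q upstream fork

# Meanwhile, upstream moves on without us.
echo "version 2" > upstream/boot.php
git -C upstream commit -qam "upstream change"

# The sync dance: add the upstream remote, fetch, merge.
cd fork
git remote add upstream ../upstream
git fetch -q upstream
git merge -q upstream/master

cat boot.php   # → version 2
```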


How To Execute a Script After a Running Process Completes

Most people who are familiar with Linux realize that there are ways of chaining processes to run one after another. Typically this is done by writing a script, or by using && to daisy-chain additional commands on the command line.

There is, however, another way to do this if you've already issued a command and want to add another command after the original has started. This is especially useful if you're unzipping, say, a 15-gigabyte database dump and you want the import to begin immediately after the decompression is complete.

Here's an example of what would happen if I were entering the commands manually.

Macintosh:~ chriscase$ scp user@hostname.com:archive/database.sql.gz .
Macintosh:~ chriscase$ gunzip database.sql.gz
Macintosh:~ chriscase$ mysql -u dbusername -pdbpassword dbname < database.sql

Since I'm not going to stay glued to the console for the entire duration of this process, I either need to write a script or figure out another technique to keep things moving along.

As it turns out, there is a very simple way to accomplish this with the wait command, which waits until a specified process is complete before executing the next command.

Here's an example of how this could be used if you wanted to add the last two steps after the scp from the above example had already begun.

Macintosh:~ chriscase$ scp user@hostname.com:archive/database.sql.gz .

Once the download has kicked off, you can pause the process with [ctrl-z] and then issue the command [bg], which resumes the paused process in the background. Now, to chain the other processes afterward, you can do the following.

Macintosh:~ chriscase$ wait %1 && gunzip database.sql.gz && mysql -u dbusername -pdbpassword dbname < database.sql

The above command will wait until the scp is done, then use gunzip to decompress the file and mysql to import the database dump file. Now that you've done this, you can go off and do something else, confident that your database will be done importing in a few hours.
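One caveat: wait only knows about children of the current shell, and inside a script $! takes the place of the interactive %1 jobspec. Here is a miniature rerun of the pattern, with sleep standing in for the long scp:

```shell
#!/bin/bash
# sleep stands in for the long-running scp; the follow-up step is chained
# with wait so it starts only after the background job exits.
# (In a script, $! replaces the interactive %1 jobspec.)
marker=$(mktemp)

( sleep 2; echo "download done" > "$marker" ) &   # background "download"
wait $! && echo "import starting: $(cat "$marker")"
```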

Using Scroogle as Your Default Search on Opera 11

A simple step you can take to protect your privacy is to use scrapers like Scroogle, the Google scraper. With this search tool, you can search Google's index without divulging your search patterns to them, protecting your privacy so your personal information is not sold to Google's customers.

By default, Scroogle is not available in the list of search engines on Opera, or most browsers for that matter, so you must take one simple step to add and enable it under your Opera preferences. After you've enabled Scroogle, if you've selected it as your default search, you can simply type the search term in the address box.

For example, when I type friendika into the address bar, Opera offers to search for Friendika via Scroogle. Of course, you can also type the term in the search box on the right-hand side of the address bar.

This is just one simple thing you can do to significantly improve your privacy.

Restrict a Linux User’s Access: Only Allowing SCP/SFTP, no SSH

The standard techniques for restricting a Linux user account do not allow for file transfers to/from the user's home directory. In my experience, it is useful to have certain account types which are only allowed to upload/download files from their home directory, but not log in and run shell commands.

This is easy to do with a shell called rssh (restricted secure shell), but you must first install it, because it does not typically come packaged with most Linux distributions.

Installing RSSH

Locate the most appropriate package for your distribution of Linux at the download site. Once you have located the RPM, do the following steps, substituting your chosen package for the RPM shown.

[root@Internal ~]# wget http://packages.sw.be/rssh/rssh-2.3.2-1.1.el3.rf.x86_64.rpm
--2010-10-11 20:36:21--  http://packages.sw.be/rssh/rssh-2.3.2-1.1.el3.rf.x86_64.rpm
Resolving packages.sw.be... 85.13.226.40
Connecting to packages.sw.be|85.13.226.40|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://rpmforge.sw.be/redhat/el3/en/x86_64/rpmforge/RPMS/rssh-2.3.2-1.1.el3.rf.x86_64.rpm [following]
--2010-10-11 20:36:21--  http://rpmforge.sw.be/redhat/el3/en/x86_64/rpmforge/RPMS/rssh-2.3.2-1.1.el3.rf.x86_64.rpm
Resolving rpmforge.sw.be... 85.13.226.40
Reusing existing connection to packages.sw.be:80.
HTTP request sent, awaiting response... 200 OK
Length: 45053 (44K) [application/x-rpm]
Saving to: “rssh-2.3.2-1.1.el3.rf.x86_64.rpm”
100%[====================================================================================================================================================>] 45,053      94.6K/s   in 0.5s
 
2010-10-11 20:36:22 (94.6 KB/s) - “rssh-2.3.2-1.1.el3.rf.x86_64.rpm” saved [45053/45053]
 
[root@Internal ~]# rpm -ivh rssh-2.3.2-1.1.el3.rf.x86_64.rpm
warning: rssh-2.3.2-1.1.el3.rf.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 6b8d79e6
Preparing...                ########################################### [100%]
1:rssh                   ########################################### [100%]

Updating Access Permissions

Now you should be able to set a user's login shell to rssh. Here is what the original line in /etc/passwd will usually look like.

joe:x:501:501::/home/joe:/bin/bash

This is what the updated line will look like.

joe:x:501:501::/home/joe:/usr/bin/rssh
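Hand-editing /etc/passwd works, but usermod -s /usr/bin/rssh joe (or chsh) makes the same change more safely. Either way, the edit can be rehearsed on a scratch copy of the line first; the entry below mirrors the example above:

```shell
#!/bin/bash
# Demonstrate the shell swap on a scratch copy of the passwd entry,
# so nothing on the real system is touched.
passwd_copy=$(mktemp)
echo 'joe:x:501:501::/home/joe:/bin/bash' > "$passwd_copy"

# Replace only the login-shell field (the last, colon-separated one).
sed -i 's|^\(joe:.*:\)/bin/bash$|\1/usr/bin/rssh|' "$passwd_copy"

cat "$passwd_copy"   # → joe:x:501:501::/home/joe:/usr/bin/rssh
```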

What Happens if the User Attempts to SSH in After Access is Restricted

Now if joe attempts to log in via SSH, the following will occur:

[root@Internal ~]# ssh joe@localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
RSA key fingerprint is b5:39:02:23:01:a5:ff:b9:c1:aa:01:a9:69:21:a4:e0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
joe@localhost's password: 
 
This account is restricted by rssh.
This user is locked out.
 
If you believe this is in error, please contact your system administrator.
 
Connection to localhost closed.