Getting the Last Modification Timestamp of a File with Stat

If we want to get just the date modified for a file, in a format of our choosing, we can do so with a utility called stat.

The syntax is as follows:

stat -f <format> -t "<timestamp format>" <path to file>

In this example, we are printing just the date modified, in the format YYYYMMDD_HHMMSS.

stat -f "%Sm" -t "%Y%m%d_%H%M%S" filename.txt

We are using the -f "%Sm" flag to specify that we want to print only the date modified. The -t "%Y%m%d_%H%M%S" flag sets the date format.

In my example, the output was:

20121130_180221

This translates to November 30, 2012 at 18:02:21.
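Note that the -f and -t flags above are the BSD/macOS form of stat; GNU stat on Linux uses different options. As a rough equivalent on a GNU/Linux system (a sketch, not from the original post), GNU date can read a file's modification time directly with its -r flag:

date -r filename.txt +"%Y%m%d_%H%M%S"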

Using the Linux Command Line to Find and Copy A Large Number of Files from a Large Archive, Preserving Metadata

One of my recent challenges was to go through an archive on a NAS, find all of the .xlsx files, and copy them to a specified folder while preserving as much of the file metadata (date created, folder tree, etc.) as possible. After this copy, another script will go through them and rename the files using the metadata; they will then be processed by an application which uses the name of each file in its process.

The part I want to share here is finding the files and copying them to a folder with metadata preserved. This is where the power of the find utility comes in handy.

Since this is a huge archive, I want to first produce a list of the files, so that I can break the job into two steps. The following command produces the list and writes it into a text file. I am running find against a volume called data, mounted in my Volumes folder.

find /Volumes/data/archive/2012 -name '*.xlsx' > ~/archive/2012_files.txt

Now that the list is saved in a text file, I want to copy the files in the list to my archive folder, preserving the file metadata and path information. The cpio utility accepts the paths of the files to copy from stdin, then copies them to my archive folder.

cat ~/archive/2012_files.txt | cpio -pvdm ~/archive
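One caveat worth noting, which the original post does not cover: a newline-delimited list breaks if any filename contains a newline, and some tools also mishandle spaces. If your find and cpio support null-delimited input, as GNU cpio and bsdcpio do via -print0 and -0, a safer variant of the same two steps might look like this:

find /Volumes/data/archive/2012 -name '*.xlsx' -print0 > ~/archive/2012_files.txt

cpio -0pvdm ~/archive < ~/archive/2012_files.txt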

Appending to a Remote File via SSH

Most Linux users know how to copy and overwrite a file from one server to another, but it can also be useful to append directly to a remote file, without having to log in to the remote server and make the changes manually. This does not appear to be possible with the commonly used scp utility; however, there is a way to do it with ssh, and it’s actually quite simple.
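The post excerpt ends here. As a sketch of the technique it describes, piping local output into a remote cat opened in append mode does the job; the filenames here are hypothetical:

cat local_notes.txt | ssh user@remotehost.com "cat >> /home/user/remote_notes.txt"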

Fixing Performance Problems on Your JBoss Web Apps By Diagnosing Blocked Thread Issues

I was once perplexed by a bizarre performance issue I encountered at seemingly random intervals in an application I help to maintain. The application kept freezing up, without any log messages to use for diagnosis. This was very frustrating, because it meant the application server typically had to be restarted manually to restore service.

After a bit of research, I learned of thread blocking as a potential performance issue. Since I was fairly certain that the database was functioning within acceptable parameters, and that the server had ample CPU and memory to handle the load, I sought to determine whether thread blocking was the problem.

I started by simply running a twiddle command to dump the threads whenever this performance problem was reported. This showed that BLOCKED threads were indeed the cause.
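The post excerpt ends here. For reference, on JBoss 4.x a thread dump like the one described is typically produced with the bundled twiddle tool, along these lines (a sketch; the install path and output location are assumptions):

cd /opt/jboss/bin

./twiddle.sh invoke "jboss.system:type=ServerInfo" listThreadDump > /tmp/threaddump.html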

When a List of Files is Too Long for a Typical “rm” Command

I was on a client’s reporting server and noticed that an “ls” of their report logs took about 10 minutes. The directory had a log for every report run since June 2010, which is around 1.3 million files!

Here’s a transcript of the error:

[root@morpheus log]# pwd
/home/morpheus/tools/birt-runtime-2_0_1/Report Engine/log
You have new mail in /var/spool/mail/root
[root@morpheus log]# rm *
-bash: /bin/rm: Argument list too long

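The post excerpt ends here. For reference, the usual workaround is to stream the file names to rm in batches rather than expanding them all at once on the command line, for example with find and xargs (a sketch, not from the original post; the *.log pattern is an assumption):

find . -maxdepth 1 -type f -name '*.log' -print0 | xargs -0 rm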

RSYNC a File Through a Remote Firewall

One of my recent tasks was to set up a new automatic backup script. The script dumps out the MySQL database on the remote host at a regular time; at a later time, the dump is rsync’d from the backup server through a remote firewall. I must say that I was a little surprised to discover that the finished script, and the configuration that goes along with it, was actually quite simple and easily repeatable. I was able to replicate the process for three sites very quickly and will easily be able to scale it to many more when necessary.

SSH Tunneling and SSH Keys

In order to perform a process on a remote firewalled host, you first need to set up keys to allow the trusted backup server to gain access to the intermediate host. You must also set up a key which allows the intermediate host to gain access to the firewalled host.

First, let’s generate a public key on the backup server, if we don’t already have one. Be sure to use an empty passphrase, since this is an unattended script.

[backup@lexx log]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/backup/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /backup/.ssh/id_dsa.
Your public key has been saved in /backup/.ssh/id_dsa.pub.
The key fingerprint is:
3d:48:9c:0f:46:dc:da:c3:a6:19:82:63:b1:18:91:62 backup@lexx

By default, the key will be located in ~/.ssh/id_dsa.pub. Copy the contents of this file to the clipboard; you will need it to get the remote server to trust the backup server.

Log on to the remote external server via ssh. We will configure this server to trust the backup server.

[backup@lexx ~]# ssh user@remotehost.com
user@remotehost.com's password: 
Last login: Thu Jul 14 22:57:58 2011 from 69.73.94.214
[user@remotehost ~]# ls -al .ssh
total 28
drwx------  2 user user 4096 2011-07-14 22:05 .
drwxr-x--- 12 user user 4096 2011-07-14 21:54 ..
-rw-------  1 user user 3024 2011-07-14 21:57 authorized_keys2
-rw-------  1 user user  668 2010-10-27 23:52 id_dsa
-rw-r--r--  1 user user  605 2010-10-27 23:52 id_dsa.pub
-rw-r--r--  1 user user 5169 2010-10-21 13:01 known_hosts

If the authorized_keys2 (or similarly named) file does not yet exist, create it. Open the file in your text editor of choice, then paste in the key you copied from the id_dsa.pub file on the backup server.
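As a shortcut, the copy-and-paste step can also be done in a single command from the backup server (a sketch using the hostnames from this example); just make sure .ssh remains mode 700 and the keys file mode 600 afterward:

cat ~/.ssh/id_dsa.pub | ssh user@remotehost.com "cat >> ~/.ssh/authorized_keys2"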

To make the remote server recognize the newly added key, run the following:

[user@remotehost ~]# ssh-agent sh -c 'ssh-add < /dev/null && bash'

Now we can make sure that the key works as intended by running the following command, which will ssh into the server and execute the uptime command:

[backup@lexx ~]$ ssh user@remotehost.com uptime
 23:57:17 up 47 days,  4:11,  1 user,  load average: 0.54, 0.14, 0.04

Since we got the output of the uptime command without a password prompt, the key is working as intended.

Now we repeat the ssh key process, this time between the remotehost server and the firewalled server.

[user@remotehost ~]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/user/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /user/.ssh/id_dsa.
Your public key has been saved in /user/.ssh/id_dsa.pub.
The key fingerprint is:
3d:48:9c:0f:46:dd:df:c3:a6:19:82:63:b1:18:91:62 user@remotehost

Copy the contents of .ssh/id_dsa.pub on the remote external server to the firewalled server, add it to the authorized_keys file there, and run:

[user@firewalledserver ~]# ssh-agent sh -c 'ssh-add < /dev/null && bash'

Now you should be able to pass the rsync command from the backup server, through the remote firewall, all the way to the firewalled server.

This can be tested with the following command, which tunnels through the firewall and executes the uptime command on the internal server:

[backup@lexx ~]$ ssh user@remotehost.com ssh user@firewalledserver uptime
 23:52:17 up 41 days,  4:12,  1 user,  load average: 0.50, 0.13, 0.03

RSYNC The Data From the Backup Server, Through The Firewall

Now that we’ve got all of our keys set up, most of the work has been done. I’m assuming you have a cron job on the internal server which dumps the MySQL database at a specific time. You should schedule your rsync command late enough that the cron job has had time to finish dumping the database.

Here is the rsync command which reaches through the firewall to download the remote MySQL database dump. The -z flag enables compression, which can significantly speed up the process.

[backup@lexx ~]$ rsync -avz -e "ssh user@remotehost.com ssh" user@firewalledserver:/home/user/rsync-backup/mysqldump.sql /home/backup/

While the sync is in progress, rsync creates a hidden file named something like .mysqldump.sql.NvD8D, which stores the data until the sync is complete. After the sync is complete, you will see a file named mysqldump.sql in the /home/backup/ folder.
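As a side note, not part of the original setup: on newer OpenSSH versions (7.3 and later), the same double hop can be expressed with the -J (ProxyJump) option instead of nesting ssh commands:

rsync -avz -e "ssh -J user@remotehost.com" user@firewalledserver:/home/user/rsync-backup/mysqldump.sql /home/backup/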

Just set up the necessary cron scripts to make sure everything happens at the right time, possibly add some logging so you can see what has happened, and you’re done!

Here’s an example of what I did on the backup server to call the backup script. It appends both STDOUT and STDERR to the /var/log/remote_backuplog file each time it is run. It also runs the script as the backup user, so the files it generates have the correct permissions for the backup user to access.

01 6 * * * backup /home/backup/run_backups.sh >> /var/log/remote_backuplog 2>&1

Here is what my rsync script run_backups.sh looks like.

#!/bin/bash
 
echo "running backups"
# print the date into the logfile
date
 
# backup server 1
echo "backing up server1"
ssh user@externalserver1 ssh user@internalserver1 ls -l /home/user/rsync-backup/mysqldump.sql
/usr/bin/rsync -avz -e "ssh user@externalserver1 ssh" user@internalserver1:/home/user/rsync-backup/mysqldump.sql /home/backup/server1/
 
# backup server 2
echo "backing up server2"
ssh user@externalserver2 ssh user@internalserver2 ls -l /home/user/rsync-backup/mysqldump.sql
/usr/bin/rsync -avz -e "ssh user@externalserver2 ssh" user@internalserver2:/home/user/rsync-backup/mysqldump.sql /home/backup/server2/
 
# backup server 3
echo "backing up server3"
ssh user@externalserver3 ssh user@internalserver3 ls -l /home/user/rsync-backup/mysqldump.sql
/usr/bin/rsync -avz -e "ssh user@externalserver3 ssh" user@internalserver3:/home/user/rsync-backup/mysqldump.sql /home/backup/server3/

Quick and Easy Regular Expression Command/Script to Run on Files in the Bash Shell

I often find it necessary to run regular expressions not just on one file, but on a range of files. There are perhaps dozens of ways to do this, requiring varying levels of understanding.

The simplest way I have encountered uses the following syntax:

perl -pi -e "s/<find string>/<replace with string>/g" <files to replace in>

Here is an example where I replace the IP address in a range of report templates with a different IP address:

perl -pi -e "s/mysql:\/\/192.168.2.110/mysql:\/\/192.168.2.111/g" $reportTemplateLocation/*.rpt*

Basically, I am looking for any line which contains mysql://192.168.2.110 and replacing that string with mysql://192.168.2.111.

Here is an example of a bash script, which I call changeReportTemplateDatabase.sh, that wraps that command to accomplish the same task with more elegance:

#!/bin/bash
#
# @(#)$Id$
#
# Point the report templates to a different database IP address.
reportTemplateLocation="/home/apphome/jboss-4.0.2/server/default/appResources/reportTemplates";
 
error()
{
    echo "$arg0: $*" 1>&2
    exit 1
}
usage()
{
        echo "Usage $0 -o <old-ip-address> -n <new-ip-address>";
}
 
vflag=0
oldip=
newip=
while getopts hvVo:n: flag
do
    case "$flag" in
    (h) usage; exit 0;;
    (V) echo "$0: version 0.1 8/28/2010"; exit 0;;
    (v) vflag=1;;
    (o) oldip="$OPTARG";;
    (n) newip="$OPTARG";;
    (*) usage; exit 1;;
    esac
done
shift $(expr $OPTIND - 1)
 
if [ "$oldip" = "" ]; then
        usage;
        exit 1;
fi
if [ "$newip" = "" ]; then
        usage;
        exit 1;
fi
 
echo "$0: Changing report templates to use the database at $newip from $oldip";
perl -pi -e "s/mysql:\/\/$oldip/mysql:\/\/$newip/g" $reportTemplateLocation/*.rpt*

Usage of the script is as simple as the command below. It will change every database reference in the report templates in the directory referenced by the variable reportTemplateLocation to the new value.

./changeReportTemplateDatabase.sh  -o 192.168.2.110 -n 192.168.2.111

A further improvement, which may be useful to some, would be to make the directory configurable via a command-line flag.
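For instance, here is a minimal sketch of that improvement (the -d flag is hypothetical, not part of the original script), with the option handling trimmed down to the essentials:

#!/bin/bash
# Sketch: same replacement, with the template directory overridable via -d.
reportTemplateLocation="/home/apphome/jboss-4.0.2/server/default/appResources/reportTemplates"

while getopts o:n:d: flag
do
    case "$flag" in
    (o) oldip="$OPTARG";;
    (n) newip="$OPTARG";;
    (d) reportTemplateLocation="$OPTARG";;
    esac
done

if [ -z "$oldip" ] || [ -z "$newip" ]; then
    echo "Usage $0 -o <old-ip-address> -n <new-ip-address> [-d <directory>]"
    exit 1
fi

perl -pi -e "s/mysql:\/\/$oldip/mysql:\/\/$newip/g" "$reportTemplateLocation"/*.rpt*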

How To Execute a Script After a Running Process Completes

Most people who are familiar with Linux realize that there are ways of chaining processes to run one after another. Typically this is done by writing a script, or by using && to daisy-chain additional commands on the command line.

There is, however, another way to do this if you’ve already issued a command and want to add another command after the original has started. This is especially useful if you’re unzipping, say, a 15 gigabyte database dump, and you want to make sure that the import happens immediately after the decompression is complete.

Here’s an example of what would happen if I were entering the commands manually.

Macintosh:~ chriscase$ scp user@hostname.com:archive/database.sql.gz .
Macintosh:~ chriscase$ gunzip database.sql.gz
Macintosh:~ chriscase$ mysql -u dbusername -pdbpassword dbname < database.sql

Since I’m not going to stay glued to the console for the entire duration of this process, I either need to write a script or figure out another technique to keep things moving along.

As it turns out, there is a very simple way to accomplish this with the command wait, which pauses until a specified job is complete before the shell moves on to the next command.

Here’s an example of how this could be used if you wanted to add the last two steps after the scp from the above example had already begun.

Macintosh:~ chriscase$ scp user@hostname.com:archive/database.sql.gz .

Once the download is kicked off, you can move it into the background by pressing [ctrl-z], which pauses the process, and then issuing the command [bg], which resumes the paused process in the background. Now, to chain the other processes afterward, you can do the following.

Macintosh:~ chriscase$ wait %1 && gunzip database.sql.gz && mysql -u dbusername -pdbpassword dbname < database.sql

The above command will wait until the scp is done, then use gunzip to decompress the file and mysql to import the database dump. Now that you’ve done this, you can go off and do something else, confident that your database will be done importing in a few hours.
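A small variation, not from the original post: if you start the transfer in the background from the outset with &, the shell variable $! holds its process ID, and the same chain can be queued immediately:

scp user@hostname.com:archive/database.sql.gz . &

wait $! && gunzip database.sql.gz && mysql -u dbusername -pdbpassword dbname < database.sql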

Sending Mail in Shell Scripts via an External Server with Nail

If you’ve ever tried sending email via the command line using the mail utility, you may have found the method unreliable in some cases. The messages are often flagged by spam filters, blocked by security programs, etc. A more elegant and stable alternative is to relay the message through your existing email server. The program nail makes this an easy task from the command line.

The following example shows you how to send a simple message with an attachment. Here is the syntax for sending a message with nail.

echo "" | nail -s "" -a   ...

In order for nail to function, you must have a .mailrc configuration file in your home directory. Here is a sample .mailrc configuration file to get you started quickly.

set smtp=smtp://yourhost.com
set from="yourname@yourhost.com (Display Name)"
set smtp-auth=login
set smtp-auth-user=your_username
set smtp-auth-password=your_password
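With that configuration in place, a concrete invocation might look like the following (the recipient, subject, and attachment here are hypothetical):

echo "The nightly backup completed" | nail -s "Backup report" -a /var/log/remote_backuplog admin@yourhost.com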

Bash Shell Script to Convert All MySQL Tables to use the Innodb Engine

The following, when run in the Bash shell, will convert the engine for all of your MySQL tables to InnoDB.

mysql -B -N -e "SHOW TABLES" -u <username> --password=<password> <databasename> | while read table; \
do \
     echo "+ Converting Table $table"; \
     mysql -B -N -e "alter table $table engine=innodb" -u <username> --password=<password> <databasename>; \
done;

Often, if you have a heavily used database, you will want to consider the InnoDB engine. By default, the MySQL database engine is MyISAM; however, InnoDB has many advantages, particularly for high-utilization environments. With InnoDB, data integrity is maintained throughout the entire query process, because the engine is transaction-safe.

InnoDB also provides row-level locking, as opposed to table-level locking. With row-level locking, while one query is busy updating or inserting a row, another query can update a different row at the same time. This gives InnoDB superior characteristics for multi-user concurrency and performance.
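To verify a conversion afterward, the engines in use can be checked through the information_schema database (a standard query; the placeholders follow the convention above):

mysql -B -e "SELECT table_name, engine FROM information_schema.tables WHERE table_schema = '<databasename>'" -u <username> --password=<password>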