One of my recent challenges was to go through an archive on a NAS, find all of the .xlsx files, and copy them to a specified folder while preserving as much of the file metadata (date created, folder tree, etc.) as possible. After this copy, another script will rename the files using that metadata, and they will then be processed by an application which relies on the file name in its process.
The part I want to share here is finding the files and copying them to a folder with metadata preserved. This is where the power of the find utility comes in handy.
Since this is a huge archive, I want to first produce a list of the files so I can break the job into two steps. The following command produces the list and writes it to a text file, running find against the volume called data mounted in my Volumes folder.
find /Volumes/data/archive/2012 -name '*.xlsx' > ~/archive/2012_files.txt
Now that the list is saved to a text file, I want to copy the files in the list to my archive folder, preserving the file metadata and path information. The cpio utility accepts the paths of the files to copy from stdin and copies them into the destination folder.
cat ~/archive/2012_files.txt | cpio -pvdm ~/archive
I ran into an issue recently where an existing log4j.xml configuration file was built into a jar file I was referencing, and I was unable to get Java to recognize the alternative file I wanted it to use instead. Fortunately, the solution to this problem is fairly straightforward.
I was running a standalone application on Linux via a bash shell script, but this technique can be used in other contexts too. You simply add a parameter to the JVM call, as in the example below.
So the syntax is basically:
java -Dlog4j.configuration="file:<full path to file>" -cp <classpath settings> <package name where my main function is located>
Let's say I have a file named log4j.xml in /opt/tools/myapp/ which I want to use when my application runs, instead of any existing log4j.xml files. This can be done by passing the JVM flag -Dlog4j.configuration to Java.
Here is an example:
java -Dlog4j.configuration="file:/opt/tools/myapp/log4j.xml" -cp $CLASSPATH my.standalone.mainClass;
With that change, as long as your log4j file is set up properly, your problems should be behind you.
I was once perplexed by a bizarre performance issue I encountered at seemingly random intervals in an application I help maintain. The application kept freezing up, without any log messages to use for diagnosis. This was very frustrating, because it meant the application server typically had to be restarted manually to restore service.
After a bit of research, I learned that thread blocking was a potential cause of this kind of problem. Since I was fairly certain that the database was functioning within acceptable parameters and the server had ample CPU and memory to handle the load, I set out to determine whether thread blocking was the issue.
I started by running a twiddle command to dump the threads whenever this performance problem was reported. The dumps showed that BLOCKED threads were indeed the cause.
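Once a thread dump has been captured to a file (via twiddle, jstack, or kill -3), a quick way to gauge the severity is to count the BLOCKED threads with grep. This sketch fabricates a tiny two-thread dump in the jstack format so it runs standalone; with a real dump you would only need the final grep.

```shell
#!/bin/sh
# Fabricated two-thread dump standing in for a real capture.
cat > dump.txt <<'EOF'
"worker-1" #12 prio=5 tid=0x1 nid=0x1 waiting for monitor entry
   java.lang.Thread.State: BLOCKED (on object monitor)
"worker-2" #13 prio=5 tid=0x2 nid=0x2 runnable
   java.lang.Thread.State: RUNNABLE
EOF

# Count the threads stuck waiting on a monitor.
grep -c 'java.lang.Thread.State: BLOCKED' dump.txt    # → 1
```

If the count climbs each time the freeze is reported, the blocked threads (and whatever lock they are contending for) are a strong lead.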
When you’re trying to move a large block of files, it’s often useful to do so in one command and to be able to close your terminal window (or allow it to time out). Under normal circumstances, losing the connection can cause your command to terminate prematurely; this is where nohup (No HangUP, a utility which allows a process to continue even after a connection is lost) comes in.
Let’s say we have a large directory to back up, which we want to tar and then gzip, while keeping the command independent of the terminal session.
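A sketch of the technique follows, using throwaway paths so it is self-contained (the real directory and archive names would replace them). tar -c creates the archive and -z gzips it in the same pass; nohup detaches the job from the terminal so a dropped session will not kill it, and the trailing & backgrounds it, with output going to nohup.out.

```shell
#!/bin/sh
# Throwaway directory standing in for the real one to back up.
src=$(mktemp -d)
echo 'payload' > "$src/file.txt"
out=$src/backup.tar.gz

# Create and gzip the archive in one pass, detached from the terminal.
nohup tar -czf "$out" -C "$src" file.txt >/dev/null 2>&1 &
wait $!

# List the archive contents to confirm the backup.
tar -tzf "$out"    # → file.txt
```

Because the tar process is under nohup and backgrounded, you can log out immediately after launching it and the archive will still be completed.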
I often find it necessary to run regular expressions not just on one file, but on a range of files. There are perhaps dozens of ways to do this, requiring varying levels of understanding.
The simplest way I have encountered utilizes the following syntax:
perl -pi -e "s/<find string>/<replace with string>/g" <files to replace in>
Here is an example where I replace the IP address in a range of report templates with a different IP address:
perl -pi -e "s/mysql:\/\/192\.168\.2\.110/mysql:\/\/192.168.2.111/g" $reportTemplateLocation/*.rpt*
Basically, I am looking for a line which contains mysql://192.168.2.110, which I want to replace with mysql://192.168.2.111.
Here is an example of a bash script I call changeReportTemplateDatabase.sh, which wraps that command to accomplish the same task with more elegance:
#!/bin/bash
# Point the report templates to a different database IP address.
usage() { echo "Usage: $0 -o <old-ip-address> -n <new-ip-address>" 1>&2; }
while getopts hVo:n: flag; do
    case "$flag" in
        (h) usage; exit 0;;
        (V) echo "$0: version 0.1 8/28/2010"; exit 0;;
        (o) oldip="$OPTARG";;
        (n) newip="$OPTARG";;
        (*) usage; exit 1;;
    esac
done
shift $(expr $OPTIND - 1)
if [ "$oldip" = "" ]; then usage; exit 1; fi
if [ "$newip" = "" ]; then usage; exit 1; fi
echo "$0: Changing report templates to use the database at $newip from $oldip"
perl -pi -e "s/mysql:\/\/$oldip/mysql:\/\/$newip/g" $reportTemplateLocation/*.rpt*
Usage of the script is as simple as the command below. It will change every database reference in the report templates in the directory referenced by the variable reportTemplateLocation to the new value.
./changeReportTemplateDatabase.sh -o 192.168.2.110 -n 192.168.2.111
A further improvement, which may be useful to some, would be to make the directory a flag which can be set on the command line.
Below is a simple script called monitor_jboss, which checks whether JBoss is running and whether too many instances are currently running. I found a need to write this script because we have some cron scripts which automatically restart JBoss each day, and the JBoss shutdown script itself sometimes fails to shut down properly, causing some quirky behavior.
If it determines that one of the following conditions is true, it sends a short email describing the problem to the address specified in the variable email.
- JBoss is not running at all
- JBoss has more than the maximum number of instances running
This script is then placed in /etc/cron.hourly/, where it will check the system once an hour and send an email as appropriate.
#!/bin/bash
# email address to send the message to (placeholder -- set your own)
email="admin@example.com"
# maximum number of concurrently running instances allowed (placeholder)
max=1
# determine the number of running instances
count_running_jbosses=$(ps aux | grep jboss | grep -v grep | grep -v monitor_jboss | wc -l)
message=""
if [ $count_running_jbosses -eq 0 ]; then # jboss isn't running
    message="JBoss Is Currently Not Running"
elif [ $count_running_jbosses -gt $max ]; then # too many jboss instances running
    message="JBoss Is Currently Running $count_running_jbosses instances; the maximum is $max"
fi
if [ -n "$message" ]; then
    subject="JBOSS MONITORING ALERT FOR: $(hostname)"
    echo "$message" | /bin/mail -s "$subject" "$email"
fi