LINUX: Removing Files Older Than x Days

It can often be useful to remove files that are unnecessary, such as log files and backup files, when this is not already done automatically. Fortunately, there is a very simple command to do just that.

Using the find command, it is possible to find the files in the folder you want to clean out and remove them. The following command scans the folder /home/myuser/myfolder/ for files older than 30 days and then executes rm to remove those files.

find /home/myuser/myfolder/* -mtime +30 -exec rm {} \;

If you want to be cautious, you can use the following commands to test it out:

To see what find pulls up, you can run this.

find /home/myuser/myfolder/* -mtime +30

If you want to make certain the exec command is given the right parameters, you can run it through ls.

find /home/myuser/myfolder/* -mtime +30 -exec ls -l {} \;
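
As a small refinement, it can be safer to point find at the directory itself and restrict matches to regular files, so top-level dotfiles are included and directories are never handed to rm. A sketch, assuming your find supports the -delete action (GNU and BSD find both do):

find /home/myuser/myfolder/ -type f -mtime +30 -delete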

Deleting Rows From a Table Referenced in the Query

If you do a fair amount of SQL, then every now and then you'll likely run into a situation where you need to delete rows from a table, but also need to reference that table in the query itself.

The trouble is that MySQL won't let you do this in many circumstances. Fortunately, there is a simple workaround.

delete from tableA 
 where id in (
   select b.id 
   from tableB b 
   join tableA a 
     on a.tableb_id = b.id
   );

The above query would throw an error similar to the following:

ERROR 1093 (HY000): You can't specify target table 'tableA' for update in FROM clause

We can skirt this limitation by nesting the subselect inside another select statement. Then it will work just fine.

delete from tableA where id in (
  select bId from (
    select b.id as bId from tableB b join tableA a 
    on a.tableb_id = b.id 
  ) as apt
);

You'll get output indicating that the query was successful.

Query OK, 183 rows affected (0.53 sec)

This saved me a bunch of time and kept me from having to rework my query completely, so I am sharing it with you!
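
As a footnote, MySQL also has a multi-table DELETE syntax that avoids the subquery restriction altogether. It is not a drop-in replacement for the query above (the example below removes every tableA row that has a matching tableB row, so the join and conditions need adjusting to your case), but the general shape looks like this:

delete a
from tableA a
join tableB b
  on a.tableb_id = b.id;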

Automatically Check RSYNC and Restart if Stopped

I occasionally use RSYNC to synchronize large directories of files between servers. This is especially useful if you're moving a client from one server to another and they have a lot of static files that are always changing. You can copy the files and sync them up, all with RSYNC, and if your connection gets cut off, it will pick up where it left off. It will also grab changes to files that have already been RSYNCd.

I ran into an issue with RSYNC recently, wherein the RSYNC process was running in the background but kept terminating due to errors similar to the following. These disconnections were probably related to the slow and unstable connection to the remote server.

rsync: writefd_unbuffered failed to write 998 bytes to socket [sender]: Broken pipe (32)
rsync: connection unexpectedly closed (888092 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]

Given that I was transferring files over a relatively bad internet connection and received this error a half dozen times over a couple of days, I decided the best way to handle it would be to write a cron script. This cron script should check for the RSYNC process and start it if it isn't running.

rsync_check.sh

Customize this script for your own purpose, to check for your RSYNC process and start it if it isn't running.

#!/bin/bash
echo "checking for active rsync process"
COUNT=`ps ax | grep rsync | grep -v grep | grep -v rsync_check.sh | wc -l` # see how many are running
echo "there are $COUNT rsync related processes running";
if [ $COUNT -eq 0 ] 
then
	echo "no rsync processes running, restarting process"
	killall rsync  # prevent RSYNCs from piling up, if by some unforeseen reason there are already processes running
	rsync -avz -e "ssh" user@host.com:/mnt/syncdirectory/ /home/ccase/syncdirectory/ 
fi
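
As a side note, if your system has pgrep, the process check can be done a bit more directly. A minimal variation on the test above, under the same assumptions about the rsync command and paths:

#!/bin/bash
# Variant check using pgrep; -x matches the process name exactly,
# so the grep pipeline and self-exclusion above aren't needed.
if ! pgrep -x rsync > /dev/null
then
	echo "no rsync processes running, restarting process"
	rsync -avz -e "ssh" user@host.com:/mnt/syncdirectory/ /home/ccase/syncdirectory/
fi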

Crontab Entry

Save the script somewhere suitable (for example, /usr/local/bin/rsync_check.sh), make it executable, and add a crontab entry to run it at the desired interval. The entry below includes a user field, so it belongs in /etc/crontab or in a file under /etc/cron.d/, rather than in a personal crontab. This will have it run every 10 minutes.

*/10 * * * * ccase /usr/local/bin/rsync_check.sh

No More Worries

Now you can move on to other things, with the knowledge that your RSYNC will not just fail and leave the work undone. It probably wouldn't hurt to check on it at first and from time to time, but there's a lot less to worry about!

Mounting CIFS Shares At the LINUX Command Line or in /etc/fstab

Linux makes it relatively easy to mount shared drives either manually, at the command line, or automatically, by configuring an entry in /etc/fstab. Here is the basic syntax of our mount command.

[ccase@midas ~]$ sudo mount -t cifs  -o username=<share username>,password=<share password>,<additional options> //<name or ip of server>/<share name> <folder to mount to>

Here is an example of mounting our CIFS share to a folder named myshare. We are using the option ro to mount the share read only.

[ccase@midas ~]$ sudo mount -t cifs  -o username=admin,password=secret,ro //192.168.1.200/myshare myshare

If we want to make this automatic, it can easily be configured in /etc/fstab to mount after the network comes up. Here is the basic syntax you would use in /etc/fstab:

//<name or ip of server>/<share name> <folder to mount to> cifs  username=<share username>,password=<share password>,_netdev,<additional options>   0 0

Here is an example of mounting our CIFS share automatically to /mnt/myshare/. We are using the option _netdev to tell it to attempt the mount only after the network has come up, and ro to mount the share read only.

//192.168.1.200/myshare /mnt/myshare cifs  username=admin,password=secret,_netdev,ro   0 0
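
One refinement worth considering: rather than leaving the password readable in /etc/fstab, mount.cifs can read the username and password from a separate credentials file via the credentials= option. A sketch, assuming the file is stored at /root/.smbcredentials and locked down with chmod 600:

# /root/.smbcredentials (hypothetical location, chmod 600)
username=admin
password=secret

The /etc/fstab entry then becomes:

//192.168.1.200/myshare /mnt/myshare cifs  credentials=/root/.smbcredentials,_netdev,ro   0 0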

Getting the Last Modification Timestamp of a File with Stat

If we want to get just the modification date of a file, in a format of our choosing, this can be done with a utility called stat.

The syntax is as follows:

stat -f <format> -t "<timestamp format>" <path to file>

In this example, we are printing just the modification date in the format YYYYMMDD_HHMMSS.

stat -f "%Sm" -t "%Y%m%d_%H%M%S" filename.txt

We are using the -f "%Sm" flag to specify that we want to print out only the date modified. The -t "%Y%m%d_%H%M%S" flag sets the date format.

In my example, the output was:

20121130_180221

This translates to November 30, 2012 at 18:02:21.
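
One note of caution: the -f and -t flags shown here are from the BSD version of stat, which is what macOS ships. On Linux with GNU coreutils the flags differ; a rough equivalent, assuming GNU date and stat, would be:

date -r filename.txt +%Y%m%d_%H%M%S   # format the file's modification time directly
stat -c %y filename.txt               # modification time in GNU stat's default human-readable form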

Using the Linux Command Line to Find and Copy A Large Number of Files from a Large Archive, Preserving Metadata

One of my recent challenges is to go through an archive on a NAS and find all of the .xlsx files, then copy them, preserving as much of the file metadata (date created, folder tree, etc.) as possible, to a specified folder. After this copy, they will be gone through by another script to rename the files using that metadata, and they will then be processed by an application which utilizes the name of the file in its process.

The part I want to share here is finding the files and copying them to a folder with metadata preserved. This is where the power of the find utility comes in handy.

Since this is a huge archive, I want to first produce a list of the files; that way I can break this up into two steps. The following will produce a list and write it into a text file. I am first going to run a find command on the volume called data, which I have mounted in my Volumes folder.

find /Volumes/data/archive/2012 -name '*.xlsx' > ~/archive/2012_files.txt

Now that the list is saved into a text file, I want to copy the files in the list, preserving the file metadata and path information, to my archive folder. The cpio utility accepts the paths of the files to copy from stdin and copies them to my archive folder; the -p flag runs cpio in pass-through (copy) mode, -d creates any directories that are needed, -m preserves the files' modification times, and -v prints each file as it is copied.

cat ~/archive/2012_files.txt | cpio -pvdm ~/archive
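
One caveat: a newline-delimited list breaks if any file names happen to contain newlines. If your find and cpio support null-delimited lists (GNU versions do, via -print0 and --null/-0), a safer variant of the same two steps might look like this:

find /Volumes/data/archive/2012 -name '*.xlsx' -print0 > ~/archive/2012_files0.txt
cpio -0 -pvdm ~/archive < ~/archive/2012_files0.txt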

Explicitly Setting log4j Configuration File Location

I ran into an issue recently where an existing log4j.xml configuration file was built into a jar file I was referencing, and I was unable to get Java to recognize another file that I wanted it to use instead. Fortunately, the solution to this problem is fairly straightforward and simple.

I was running a standalone application in Linux, via a bash shell script, but this technique can be used in other ways too. You simply add a parameter to the JVM call, as in the example below.

So the syntax is basically:

java -Dlog4j.configuration="file:<full path to file>" -cp <classpath settings> <package name where my main function is located>

Let's say I have a file named log4j.xml in /opt/tools/myapp/ which I want to use when my application runs, instead of any existing log4j.xml files. This can be done by passing the JVM flag -Dlog4j.configuration to Java.

Here is an example:

java -Dlog4j.configuration="file:/opt/tools/myapp/log4j.xml" -cp $CLASSPATH  my.standalone.mainClass;
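
Since I was launching this from a bash wrapper, the call ended up in a script roughly like the sketch below. The jar locations are placeholders; the log4j path and main class are the ones from the example above.

#!/bin/bash
# Hypothetical wrapper script; adjust the jar locations and main class for your application.
CLASSPATH="/opt/tools/myapp/myapp.jar:/opt/tools/myapp/lib/*"

java -Dlog4j.configuration="file:/opt/tools/myapp/log4j.xml" \
     -cp "$CLASSPATH" \
     my.standalone.mainClass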

With that change, as long as your log4j file is set up properly, your problems should be behind you.

Linux Mint 13: Enabling the SD Card Reader on the Toshiba Satellite P870

I started using SD cards recently and had a heck of a time getting them to work on my laptop at first. I tried using my 32 GB SDHC card in a USB adapter, to no avail; then I found the SD slot, and it still did not work either. It turned out that the driver was not loading by default. This is a common problem in Linux, as the less commonly used devices are not always going to "just work". You often have to get the driver yourself and install it.

Getting it working was not trivial; I had to figure out which driver to get, which took some guesswork. It turns out that this laptop uses a Realtek RTS5229 for its SD card interface. I found this information with lspci.
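
If you want to check which card-reader chip your own machine has, a command along these lines should surface it (the grep filter is just a guess; the exact output varies by machine):

lspci -nn | grep -iE 'card|realtek'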

The Possibility of a Friendica-Based Service

An idea came to mind the other day. I was pondering small ventures I could possibly spin up to make a few dollars and, in the process, provide something of value for low cost. The possibility of starting a Friendica-based service, wherein a user can start their own SSL-secured, self-contained Friendica node via a web-based service front-end, came to mind.

The goal of this service would be to provide an inexpensive and easy way for non-technical individuals to start their own personal Friendica nodes, complete with their own subdomain (possibly their own domain, as a later, more advanced feature) and complete SSL protection.

As I talk to people who are not familiar with Friendica, I notice a recurring theme: they find it interesting, but getting something started is possibly too confusing or too technical for them. I want to offer something that eliminates many of the hurdles new users would face, the things they typically don't want to deal with, while providing them an environment that they can be comfortable interacting in and fully supported.

Using Friendica as a Content Aggregator

Friendica is a powerful tool, not only for social networking but also for a variety of other purposes. The usage I would like to discuss today is content aggregation.

There are many ways to aggregate content on the web, but Friendica has something that none of the others have. Friendica not only allows you to aggregate content; it also allows you to integrate that content with social networking content from a variety of sources. Generally, aggregators only handle RSS feeds, but Friendica has been customized to handle a variety of different kinds of content, not just RSS feeds.

This means you can look at all of the latest posts from your favorite websites, via their RSS feeds, while also seeing the latest from your social networks (Friendica, Twitter, Identica, Youtube, Facebook, etc.). This is a valuable tool for efficiently keeping up with the flow of information from websites you follow and social networks that you are part of.

The process of integrating content from websites you want to follow is similar to how you might add a contact to your social network. In fact, on Friendica, the posts from websites you follow appear in the same way as posts from your social networks.
