• Find and alert user password expiry

    A script to alert users ten days before their password expiry date.

    ===================================================
    #!/bin/sh

    rcvr1=admin@adminlogs.info
    rcvr2=support@adminlogs.info
    rcvr3=techlead@adminlogs.info

    for i in user1 user2
    do

    # convert the current date to seconds since the epoch
    currentdate=`date +%s`
    # find the user's password expiration date
    userexp=`chage -l $i | grep 'Password Expires' | cut -d: -f2`
    # convert the expiration date to seconds
    passexp=`date -d "$userexp" +%s`
    # seconds remaining until expiry
    exp=`expr $passexp - $currentdate`
    # convert the remaining time from seconds to days
    expday=`expr $exp / 86400`
    if [ $expday -le 10 ]; then
    echo "Please take the necessary action" | mailx -s "Password for $i will expire in $expday day/s" $rcvr3,$rcvr2
    fi
    done

    ### checking the root user's password expiry ###

    for j in root
    do
    currentdate=`date +%s`
    userexp=`chage -l $j | grep 'Password Expires' | cut -d: -f2`
    passexp=`date -d "$userexp" +%s`
    exp=`expr $passexp - $currentdate`
    expday=`expr $exp / 86400`

    if [ $expday -le 10 ]; then
    echo "Please take the necessary action" | mailx -s "Password for $j will expire in $expday day/s" $rcvr1
    fi
    done
    ===================================================
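    As an aside, the backtick-and-expr arithmetic above predates POSIX $(( )). The same days-remaining calculation can be written more compactly; a minimal sketch with hardcoded dates ( GNU date's -d option assumed, just as in the script ):

```shell
# Days between two dates, using $(( )) instead of expr.
# The dates are hardcoded for illustration; the script derives them
# from `date +%s` and the `chage -l` output instead.
start=$(TZ=UTC date -d "2024-01-01" +%s)
end=$(TZ=UTC date -d "2024-01-11" +%s)
days=$(( (end - start) / 86400 ))
echo "$days"
```

    Dropping expr also removes the need to escape parentheses and avoids the hyphen/dash copy-paste pitfalls that break the original commands.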

  • Daily, weekly and monthly backups from Linux to Windows

    Scenario :-

    Set up a backup script to take daily, weekly and monthly backups to a remote Windows server. I wrote this bash script to meet the client's requirement and it worked perfectly. With some minor changes you can use the same scripts for daily, weekly and monthly backups to a local Linux server. I set up separate scripts for the daily, weekly and monthly backups, so that anybody searching for the same scenario can follow the logic easily.

    Overview :-

    1)  Created a folder " backup " on the backup drive of the remote Windows server.

    2)  Created a dedicated backup user on the Windows server.

    3)  Granted the necessary privileges on the backup directory to this user.

    4)  Mounted the Windows backup drive via the /etc/fstab file ( with the Windows username and password ):

    //winserver/backup  /WIN_BACKUP  cifs  username=backup,password=pass 0 0
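    Since /etc/fstab is typically world-readable, the password can instead be kept in a root-only credentials file using the cifs credentials= option ( /etc/cifs-creds is an assumed path; chmod 600 it ):

```
# /etc/cifs-creds
username=backup
password=pass

# /etc/fstab entry using the credentials file
//winserver/backup  /WIN_BACKUP  cifs  credentials=/etc/cifs-creds 0 0
```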

    5)  Set up separate daily, weekly and monthly backup scripts as follows


    Crontab entries :-

    #### Daily backup at 03:01 am, Monday to Saturday
    01 03  * * 1-6  /bin/bash /usr/local/scripts/daily_backup.sh > /dev/null 2>&1

    ##### Weekly backup - every Sunday at 05:01 am
    01 05  * * 0    /bin/bash /usr/local/scripts/weekly_backup.sh > /dev/null 2>&1

    ##### Monthly  backup – First day of every month at 06:01 am
    01 06 1 * *  /bin/bash /usr/local/scripts/monthly_backup.sh > /dev/null 2>&1


    Backup Script Files :-


    1)   /usr/local/scripts/daily_backup.sh

    #!/bin/bash
    PATH=/usr/bin:/bin:/usr/sbin:/sbin
    export PATH
    ## Day of the week; output will be like "Mon", "Tue", "Wed", etc.
    path=`date | awk '{print $1}'`
    # The folders Mon,Tue,Wed,...,Sat were created beforehand inside /WIN_BACKUP/daily
    # Backup the scripts directory
    rsync -avzub --copy-links /usr/local/scripts/   /WIN_BACKUP/daily/$path/scripts
    # Backup the website files
    rsync -avzub --copy-links  --exclude 'log'  --exclude 'logs'  --exclude '*.tar'  --exclude '*.gz'  --exclude '*.zip' --exclude '*.sql'  /usr/local/www/   /WIN_BACKUP/daily/$path/UsrLocalWww
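    Note that parsing date's default output is locale-sensitive; the weekday abbreviation can also be taken directly with the %a format. A small check, pinning the C locale so the names stay English:

```shell
# date +%a prints the weekday abbreviation ("Mon".."Sun") directly;
# forcing LC_TIME=C avoids surprises on non-English locales.
path=$(LC_TIME=C date +%a)
echo "$path"
```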


    2)  /usr/local/scripts/weekly_backup.sh

    #!/bin/bash
    PATH=/usr/bin:/bin:/usr/sbin:/sbin
    export PATH
    # Capture today's date once so every path uses the same stamp
    today=`date +%Y%m%d`
    mkdir -p /WIN_BACKUP/website_weekly/sun-$today/scripts
    mkdir -p /WIN_BACKUP/website_weekly/sun-$today/UsrLocalWww
    # Backup the scripts directory
    rsync -avzub --copy-links /usr/local/scripts/    /WIN_BACKUP/website_weekly/sun-$today/scripts
    # Backup the website files
    rsync -avzub --copy-links  --exclude 'log'  --exclude 'logs'  --exclude '*.tar'  --exclude '*.gz'  --exclude '*.zip' --exclude '*.sql'  /usr/local/www/  /WIN_BACKUP/website_weekly/sun-$today/UsrLocalWww


    3) /usr/local/scripts/monthly_backup.sh

    #!/bin/sh
    PATH=/usr/bin:/bin:/usr/sbin:/sbin
    export PATH
    ## Current month; output will be like "Jan", "Feb", "Mar", etc.
    path=`date | awk '{print $2}'`
    # Create the corresponding directories for the current month
    mkdir -p /WIN_BACKUP/website_monthly/$path/scripts
    mkdir -p /WIN_BACKUP/website_monthly/$path/UsrLocalWww
    # Backup scripts directory
    rsync -Cavz /usr/local/scripts/   /WIN_BACKUP/website_monthly/$path/scripts
    # Backup all websites
    rsync -Cavz --exclude 'log'  --exclude 'logs'  --exclude '*.tar'  --exclude '*.gz'  --exclude '*.zip' --exclude '*.sql'  /usr/local/www/   /WIN_BACKUP/website_monthly/$path/UsrLocalWww

    It took me almost a day to complete this setup and it is now running fine 🙂 . I hope this documentation helps anybody who is looking for the same setup.

    For MySQL daily, weekly and monthly backup setup check : MySql Backup Script


  • Are you worried about ssl certificate expiry ?

    Are you worried about ssl certificate expiry? I found a good solution for that 🙂 . This script monitors ssl certificate expiry and sends e-mail notifications when a certificate is getting close to expiring !!!

    1) Download the script and make it executable

    wget http://prefetch.net/code/ssl-cert-check
    chmod 744 ssl-cert-check

    2) To find the ssl expiry details of a local certificate

    ./ssl-cert-check -c  /usr/local/sss/adminlogs.crt

    3) To find  the ssl expiry details of a remote domain

    ./ssl-cert-check -s www.adminlogs.info -p 443

    4) To find the ssl expiry details of a list of domains

    If you are managing a number of domains, you can list them in a file along with the port number, as follows

    # vi  /home/domainlist
    www.adminlogs.info 443
    www.google.com  443
    www.yahoo.com  443

    Then save the file and execute the script with the " -f " option

    ./ssl-cert-check -f /home/domainlist
    ./ssl-cert-check -i -f /home/domainlist

    Here the " -i " option also prints the ssl provider/issuer details.

    5)  Set up e-mail alerts when the ssl expiry date is 20 days away or less

    ssl-cert-check can provide e-mail notifications when a certificate is getting close to expiring. The expiration interval is controlled with ssl-cert-check's "-x" ( expiration interval ) option, and the address to notify is passed as an argument to the "-e" ( e-mail address to send alerts ) option.

    ./ssl-cert-check -a  -f   /home/domainlist  -q -x 20 -e  ssl-alert@adminlogs.info

    You can add the above command to cron and monitor your ssl certificate validity.
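    For example, a daily cron entry could look like this ( the schedule and the script path are illustrative ):

```
00 08 * * * /root/ssl-cert-check -a -f /home/domainlist -q -x 20 -e ssl-alert@adminlogs.info
```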

    You can find more ssl-related material here : most-common-openssl-commands

    Thank you prefetch.net for this excellent script !!!


  • Automate your ftp operations

    In our day-to-day operations we sometimes need to automate ftp tasks. Today one of my clients asked me to set up a cron job to download files from a remote machine using ftp.

    We can do this in two ways :-

    1) FTP automation

    vi /usr/local/scripts/ftp-auto.sh

    #!/bin/bash
    HOST='adminlogs.info'
    USER='ftpadmin'
    PASSWD='password'
    ftp -n -v $HOST << EOT
    ascii
    user $USER $PASSWD
    prompt
    mkdir linux
    cd linux
    bye
    EOT
    sleep 3

    I have included an example; you can add your own ftp operations after the " prompt " line.


    2) SFTP automation

    vi /usr/local/scripts/sftp-auto.sh

    #!/bin/bash
    HOST="adminlogs.info"
    USER="ftpadmin"
    PASS="password"
    FIRE=$(expect -c "
    spawn /usr/bin/sftp -o \"BatchMode no\" -b /tmp/commandfile  $USER@$HOST
    expect \"password:\"
    send \"$PASS\r\"
    interact
    ")
    echo "$FIRE"

    Note : –

    You should install " expect " first, using yum install expect

    You can add your own ftp commands in ” /tmp/commandfile ” .
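    If key-based SSH authentication is already set up between the two machines ( an assumption; see ssh-copy-id ), sftp's batch mode needs no expect at all. A sketch, with the actual transfer commented out since it requires a reachable remote:

```shell
# Write the batch file, then hand it to sftp with -b.
cat > /tmp/commandfile <<'EOF'
cd linux
ls
bye
EOF
# sftp -b /tmp/commandfile ftpadmin@adminlogs.info   # runs non-interactively over keys
```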


  • Bash script to monitor bandwidth usage of a linux server

    Scenario :-

    One of my linux servers was causing a bandwidth choke in our DC, so I decided to monitor its bandwidth usage using the following script. I used the command " vnstat " to produce the traffic in/out report. The script checks the bandwidth usage 3 times before sending the e-mail alert; the usage must cross the threshold value in all three checks. If the server is added to nagios then I have a better solution here : Bandwidth Monitoring using Nagios

    1)  vi /usr/local/scripts/check_bandwidth.sh

    #!/bin/bash
    # Monitor bandwidth - developed by Adminlogs.info
    #Tested and verified with CentOS and RedHat 5
    hostname=`hostname`
    vnstat -tr > /tmp/monitor
    i=0
    in=`cat /tmp/monitor | grep rx | grep -v kbit | awk '{print $2}' | cut  -d . -f1`
    inx=$(($in + $i))
    out=`cat /tmp/monitor | grep tx | grep -v kbit | awk '{print $2}' | cut  -d . -f1`
    outx=$(($out + $i))
     
    ##### Second Test after 2 minutes
    sleep 120
    vnstat -tr > /tmp/monitor2
    in2=`cat /tmp/monitor2 | grep rx | grep -v kbit | awk '{print $2}' | cut  -d . -f1`
    inx2=$(($in2 + $i))
    out2=`cat /tmp/monitor2 | grep tx | grep -v kbit | awk '{print $2}' | cut  -d . -f1`
    outx2=$(($out2 + $i))
    
    #### Third Test after 4 minutes
    sleep 120
    vnstat -tr > /tmp/monitor3
    in3=`cat /tmp/monitor3 | grep rx | grep -v kbit | awk '{print $2}' | cut  -d . -f1`
    inx3=$(($in3 + $i))
    out3=`cat /tmp/monitor3 | grep tx | grep -v kbit | awk '{print $2}' | cut  -d . -f1`
    outx3=$(($out3 + $i))
    
    #### condition checking
    [ $outx -ge 10 ] && [ $outx2 -ge 10 ] && [ $outx3 -ge 10 ]
    out4=$?
    if [ $out4 -eq 0 ]
    then
    cat /tmp/monitor /tmp/monitor2 /tmp/monitor3 >> /tmp/monitor_result
    mail -s "Outbond Bandwidth usage is critical on $hostname"  admin[at]adminlogs[dot]info  < /tmp/monitor_result
    fi
    #### clearing old results
    > /tmp/monitor
    > /tmp/monitor2
    > /tmp/monitor3
    > /tmp/monitor_result


    2) Add the script to crontab ( run every 10 minutes )

    */10  * * * * /bin/bash /usr/local/scripts/check_bandwidth.sh  > /dev/null 2>&1


  • Admin Tips 2 : Monitor linux services using bash script

    Scenario :-

    On one of my Resin servers the resin service was crashing due to resource usage, and it was happening at night.

    I used the following script to monitor the status and restart resin if it is not running.

    # vi /home/resin/check-resin.sh

    #!/bin/sh
    run=`ps ax | grep /usr/java/jdk1.6.0_14/bin/java | grep -v grep | cut -c1-5 | paste -s -`
    if [ "$run" ];
    then
    echo "resin is running" > /home/resin/check_resin.log
    else
    /usr/local/resin/bin/resin-servers.sh restart
    mail -s "resin server restarted by check-resin script " admin[at]adminlogs[dot]info < /usr/local/www/hosts/www.adminlogs.info/log/stdout.log
    fi

    Or, if the issue affects only a single website ( shared resin hosting ), you can use the following script and restart just the respective server.

    # vi /home/resin/check-resin.sh

    #!/bin/sh
    cd /tmp
    wget www.adminlogs.info:8080
    if [ $? -gt 0 ]; then
    /usr/local/resin/bin/resin-adminlogs.sh restart
    mail -s "adminlogs resin server restarted by check-resin script "  admin[at]adminlogs[dot]info < /usr/local/www/hosts/www.adminlogs.info/log/stdout.log
    fi

    $? contains the return code of the last executed command, and -gt means greater than. Programs usually return zero on success and something else on failure.
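    A quick illustration of that return-code convention:

```shell
# $? always holds the exit status of the most recent command.
true
echo $?    # prints 0
if [ "$(false; echo $?)" -gt 0 ]; then
    echo "non-zero means the command failed"
fi
```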

    After making some small changes ( use the appropriate daemon ) you can use the above script to monitor other services like Apache, ftpd, mysql etc.

    For example :-

    # vi check_httpd.sh

    #!/bin/sh
    run=`ps ax | grep /usr/local/apache/bin/httpd | grep -v grep | cut -c1-5 | paste -s -`
    if [ "$run" ];
    then
    echo "apache is running" > /home/admin/check_httpd.log
    else
    /usr/local/apache/bin/apachectl -k restart
    mail -s "Apache server restarted by check-httpd script" admin[at]adminlogs[dot]info < /usr/local/apache/logs/error.log
    fi

    Or ( only for apache )

    # vi check_httpd.sh

    #!/bin/sh
    cd /tmp
    wget adminlogs.info:80
    if [ $? -gt 0 ]; then
    /usr/local/apache/bin/apachectl -k restart
    mail -s "Apache server restarted by check-httpd script" admin[at]adminlogs[dot]info < /usr/local/apache/logs/error.log
    fi

    Add the script to crontab ( it will check the status every 5 minutes )

    */5 * * * * /bin/bash check_httpd.sh

    It worked fine, and now I have no worries about that website and am getting good sleep 🙂


  • Admin Tips 1

    1)  Find the number of hits towards your webserver. This one is very helpful for finding out whether you are facing a DDoS.

    netstat -ntu | grep ':80' | awk '{print $5}' | sed 's/::ffff://' | cut -f1 -d ':' | sort | uniq -c | sort -nr  | grep -v 127.0.0.1

    2) Find and remove old files from backup.

    The following command will find and remove all directories older than 10 days whose names contain "2001":
    find /backup/mysql_backup/  -mtime  +10   -type d \( -iname "*-2001*" \) -exec rm -rf {} \;
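    The -mtime +10 predicate can be dry-run safely in a scratch directory before pointing the rm -rf at real backups ( GNU touch -d assumed ):

```shell
# Create a directory that appears 20 days old, then confirm find matches it.
dir=$(mktemp -d)
mkdir "$dir/db-2001-jan"
touch -d "20 days ago" "$dir/db-2001-jan"
find "$dir" -mtime +10 -type d -iname "*-2001*"   # prints the stale directory
```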

    3) Find and remove a file from a specified location

    find . -name  '*.class' |  xargs /bin/rm -f

    ( the dot "." means the current working directory )

    find /usr  -name  '*.class' |  xargs /bin/rm -f
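    One caveat with piping find into xargs: filenames containing spaces get split into separate arguments. The NUL-delimited variant handles them safely, demonstrated here in a throwaway directory:

```shell
dir=$(mktemp -d)
touch "$dir/a.class" "$dir/b c.class"            # note the space in the second name
find "$dir" -name '*.class' -print0 | xargs -0 rm -f
ls "$dir"                                        # nothing left: both were removed
```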

    4)  Archive a folder excluding unwanted directories

    tar -zcvf  /home/adminlogs.tar.gz   --exclude='log' --exclude='tmp'  /home/adminlogs

    If you have more files/folders to exclude, you can create a file and list the exclusions in it:

    # vi exclude.txt
    abc
    abc2
    abc3

    # tar -zcvf  /backup/adminlogs.tar.gz   -X exclude.txt /home/adminlogs

    5) Untar files to a specified location or directory

    # tar -xzf /home/admin/adminlogs.tar.gz -C /tmp
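    A quick round trip of the -C extraction, using throwaway paths:

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/file.txt"
tar -czf "$src.tar.gz" -C "$src" file.txt   # archive the file relative to $src
tar -xzf "$src.tar.gz" -C "$dst"            # unpack it into a different directory
cat "$dst/file.txt"                          # prints hello
```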

    Hope these tips will be helpful for you

  • Linux Local and Remote backup using rsync and rdiff

    Scenario :-

    1)      Take a daily backup of the Linux server to a local backup disk/secondary drive

    2)      Copy the backup to the remote server.

    ( Using this method we can keep our data in 3 different places )

    Setup Local Backup

    1)      create a folder /backup

    2)      rsync the necessary files to /backup

    rsync -avzub /boot/   /backup/boot

    rsync -avzub /usr/local/scripts/   /backup/User_local_scripts

    rsync -avpuzb --copy-links --exclude 'log' --exclude 'logs' --exclude '*.tar' --exclude '*.gz' --exclude '*.zip' --exclude '*.sql'  /usr/local/apache/htdocs    /backup/apache_htdocs

    Copy the local backup to Remote server

    Now all the necessary files are copied to /backup. We need to copy this to a remote backup server for data redundancy.

    I found rdiff-backup is a nice tool for copying folders over a network.

    Rdiff-backup :- a very nice backup tool. We can copy one directory to another, possibly over a network. The target directory ends up a copy of the source directory, but extra reverse diffs are stored in a special subdirectory of that target directory, so you can still recover files lost some time ago. The idea is to combine the best features of a mirror and an incremental backup. rdiff-backup also preserves subdirectories, hard links, dev files, permissions, uid/gid ownership, modification times, extended attributes, ACLs, and resource forks. It can also operate in a bandwidth-efficient manner over a pipe, like rsync, so you can use rdiff-backup and ssh to securely back a hard drive up to a remote location, with only the differences transmitted. Finally, rdiff-backup is easy to use.

    1) Download and install rdiff-backup

    yum install rdiff-backup ( easiest method )

    Or you can download the tarball and install it:

    wget http://savannah.nongnu.org/download/rdiff-backup/rdiff-backup-1.2.8.tar.gz

    2) tar -zxf rdiff-backup-1.2.8.tar.gz

    3) cd rdiff-backup-1.2.8

    4) python setup.py install ( rdiff-backup is a Python application; there is no configure/make step )

    5) Create a script to copy the directory to the remote server.

    vi rdiff-backupTo-Remote.sh

    #!/bin/bash
    # hostname adminlogs.info
    # this will copy the /backup folder to the remote server's /backup location
    rdiff-backup /backup  remoteserverIP::/backup/adminlogs  > /dev/null 2>&1

    6) Add this script to cron ( execute at 1:30 am ):

    30  01 * * * bash  rdiff-backupTo-Remote.sh
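    Because rdiff-backup keeps reverse diffs, older states of /backup remain recoverable from the remote side. For example ( commands shown for illustration; /tmp/restore is an assumed target directory ):

```
# List the increments stored for this backup
rdiff-backup --list-increments remoteserverIP::/backup/adminlogs

# Restore the state as it was 3 days ago
rdiff-backup -r 3D remoteserverIP::/backup/adminlogs /tmp/restore
```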

    NB :- You should allow passwordless login from the local server to the remote server.

    You can use the following url to set up ssh passwordless login:

    http://adminlogs.info/2011/05/27/passwordless-login-ssh/

    That’s it… you have successfully configured your backup script. You can relax; your data is safe and available on 3 disks 😉

  • Server load monitoring script

    It's a small but very helpful bash script to monitor server load. Once the server load reaches the threshold value ( here 5 ), this script will send a mail to the specified address with the following details: the top CPU-consuming processes ( via ps ), free, and w.

    1) Create the script

    >>> vi /usr/local/scripts/load_monitoring.sh

    ###################

    #!/bin/bash
    load=`cat /proc/loadavg | awk '{print $1}' | cut -d. -f1`
    if [ "$load" -ge "5" ]; then
    echo -e " \n####### Process details from Hostname ######\n"  >> /tmp/topresult
    ps -eo pid,user,%cpu,args --sort=%cpu | tail -20  >> /tmp/topresult
    echo -e " \n####### Server memory details ######"  >> /tmp/topresult
    free -m  >> /tmp/topresult
    echo -e " \n####### Server load details ######"  >> /tmp/topresult
    w >> /tmp/topresult
    mail -s "CPU usage high on Hostname"  alerts@adminlogs.info < /tmp/topresult
    > /tmp/topresult
    fi

    ###################

    2) Add the script in your crontab

    >>> crontab -e

    #### Load Monitoring ######
    */10 * * * * /bin/bash /usr/local/scripts/load_monitoring.sh > /dev/null 2>&1

    Hope this helps you monitor your servers in critical situations
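    As a closing note, a fixed threshold of 5 means different things on different machines; a hedged variant scales the threshold with the core count ( nproc from coreutils assumed ):

```shell
# Alert when the 1-minute load average reaches the number of CPU cores.
cores=$(nproc)
load=$(awk '{print $1}' /proc/loadavg | cut -d. -f1)
if [ "$load" -ge "$cores" ]; then
    echo "load $load is at or above the $cores-core threshold"
fi
```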