• Set up your own Git server!

                              Git server setup with Gitolite

    Git is a distributed version control system developed by Linus Torvalds ( 2005 ) for the Linux kernel project. Performance-wise, I feel it is far better than other version control systems such as CVS and SVN.

    Here we use two other tools to work with Git in a more controlled way:

    1)      Gitolite : works on top of Git and gives us fine-grained control over the repositories and the users who access these projects.

    2)      Gitweb : a nice web front end for Git.

    Installation of Git : –

    # As root:

    – Red Hat Enterprise Linux 5 / i386:

    rpm -Uhv http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/rpmforge-release-0.3.6-1.el5.rf.i386.rpm

    – Red Hat Enterprise Linux 5 / x86_64:

    rpm -Uhv http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS//rpmforge-release-0.3.6-1.el5.rf.x86_64.rpm 

    # yum -y install git

    Install Gitolite : –

    # yum --enablerepo=epel-testing install gitolite

    We should first install the EPEL repo:

    rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-7.noarch.rpm

    ( See http://fedoraproject.org/wiki/EPEL/FAQ#How_can_I_install_the_packages_from_the_EPEL_software_repository.3F )

    Setup Gitolite environment : –  ( on Server )

    # usermod -d /srv/git gitolite
    # cd /srv/ ; mkdir git
    #  chown gitolite:gitolite git/

    Create the Administrator user to manage the Git repositories .

    On any of your client machines, generate a public key and copy it to the Git server:

    [[email protected] ~]$ ssh-keygen -t rsa -b 2048 -C "Admin"
    # cp .ssh/id_rsa.pub   .ssh/git-admin.pub
    # scp   .ssh/git-admin.pub  [email protected]:/tmp

    Log back in to the Git server and switch to the "gitolite" user:

    # su – gitolite
    # gl-setup /tmp/git-admin.pub

    After this command completes successfully, it creates the "repositories" directory and the "projects.list" file.
    Now the git-admin user ( from the client machine ) can create repositories and manage the users who access them.

    [ From Client machine ]

    Introduce yourself to the Git server first; this will give you nice logs:

    # git config --global user.name "Admin"
    # git config --global user.email "[email protected]"

    Playing with Git : – ( git clone , git add and git push )

    #  git clone [email protected]:gitolite-admin
    #  cd gitolite-admin

    This admin repository contains two directories: conf/ and keydir/ . conf/gitolite.conf helps you manage the repos and users as follows:

    #  cat gitolite.conf
    @developers = root hari william
    @qa = jack lilly

    repo    gitolite-admin
            RW+     =   @developers

    repo    adminlogs.com
            RW+     =   @developers
            R       =   @qa

    repo    adminlog.info
            RW+     =   @all

    Note that you can group users, like @developers, @qa, etc.

    You can also add any number of public keys from your project members to the keydir directory: copy in keys like "hari.pub", "jack.pub", "william.pub", etc. Please note that the user names in gitolite.conf must match the key file names.

    For example, if the key name is lilly.pub, then the allowed user in the configuration file must be "lilly" ( without .pub ).

    Once you have completed the changes, commit and push the files to the master repo:

    # git remote -v
    # git add .
    # git commit -a -m "Added new entries in gitolite.conf and new pub keys in keydir"
    # git push origin master

    That's it! You have successfully configured Git with Gitolite. Now all your team members whose keys were added to the Git admin repo can create and push their own projects!

    Just try it, it's an awesome tool!
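
    As a sketch, here is what that workflow looks like for a hypothetical developer "hari". The server address and repo name are placeholders from the examples above, and a local bare repository stands in for the Gitolite-hosted one so the commands can be run anywhere:

    ```shell
    # On the real server this would be: git clone gitolite@gitserver:adminlogs.com
    # Here a local bare repo stands in for the Gitolite-hosted repository.
    git init --bare /tmp/adminlogs.com.git
    git clone /tmp/adminlogs.com.git /tmp/adminlogs.com
    cd /tmp/adminlogs.com
    git config user.name  "hari"
    git config user.email "hari@example.com"
    echo "hello" > index.html
    git add index.html
    git commit -m "first commit"
    git push origin HEAD     # with Gitolite, the push is authorized via hari.pub
    ```

    With a real Gitolite server, access control for this push comes entirely from the RW+/R rules in gitolite.conf.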

  • libtool error with sysbench-0.4.12

    Sysbench is a very good tool for testing your database performance.

    I got the following error today while setting up sysbench version 0.4.12:

    Error : –

    /bin/sh ../libtool --tag=CC   --mode=link gcc -pthread -g -O2      -o sysbench sysbench.o sb_timer.o sb_options.o sb_logger.o db_driver.o tests/fileio/libsbfileio.a tests/threads/libsbthreads.a tests/memory/libsbmemory.a tests/cpu/libsbcpu.a tests/oltp/libsboltp.a tests/mutex/libsbmutex.a drivers/mysql/libsbmysql.a -L/usr/local/mysql/lib/ -lmysqlclient_r   -lrt -lm
    ../libtool: line 838: X--tag=CC: command not found
    ../libtool: line 871: libtool: ignoring unknown tag : command not found
    ../libtool: line 838: X--mode=link: command not found
    ../libtool: line 1004: *** Warning: inferring the mode of operation is deprecated.: command not found
    ../libtool: line 1005: *** Future versions of Libtool will require --mode=MODE be specified.: command not found
    ../libtool: line 2231: X-g: command not found
    ../libtool: line 2231: X-O2: command not found
    ../libtool: line 1951: X-L/usr/local/mysql/lib/: No such file or directory
    ../libtool: line 2400: Xsysbench: command not found
    ../libtool: line 2405: X: command not found
    ../libtool: line 2412: Xsysbench: command not found
    ../libtool: line 2420: mkdir /.libs: No such file or directory
    ../libtool: line 2547: X-lmysqlclient_r: command not found
    ../libtool: line 2547: X-lrt: command not found
    ../libtool: line 2547: X-lm: command not found
    ../libtool: line 2629: X-L/root/sysbench-0.4.12/sysbench: No such file or directory
    ../libtool: line 2547: X-lmysqlclient_r: command not found
    ../libtool: line 2547: X-lrt: command not found

    Confused ?

    After getting the above error, I re-ran "autogen.sh", and its output led to the fix:

    sysbench/drivers/mysql/Makefile.am:17: library used but `RANLIB' is undefined
    sysbench/drivers/mysql/Makefile.am:17:   The usual way to define `RANLIB' is to add `AC_PROG_RANLIB'
    sysbench/drivers/mysql/Makefile.am:17:   to `configure.ac' and run `autoconf' again.
    sysbench/drivers/oracle/Makefile.am:17: library used but `RANLIB' is undefined
    sysbench/drivers/oracle/Makefile.am:17:   The usual way to define `RANLIB' is to add `AC_PROG_RANLIB'
    sysbench/drivers/oracle/Makefile.am:17:   to `configure.ac' and run `autoconf' again.
    sysbench/drivers/pgsql/Makefile.am:17: library used but `RANLIB' is undefined

    How to fix ?  : –

    1) Edit the configure.ac file: comment out the AC_PROG_LIBTOOL line ( line 75 ) and add AC_PROG_RANLIB in its place.

    vi sysbench-0.4.12/configure.ac
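
    The change can also be scripted. This is only a sketch run against a stand-in configure.ac ( the exact line number varies between tarballs ); run the same sed against the real sysbench-0.4.12 tree:

    ```shell
    # Create a stand-in configure.ac containing the offending macro
    printf 'AC_INIT\nAC_PROG_LIBTOOL\n' > configure.ac
    # Comment out AC_PROG_LIBTOOL (dnl is the m4 comment) and add AC_PROG_RANLIB
    sed -i 's/^AC_PROG_LIBTOOL/dnl AC_PROG_LIBTOOL\nAC_PROG_RANLIB/' configure.ac
    cat configure.ac
    ```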


    2)  ./autogen.sh

    3)  ./configure

    4)  make && make install

    That’s it !!!  😉


  • VMware ESXi 5.1 driver issue with Dell servers



    Today I had to set up VMware ESXi on a new server ( Dell PowerEdge R320 ).
    First, I downloaded the latest ESXi 5.1 and tried to install it, but it kept throwing network driver errors.

    After some googling, I found a "repaired" VMware ESXi 5.1 ISO ( a Dell-customized image ) on Dell support.

    If you are facing the same issue then please download the Dell .iso from the following location.


    Sometimes you may have trouble downloading from this location; in that case try "wget" on Linux:

    wget http://downloads-us.dell.com/FOLDER00876460M/1/VMware-VMvisor-Installer-5.1.0-799733.x86_64-Dell_Customized_RecoveryCD_A00.iso

    That's it!! Enjoy the new ESXi 5.1!


  • How to get mail statistics from your postfix mail logs

    Overview :-

    For the last few years I have been supporting Postfix mail servers. I would like to share a nice log analysis tool that I have used a lot: "Postfix Log Entry Summarizer" ( pflogsumm ).

    It is an amazing tool and will provide you the following details:

    • Total number of:
      • Messages received, delivered, forwarded, deferred, bounced and rejected
      • Bytes in messages received and delivered
      • Sending and Recipient Hosts/Domains
      • Senders and Recipients
      • Optional SMTPD totals for number of connections, number of hosts/domains connecting, average connect time and total connect time
    • Per-Day Traffic Summary (for multi-day logs)
    • Per-Hour Traffic (daily average for multi-day logs)
    • Optional Per-Hour and Per-Day SMTPD connection summaries
    • Sorted in descending order:
      • Recipient Hosts/Domains by message count, including:
        • Number of messages sent to recipient host/domain
        • Number of bytes in messages
        • Number of defers
        • Average delivery delay
        • Maximum delivery delay
      • Sending Hosts/Domains by message and byte count
      • Optional Hosts/Domains SMTPD connection summary
      • Senders by message count
      • Recipients by message count
      • Senders by message size
      • Recipients by message size

      with an option to limit these reports to the top nn.

    • A Semi-Detailed Summary of:
      • Messages deferred
      • Messages bounced
      • Messages rejected
    • Summaries of warnings, fatal errors, and panics
    • Summary of master daemon messages

    Installation :-

    Installation is very simple: just download the package and extract it.

    •  wget http://jimsun.linxnet.com/downloads/pflogsumm-1.1.1.tar.gz
    •  tar -zxf pflogsumm-1.1.1.tar.gz
    • chown root:root pflogsumm-1.1.1


    Generate the statistics  :-

    #  cat /var/log/maillog | ./pflogsumm.pl
    ( The above command will generate a detailed statistics as follows )

    Grand Totals

       1867   received
       3523   delivered
          0   forwarded
        707   deferred  (75 deferrals)
         35   bounced
        259   rejected (6%)
          0   reject warnings
          0   held
          0   discarded (0%)

      55528k  bytes received
      71732k  bytes delivered
         46   senders
         32   sending hosts/domains
        649   recipients
        350   recipient hosts/domains

    Per-Day Traffic Summary
        date          received  delivered   deferred    bounced     rejected
        Jul 17 2011       257       2003       7295          8
        Jul 18 2011       471        352         94          2        216
        Jul 19 2011       986       1000        145         23         33
        Jul 20 2011       153        168         55          2         10

    Per-Hour Traffic Daily Average
        time          received  delivered   deferred    bounced     rejected
        0000-0100           9          9          3          0          1
        0100-0200          11         10          4          1          4
        0200-0300          10         10          3          0          2
        0300-0400          11         13          3          0          2
        0400-0500          16         82        287          1          2
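
    In practice I cron this so the report lands in the admin mailbox every morning. A sketch ( the install path is an assumption; pflogsumm's -d yesterday option limits the report to yesterday's log entries ):

    ```shell
    # crontab entry: mail yesterday's Postfix summary at 06:00 every day
    0 6 * * * perl /usr/local/pflogsumm-1.1.1/pflogsumm.pl -d yesterday /var/log/maillog | mail -s "Postfix daily report" root
    ```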

    I am sure this will be definitely helpful for somebody who is working with postfix mail servers.

  • New PHP-CGI exploit: CVE-2012-1823, Badly affecting php scripts

    Recently some folks reported an interesting and nasty bug in PHP which allows an intruder to view source code and access the file system.

    As per the update from PHP ( http://php.net ), this bug went unnoticed for at least the past 8 years.

    # Who all are affected ?

    If you are using Apache mod_cgi to run PHP you may be vulnerable to this bug.

    # Are you safe ?

    Just pass the argument "?-s" to any of your PHP pages and see. Are you shocked?
    If you pass the following arguments to your site, say example.com:

    1) http://example.com/index.php?-s
    This dumps the source code of index.php ( in simple words, it displays the contents of the file index.php ).

    2) http://example.com/index.php?-dauto_prepend_file%3d/etc/passwd+-n
    This displays your /etc/passwd file!

    # Which all php versions are affected ?

    The PHP Group – PHP 5.3.10, 5.3.11, 5.4.0 and 5.4.1

    # How to fix ?

    To fix this, upgrade your php to PHP 5.3.12 or PHP 5.4.2.

    # Any Patch ?

    Yes, PHP has provided a temporary workaround. I have tested it ( on PHP 5.3.10 ) and confirmed that it closes the loophole.
    Apply the following rewrite rules in the .htaccess file in your site's DocumentRoot:

             RewriteCond %{QUERY_STRING} ^(%2d|-)[^=]+$ [NC]
             RewriteRule ^(.*) $1? [L]


    # More Reference ?

  • STrace : Third Eye of a System Admin


    It was early on a Monday morning when I got a call from my SL1 team offshore, regarding a unique issue with an application hosted on the Apache Tomcat platform. After the initial investigation, the team noted the following and escalated the ticket to my queue:

    1) Site was not loading / 500 Internal Server Error
    2) Apache error log was throwing “ Premature End of Script “

    I started working on the issue and found that the "php" processes for that site were hitting the maximum allowed connections in Apache. I couldn't find any more info in the Apache logs.

    For example, if adminlogs.info is the site, "admin" is the username, and 50 is the maximum allowed number of connections:

    # ps aux | grep admin | wc -l
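
    As a side note, grep can match itself and unrelated lines, so counting the user's processes directly is slightly more robust. A sketch ( "admin" only exists on that server, so this defaults to the current user for demonstration ):

    ```shell
    # Count processes owned by a given user (default: the current user)
    user=${1:-$(id -un)}
    ps -u "$user" --no-headers | wc -l
    ```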

    I decided to kill all these processes and restart Apache. As expected, the site started working normally, but after a few minutes it hit the limit of 50 again! I felt something was stuck somewhere and the process cycle was not completing properly.

    And finally it was time to take a deeper look on the issue !!

    strace : strace is an excellent diagnostic tool for Linux admins; it traces system calls and signals.

    I decided to use strace to dig into one of these processes, for example pid 27776:

    1) [email protected]:~ # strace -f -p 27776
    Process 27776 attached - interrupt to quit
    select(1024, [13], [], NULL, NULL

    Here it clearly shows the process is stuck in a "select" system call. Unfortunately, at this early stage of the investigation we were not able to identify file descriptor 13.

    ( In the above strace output , the first argument (1024) is the max number of file descriptors in a set, the second ([13]) is the set of file descriptors polled for reading, the third ([] – empty set) is the set of file descriptors polled for writing. )

    2) We can see the details of file descriptor (13) using  the ” lsof ” command as follows

    [email protected]:~ # lsof  -p 27776

    php-5.3.6 27009 dw02290c   13u IPv4 1612251401                 TCP server.web-global.com:49062->ldap-global4.com:ldap(ESTABLISHED)

    The above "lsof" output revealed that file descriptor 13 ( "13u" ) is an established TCP connection to a remote/backend LDAP server. The process was getting no response from the remote LDAP server, and hence it became stuck/hung.

    3)  I confirmed the connection using the netstat command as well:

    [email protected]:~ # netstat -a | grep  ldap-global4.com
    tcp        0      0 server.web-global.com:49062  ldap-global4.com:ldap  ESTABLISHED

    4)  We then advised the client to remove the problem LDAP server "ldap-global4.com" from their application's configured server pool ( the issue was present on the staging site as well ), and later deployed the change to the live site after testing on the staging site/server.

    Hope this will be helpful to some of my friends facing similar issues in the future.

  • Hash Table Vulnerability or Hash Collision

    Description :-

    A hash table or hash map is a data structure that uses a hash function to map identifying values, known as keys, to their associated values; thus, a hash table implements an associative array. Application servers store POST form data in hash tables so that it can be used later by the application. If more than one key is hashed to the same value by the hash function, the result is a hash collision. Any application platform that uses a hash function can be affected by this vulnerability.

    A recent report from n.runs AG explains that "If the language does not provide a randomized hash function or the application server does not recognize attacks using multi-collisions, an attacker can degenerate the hash table by sending lots of colliding keys. The algorithmic complexity of inserting n elements into the table then goes to O(n**2), making it possible to exhaust hours of CPU time using a single HTTP request".

    This in turn enables DoS ( Denial of Service ) attacks. In general terms, DoS attacks are implemented either by forcing the targeted computer(s) to reset, by consuming their resources so that they can no longer provide the intended service, or by obstructing the communication media between the intended users and the victim so that they can no longer communicate adequately.

    How the attack works :-

    For example, suppose three different keys, namely wolf, tiger and elephant, are all hashed to the same value 05 by the hash function. This increases the complexity of processing any request that involves these keys and finally results in hash collisions.

    Let us now take a quick glance at how these hash table vulnerabilities affect Apache Tomcat, Java and PHP.

    1) Apache Tomcat

    As the Apache Tomcat uses hash tables for storing various http request parameters, it is affected by the above mentioned issues.

    As a remedial measure, Tomcat's Mark Thomas said: "Tomcat has implemented a work-around for this issue by providing a new option (maxParameterCount) to limit the number of parameters processed for a single request. This default limit is 10000: high enough to be unlikely to affect any application; low enough to mitigate the effects of the DoS."

    The workaround is available in versions 7.0.23 and onwards, and 6.0.35 and later. It is suggested to implement these measures and to upgrade to safer versions which are less prone to such attacks.

    2) Java

    Java uses the HashMap and Hashtable classes, which rely on the String.hashCode() hash function, so it can also be affected by hash collisions.

    3) PHP

    The case of PHP is no different. It uses another hash function, which leaves it open to the same attacks. PHP recommends tweaking max_input_time and max_execution_time. They have also released a bug fix, but not yet for production servers.

    Refer : http://www.php.net/archive/2011.php#id2011-12-25-1

    For those working platforms, where fixes are not yet released, the suggested work arounds are:-

    1. Limit CPU time
      Limiting the processing time for a single request can help minimize the impact of malicious requests.

    2. Limit maximum POST size
      Limiting the maximum POST request size can reduce the number of possible predictable collisions, thus reducing the impact of an attack.

    3. Limit maximum request parameters
      Some servers offer the option to limit the number of parameters per request, which can also minimize impact.
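
    For PHP specifically, the three workarounds above map onto php.ini directives. The values below are illustrative only, not tuned recommendations:

    ```ini
    ; Limit CPU time per request
    max_execution_time = 30
    max_input_time     = 60
    ; Limit maximum POST size
    post_max_size      = 1M
    ; Limit maximum request parameters (directive available from PHP 5.3.9)
    max_input_vars     = 1000
    ```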

    In short, the basic idea is to limit the CPU utilization a single request can consume; that way you can at least keep such attacks under control until the respective fixes are released.



  • Install mysql 5.5 from source

    I got a request from one of my clients to set up a MySQL server with the latest version. I decided to install MySQL from source, because I always love compilation 🙂

    As usual I downloaded the latest source and fired off "./configure" with options. But the result was not good 🙁


    =>> Download the latest MySQL source

    wget http://dev.mysql.com/get/Downloads/MySQL-5.5/mysql-5.5.15.tar.gz/from/http://mysql.oss.eznetsols.org/
    =>> Configure ( Old Story )

    1)  tar -zxf mysql-5.5.15.tar.gz
    2)  cd mysql-5.5.15
    3)  ./configure
    ./configure: command not found

    ( After some googling, I found that from MySQL 5.5 onwards, CMake is used as the build framework on all platforms. )


    =>> Download and install cmake

    $  wget http://www.cmake.org/files/v2.8/cmake-2.8.5.tar.gz
    $  tar zxvf cmake-2.8.5.tar.gz
    $  cd cmake-2.8.5
    $  yum install gcc-c++
    $  ./configure
    $  make
    $  make install


    =>> Configure ( New Story )

    $ cd mysql-5.5.15

    Configure using cmake:

    cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysql5 -DMYSQL_TCP_PORT=3306 -DMYSQL_UNIX_ADDR=/tmp/mysql.sock

    — Could NOT find Curses (missing:  CURSES_LIBRARY CURSES_INCLUDE_PATH)
    CMake Error at cmake/readline.cmake:83 (MESSAGE):
    Curses library not found.  Please install appropriate package,

    $ yum install ncurses-devel

    $ rm -f CMakeCache.txt    ( roughly equivalent to "make clean" )

    Run the cmake command again after fixing the curses error:

    cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysql5 -DMYSQL_TCP_PORT=3306 -DMYSQL_UNIX_ADDR=/tmp/mysql.sock

    more cmake configuration options here :  cmake options

    $  make

    $ make install


    =>> Post installation Steps
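
    The chown step below assumes a "mysql" user and group already exist. On a fresh box you may need to create them first; a sketch ( needs root ):

    ```shell
    # Create the mysql group and a system user with no login shell, if missing
    getent group mysql  >/dev/null || groupadd mysql
    getent passwd mysql >/dev/null || useradd -r -g mysql -s /bin/false mysql
    ```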

    $  cp support-files/my-medium.cnf   /etc/my.cnf

    $  cp support-files/mysql.server   /etc/init.d/mysql

    $  chown -R mysql:mysql .

    $  ./scripts/mysql_install_db --user=mysql --datadir=/var/lib/mysql

    $  /etc/init.d/mysql restart

    $  ./bin/mysqladmin -u root password 'new-password'

    $  ./bin/mysql_secure_installation

    That's it, you have installed MySQL 5.5 successfully. You can configure / optimize MySQL using the my.cnf file.


    =>> Test the installation

    $ mysql -u root -p

    > create database test1;
    > use test1;
    > create table demo ( id int );

    Hope this will be helpful for someone struggling with "./configure: command not found" in the latest MySQL version.

    How to reset mysql root password : click here 🙂

  • Resin SSL configuration in five steps !!

    Resin is a powerful web server which runs smoothly with Java and HTML ( it supports PHP as well ). It is very tough to get details about Resin anywhere except the www.caucho.com site. Hope this doc will be helpful for system admins who are working with Resin. It is a simple 5-step doc to set up an SSL certificate for Resin.


    1)  Create Key

    openssl genrsa -des3 -out www.adminlogs.info.key 2048

    2)  Create CSR

    openssl req -new -key www.adminlogs.info.key -out www.adminlogs.info.csr

    3) Purchase the SSL using the above csr

    4) SSL configuration for resin web server
    ( We should save the SSL key, chain file ( CA bundle ) and certificate in "/usr/local/resin/keys" )

    # vi /usr/local/resin/resin.conf

    <server id="adminlogs" address="">
      <http id="adminlogs" address="" port="8080"/>
      <http id="adminlogs" address="" port="8443">
        <openssl>
          <certificate-file>keys/adminlogs.crt</certificate-file>
          <certificate-key-file>keys/www.adminlogs.info.key</certificate-key-file>
          <certificate-chain-file>keys/admin-inter.txt</certificate-chain-file>
          <password>*****</password>
        </openssl>
      </http>
    </server>

    Terms :-
    certificate-file : SSL certificate location
    certificate-key-file : SSL key file location
    certificate-chain-file : chain file location
    The chain file contains both the CA bundle and the SSL certificate.
    Create the file with the certificate first and then the CA bundle:
    cat adminlogs.crt >> admin-inter.txt
    cat intermediate.txt >> admin-inter.txt

    password : password given at the time of SSL key creation in the first step
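
    If the restart in step 5 complains about a key/certificate mismatch, compare their moduli. This sketch generates a throwaway self-signed pair just so it can run anywhere; in practice use your real key and certificate files:

    ```shell
    # Generate a throwaway key and self-signed certificate for the demo
    openssl genrsa -out demo.key 2048 2>/dev/null
    openssl req -new -x509 -key demo.key -subj "/CN=www.adminlogs.info" -out demo.crt
    # The two md5 sums must be identical for a matching key/cert pair
    openssl rsa  -noout -modulus -in demo.key | openssl md5
    openssl x509 -noout -modulus -in demo.crt | openssl md5
    ```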

    5) Restart resin

    /usr/local/resin/bin/resin-servers.sh restart

  • Daily, weekly and monthly backups from Linux to Windows

    Scenario :-

    Set up backup scripts to take daily, weekly and monthly backups to a remote Windows server. I wrote these bash scripts to meet the client's requirement and they work perfectly. With some minor changes you can use the same scripts to set up daily, weekly and monthly backups to a local Linux server. I set up separate scripts for the daily, weekly and monthly backups, so that anybody searching for the same scenario can understand the logic very easily.

    Overview :-

    1)  Created a folder "backup" on the remote Windows server's backup drive.

    2)  Created a backup user on the Windows server.

    3)  Granted this user the necessary privileges on the backup directory.

    4)  Mounted the Windows backup drive via the /etc/fstab file as follows ( with the Windows username and password ):

    //winserver/backup  /WIN_BACKUP  cifs  username=backup,password=pass 0 0

    5)  Set up separate daily, weekly and monthly backup scripts as follows.


    Crontab entries :-

    #### Daily backup at 03:01 am, Monday to Saturday
    01 03  * * 1-6  /bin/bash /usr/local/scripts/daily_backup.sh > /dev/null 2>&1

    ##### Weekly backup - every Sunday at 05:01 am
    01 05  * * 0    /bin/bash /usr/local/scripts/weekly_backup.sh > /dev/null 2>&1

    ##### Monthly  backup – First day of every month at 06:01 am
    01 06 1 * *  /bin/bash /usr/local/scripts/monthly_backup.sh > /dev/null 2>&1


    Backup Script Files :-


    1)   /usr/local/scripts/daily_backup.sh

    #!/bin/bash
    PATH=/usr/bin:/bin:/usr/sbin:/sbin
    export PATH
    ## Find the day; output will be "Mon", "Tue", "Wed", etc.
    path=`date +%a`
    # Already created the folders Mon,Tue,Wed,...,Sat inside /WIN_BACKUP/daily
    # Backup the scripts directory
    rsync -avzub --copy-links /usr/local/scripts/   /WIN_BACKUP/daily/$path/scripts
    # Backup website files
    rsync -avzub --copy-links --exclude 'log' --exclude 'logs' --exclude '*.tar' --exclude '*.gz' --exclude '*.zip' --exclude '*.sql' /usr/local/www/   /WIN_BACKUP/daily/$path/UsrLocalWww


    2)  /usr/local/scripts/weekly_backup.sh

    #!/bin/bash
    PATH=/usr/bin:/bin:/usr/sbin:/sbin
    export PATH
    cd /WIN_BACKUP/website_weekly/
    mkdir sun-`date +%Y%m%d`
    cd sun-`date +%Y%m%d`
    mkdir -p scripts
    mkdir -p UsrLocalWww
    # Backup scripts directory
    rsync -avzub --copy-links /usr/local/scripts/    /WIN_BACKUP/website_weekly/sun-`date +%Y%m%d`/scripts
    # Backup website files 
    rsync -avzub --copy-links --exclude 'log' --exclude 'logs' --exclude '*.tar' --exclude '*.gz' --exclude '*.zip' --exclude '*.sql' /usr/local/www/  /WIN_BACKUP/website_weekly/sun-`date +%Y%m%d`/UsrLocalWww


    3) /usr/local/scripts/monthly_backup.sh

    #!/bin/bash
    PATH=/usr/bin:/bin:/usr/sbin:/sbin
    export PATH
    ## Find the current month; output will be "Jan", "Feb", "Mar", etc.
    path=`date +%b`
    # Create the corresponding directories for the current month
    mkdir -p /WIN_BACKUP/website_monthly/$path/scripts
    mkdir -p /WIN_BACKUP/website_monthly/$path/UsrLocalWww
    # Backup scripts directory
    rsync -Cavz /usr/local/scripts/   /WIN_BACKUP/website_monthly/$path/scripts
    # Backup all websites
    rsync -Cavz --exclude 'log'  --exclude 'logs'  --exclude '*.tar'  --exclude '*.gz'  --exclude '*.zip' --exclude '*.sql'  /usr/local/www/   /WIN_BACKUP/website_monthly/$path/UsrLocalWww
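
    One thing the scripts above never do is delete anything, so the weekly folder grows forever. A retention sketch, demoed on a temp dir so it can run anywhere; in production point WEEKLY at /WIN_BACKUP/website_weekly ( the 60-day window is an arbitrary example ):

    ```shell
    # Demo: two weekly snapshot dirs, one backdated past the retention window
    WEEKLY=$(mktemp -d)
    mkdir -p "$WEEKLY/sun-20200101" "$WEEKLY/sun-20991231"
    touch -d '100 days ago' "$WEEKLY/sun-20200101"
    # Prune weekly snapshots whose directory mtime is older than 60 days
    find "$WEEKLY" -maxdepth 1 -type d -name 'sun-*' -mtime +60 -exec rm -rf {} +
    ls "$WEEKLY"
    ```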

    It took me almost a day to complete this setup and now it is running fine 🙂 . Hope this documentation helps somebody who is looking for the same setup.

    For mysql daily,weekly and monthly backup setup check : MySql Backup Script