nothing is impossible!!!!

Monday, October 26, 2009

Wednesday, September 16, 2009

How to configure an APC PDU using minicom

Prerequisite:

PDU installation with proper network and console connection.


Configuration:
1. Run minicom -s -c on
-s - Setup mode. It offers several options to set up your console and edits /etc/minirc.dfl, the default configuration file.
-c on - Enables color on the console.

Note: you can also run just "minicom -s"

Your screen should be like this,

┌─────[configuration]──────┐
│ Filenames and paths      │
│ File transfer protocols  │
│ Serial port setup        │
│ Modem and dialing        │
│ Screen and keyboard      │
│ Save setup as dfl        │
│ Save setup as..          │
│ Exit                     │
│ Exit from Minicom        │
└──────────────────────────┘


2. Select the "Serial port setup" option and point it at the correct serial device. I had terminated the serial console cable from the PDU on my Linux box, so my device was /dev/ttyS0.

The device name may differ depending on the OS.

┌────────────────────────────────────┐
│ A - Serial Device : /dev/ttyS0     │
│ B - Lockfile Location : /var/lock  │
│ C - Callin Program :               │
│ D - Callout Program :              │
│ E - Bps/Par/Bits : 9600 8N1        │
│ F - Hardware Flow Control : No     │
│ G - Software Flow Control : No     │
│                                    │
│ Change which setting?              │
└────────────────────────────────────┘


3. I had to set Hardware Flow Control to No as it was Yes by default.

4. Choose "Save setup as dfl" to save the configuration. By default it saves the configuration in /etc/minirc.dfl. Here is my minirc.dfl:

# Machine-generated file - use "minicom -s" to change parameters.
pr port /dev/ttyS0
pu baudrate 9600
pu bits 8
pu parity N
pu stopbits 1
pu rtscts No


5. Once you save and exit from the setup, minicom initializes the serial console and gives you access to the PDU. The next screen should look something like this:

Welcome to minicom 2.1


OPTIONS: History Buffer, F-key Macros, Search History Buffer, I18n
Compiled on Jul 26 2006, 06:38:09.


Press CTRL-A Z for help on special keys

User Name : apc
Password : ***



6. The default username and password for APC PDUs is apc/apc.

American Power Conversion Network Management Card AOS v3.7.0
(c) Copyright 2008 All Rights Reserved Rack PDU APP v3.7.0
----------------------------------------------------------------------------------------------------------
Name : Engineering Rack PDU Date : 29.03.2000
Contact : Nilesh Patil Time : 19:23:22
Location : Server Room 1 User : Administrator
Up Time : 0 Days 5 Hours 0 Minutes Stat : P+ N+ A+


Switched Rack PDU: Communication Established

------- Control Console -------------------------------------------------------

1- Device Manager
2- Network
3- System
4- Logout


- Main Menu, - Refresh, - Event Log
>


7. Select the 2- Network option to configure the network. You can define network settings like IP, netmask, gateway, etc.


------- Network ---------------------------------------------------------------

1- TCP/IP
2- DNS
3- Ping Utility
4- FTP Server
5- Telnet/SSH
6- Web/SSL/TLS
7- Email
8- SNMP
9- Syslog
10- ISX Protocol

- Back, - Refresh, - Event Log



8. Likewise, you can configure the rest of the PDU as per your needs.

Wednesday, August 26, 2009

How to setup rsync

If you want to run rsync as a daemon, make sure of the following:
- The rsync entry in /etc/services is not commented out (hashed).

- I have created the following script to start the rsync daemon, which uses the /etc/rsyncd.conf file.

######### Creating start/stop script ..... /etc/rc.d/init.d/rsyncd

#!/bin/sh
# rsyncd       This shell script takes care of starting and stopping the rsync daemon.
# description: Rsync is an awesome replication tool.

# Source function library.
. /etc/rc.d/init.d/functions

[ -f /usr/bin/rsync ] || exit 0

case "$1" in
  start)
        action "Starting rsyncd: " /usr/bin/rsync --daemon
        ;;
  stop)
        action "Stopping rsyncd: " killall rsync
        ;;
  *)
        echo "Usage: rsyncd {start|stop}"
        exit 1
esac
exit 0

##########

######### Create /etc/rsyncd.conf file

### Rsync Configurations ###
uid = nobody
gid = nobody
use chroot = no
max connections = 10
syslog facility = local5
pid file = /var/run/rsyncd.pid
motd file = /etc/rsyncd.motd
lock file = /var/run/rsync.lock

[daily_backup]
path = /backup/
auth users = backup
comment = Main backup directory.
#### secrets file = /etc/rsyncd.secrets


IMP NOTE:
Take care with the trailing "/" when specifying the source directory for data copying.
For example,

rsync -avz -e ssh remoteuser@remotehost:/remoterdir/data/for/copy/ /local/data/dir

With the trailing slash, rsync copies the contents of /remoterdir/data/for/copy/ directly into /local/data/dir, without creating a "copy" subdirectory.

rsync -avz -e ssh remoteuser@remotehost:/remoterdir/data/for/copy /local/data/dir

Without the trailing slash, a "copy" directory is created first under /local/data/dir and the data is populated into it from the remote host.


I have created a script to copy data to a remote machine.

#!/bin/bash
# Script to copy data on remote machine.

RSYNC=/usr/bin/rsync
RSSH=/usr/bin/ssh
RUSER=backup
RHOST=backup.remote.host.com
RDATABASE=/backup/Database
RDIR=/backup/Directory/

$RSYNC -avz -e $RSSH /backup/daily/ $RUSER@$RHOST:$RDATABASE/MySQL/
$RSYNC -avz -e $RSSH /var/www/html/ $RUSER@$RHOST:$RDIR


To copy data remotely without interruption, please generate passwordless SSH keys. Use "ssh-keygen".
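A minimal sketch of that key setup (the key path is a scratch location for the demo; the remote user/host are the hypothetical ones from the script above):

```shell
# Remove leftovers so ssh-keygen does not prompt to overwrite.
rm -f /tmp/demo_key /tmp/demo_key.pub

# Generate a key pair with an empty passphrase (-N "").
# In real use, write to the default ~/.ssh/id_rsa instead.
ssh-keygen -q -t rsa -N "" -f /tmp/demo_key

# Install the public key on the remote host so rsync-over-ssh
# stops prompting for a password:
#   ssh-copy-id -i /tmp/demo_key.pub backup@backup.remote.host.com
```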

Monday, August 24, 2009

Postgresql backup and restore.

The first method uses pg_dump; the second is a file-system-level backup.

1. Dump-level backup:
pg_dump dbname > outfile

Options available with pg_dump:
-h hostname. Default is localhost, or whatever the PGHOST variable is set to.
-p port. PGPORT environment variable.
-U username. Default is the logged-in user name. PGUSER environment variable.
outfile - name of the target file.

Note:
pg_dump does not block other operations on the database while it is working. (Exceptions are those operations that need to operate with an exclusive lock, such as VACUUM FULL.)

Restore:
psql dbname < infile
infile - the file you used as outfile for the pg_dump command.

It is suggested to run ANALYZE on each database to obtain useful statistics. Run
vacuumdb -a -z to VACUUM ANALYZE all databases.

pg_dump and psql can also be used to dump a database directly from one server to another;
for example:
pg_dump -h host1 dbname | psql -h host2 dbname

2. File-system-level backup
tar -cf backup.tar /usr/local/pgsql/data
- The server must be shut down before taking the backup.
- To restore, you have to restore the full data directory; you cannot partially restore individual tables.

Wednesday, August 12, 2009

Open GUI application via ssh

ssh -p 2222 -l nilesh -X -v {remote.host.ip.or.hostname}

-p 2222 In case your server runs a non-standard TCP port.
(If yours runs on the default port (TCP port 22), there is no need to add this option.)

-l nilesh is only required if the usernames on the local and remote hosts do not match.

-X enables X11 forwarding. -x can be used to disable X11 forwarding.

-v is verbose. This lets you watch what is going on.

Once connected, run your application.
I ran the virt-manager GUI application, which worked successfully :)

Virtualization: Xen Installation

How does Xen work?
The Xen hypervisor, which is the virtual machine monitor, runs directly on top of the hardware. Guest operating systems run on top of this hypervisor, so all guests are secondary to the hardware and contact it only through the hypervisor. The first thing GRUB does is load the hypervisor: look at /boot/grub/grub.conf, which loads xen.gz-2.6.18-128.4.1.el5, the hypervisor.

The hypervisor loads the Dom-0 kernel and initrd image and starts the main system. Dom-0 is itself a guest operating system, with additional privileges to manage other guest operating systems, and is started at system startup.

Following rpms should be installed for Xen virtualization,
- kernel-xen ---> Dom-0 and Dom-U kernels.
- xen ----------> Xen hypervisor and other management tools.
- libvirt ------> Libraries required to manage domains which is used as a backend for virtmanager. http://libvirt.org
- virt-manager ----> GUI interface to manage guests

Once the above RPMs are installed, the system should be rebooted into the new Xen kernel.

Tuesday, August 11, 2009

Hidden ports on Linux.

A nice blog post about the hidden ports on linux.
http://www.ossec.net/dcid/?p=87

Thursday, July 30, 2009

**FATAL_ERROR** No password for admin user - please re-run ntop in non-daemon mode first

When you install ntop, the daemon does not start automatically and returns the above error when you try to start it.

You need to set a password to start ntop in daemon mode:
"ntop -A" or "ntop --set-passwd"

It asks for a password; set one.
Then start the service:
/etc/init.d/ntop start

You are now ready to view your network statistics :).

checking for intltool >= 0.35.0... ./configure: line 16914: intltool-update: command not found

Error:
checking for intltool >= 0.35.0... ./configure: line 16914: intltool-update: command not found
configure: error: Your intltool is too old. You need intltool 0.35.0 or later.

Solution:
1. Install the package intltool-0.35.0-2.i386.
2. It will fail with the following dependency error if the XML parser is not installed:
error: Failed dependencies:
perl-XML-Parser is needed by intltool-0.35.0-2.i386
3. Download and install the perl-XML-Parser package as well.

Error! You need to have libevent 1.4.X or better.

You need to install the libevent-devel package. Find the respective package for your OS and install it. Also check whether libevent itself is already installed.

Wednesday, July 29, 2009

How to monitor bandwidth using MRTG and Nagios

This assumes you have enough knowledge of Unix systems and Nagios monitoring. If you don't know how to configure Nagios, please follow this link

Prerequisite:
- Make sure Nagios is running properly. Check http://localhost/nagios
- Also make sure that you have installed nagios_plugins.
- Iptables and Selinux must be disabled.

Install check_snmp plugin:
- Make sure you have all of the following packages installed for a successful installation of the snmp plugin:
net-snmp, net-snmp-devel, net-snmp-libs, net-snmp-utils, beecrypt-devel,
elfutils-devel, elfutils-devel-static, lm_sensors

- To install check_snmp, I downloaded the plugin from this location. check_snmp

1. Unzip the downloaded file:
bunzip2 check_snmp-1.1.tar.bz2

2. Untar it.
tar xvf check_snmp-1.1.tar

3. Configure and install it.
./configure
make
make install

4. Once it is installed successfully, you can find check_snmp in /usr/local/bin.

5. Create a soft link for check_snmp.
ln -s /usr/local/bin/check_snmp /usr/local/nagios/libexec/check_snmp


Install MRTG:
To monitor the bandwidth usage of a router/switch you must have MRTG installed on the system. Before installation, please make sure you have installed the gd, libpng, and zlib packages.

1. Download MRTG.

2. Once all libraries required for MRTG are installed, you are all set to compile and configure MRTG.
gunzip -c mrtg-2.16.2.tar.gz | tar xvf -
cd mrtg-2.16.2

3. ./configure --prefix=/usr/local/mrtg-2
make
make install

4. Create a /var/www/html/mrtg directory to store the MRTG HTML files.
mkdir /var/www/html/mrtg

5. You do not need to write the MRTG configuration file by hand; use cfgmaker:
cfgmaker --global 'WorkDir: /var/www/html/mrtg' --global 'Options[_]: bits,growright' --output /etc/httpd/conf/mrtg.cfg public@172.17.42.22
(I chose the default Apache web location for the MRTG HTML files and the Apache conf directory to store mrtg.cfg.)

6. Go to the respective locations and make sure that the above command has created the respective files:
cd /var/www/html/mrtg
ls -al /etc/httpd/conf/mrtg.cfg

7. Run this command to update the MRTG log file:
env LANG=C /usr/local/mrtg-2/bin/mrtg /etc/httpd/conf/mrtg.cfg

8. Create a script to update the MRTG log file, so that data is fetched regularly and displayed in the graphs as well as in Nagios:
vi /usr/local/mrtg-2/bin/monitor_mrtg.sh

9. Put the following lines in it and save:
#!/bin/sh
env LANG=C /usr/local/mrtg-2/bin/mrtg /etc/httpd/conf/mrtg.cfg
(When you run it the first time it returns a few errors/warnings. Ignore them.)

chmod 755 monitor_mrtg.sh (make the script executable)

10. Set a cron job to run the above script every 5 minutes.
crontab -e

11. Modify and save,
*/5 * * * * /usr/local/mrtg-2/bin/monitor_mrtg.sh

12. Restart cron service.
/etc/init.d/crond restart

13. Confirm that it has been configured:
http://localhost/mrtg/{name of html file}

Actually, when you run the mrtg command it queries the router's SNMP community and collects all the data from the router, then creates the log and HTML files accordingly. In my case it found port 2 active on the router and hence created the files 172.17.42.22_2.log and 172.17.42.22_2.html, so I can access my graph through this link,
http://localhost/mrtg/172.17.42.22_2.html

Procedure to monitor Bandwidth Usages in Nagios:
1. Default installation directory of nagios is /usr/local/nagios/.

2. Open switch.cfg file
vi /usr/local/nagios/etc/objects/switch.cfg

3. Make changes according to your router specifications, e.g.:
define host{
        use             generic-switch
        host_name       Router_1
        alias           Router 1
        address         172.17.42.22
        hostgroups      switches
        }
4. You can also set PING, Uptime, Ports Link Status etc.
define service{
        use                     generic-service  ; Inherit values from a template
        host_name               Router_1         ; The name of the host the service is associated with
        service_description     PING             ; The service description
        check_command           check_ping!200.0,20%!600.0,60%  ; The command used to monitor the service
        normal_check_interval   5  ; Check the service every 5 minutes under normal conditions
        retry_check_interval    1  ; Re-check the service every minute until its final/hard state is determined
        }

define service{
        use                     generic-service  ; Inherit values from a template
        host_name               Router_1
        service_description     Uptime
        check_command           check_snmp!-C public -o sysUpTime.0 -H 172.17.42.22
        }

define service{
        use                     generic-service  ; Inherit values from a template
        host_name               Router_1
        service_description     Port 2 Link Status
        check_command           check_snmp!-C public -o ifOperStatus.2 -r 1 -H 172.17.42.22
        }

define service{
        use                     generic-service  ; Inherit values from a template
        host_name               Router_1
        service_description     Port 2 Bandwidth Usage
        check_command           check_local_mrtgtraf!/var/www/html/mrtg/172.17.42.22_2.log!AVG!1000000,1000000!5000000,5000000!10
        }

5. Verify the configuration of nagios
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

6. Restart nagios service.
/etc/init.d/nagios restart

Tuesday, July 21, 2009

Solaris: Disk Management

- A disk under the Solaris OE can be divided into eight slices, labeled Slice 0 through Slice 7.
- By convention, Slice 2 represents the entire disk. In a typical layout, "/" is on the first slice (0) and swap is on the second (1).

Disk slice naming convention: controller number, target number, disk number, slice number.
- Controller number: controls communications between the system and the disk unit.
- Target number: a unique hardware address of a disk, tape, CD-ROM, etc.
- Disk number: a logical unit number.
- Slice number: the partition number on the disk.
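Putting the four parts together, a device name like c0t0d0s2 can be decomposed mechanically (a small shell sketch; the device name here is just an example string):

```shell
# Decompose a Solaris disk device name of the form cXtYdZsN.
dev=c0t0d0s2                      # example: controller 0, target 0, disk 0, slice 2

ctrl=${dev%%t*};  ctrl=${ctrl#c}  # text before "t", minus the leading "c"
rest=${dev#*t};   target=${rest%%d*}
rest=${rest#*d};  disk=${rest%%s*}
slice=${dev##*s}                  # slice 2 is the whole disk by convention

echo "controller=$ctrl target=$target disk=$disk slice=$slice"
# controller=0 target=0 disk=0 slice=2
```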

Solaris: Key points in File System

Inode
In general an inode has two parts. First, the inode contains information about the file, including owner, permissions, size, etc. Second, the inode contains pointers to the data blocks associated with the file.

Different file types
Regular files
Directories
Device files, comprising block devices and character devices.
Symbolic links
Socket files

- The data block associated with a directory contains a list of all files and their associated inodes.
- Symbolic links can point to regular files, directories, other symbolic links, and device files.
- The data block of a symbolic link contains the path of the original file.
- A long listing of a device file shows two numbers separated by a comma: the major number denotes the specific device driver required to access the device, and the minor number denotes the specific unit that the device driver controls.
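On a Linux box this is easy to see with a device that exists everywhere (the major/minor values shown are the standard Linux ones for /dev/null):

```shell
# A long listing of a device file prints "major, minor" where the file
# size would normally appear; the leading "c" marks a character device.
ls -l /dev/null
# crw-rw-rw- 1 root root 1, 3 ... /dev/null    (major 1, minor 3)
```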

Wednesday, July 15, 2009

Red Hat : Global File System

- GFS is a native file system.
- It interfaces directly with the Linux kernel file-system interface (the VFS layer).
- Can be implemented on a standalone system.
- Can be part of a cluster configuration.
- GFS can be created on an LVM logical volume.
- GFS is based on a 64-bit architecture and can accommodate an 8 EB file system.
- Red Hat GFS nodes can be configured and managed with the Red Hat Cluster Suite configuration and management tools.
- LVM logical volumes in a Red Hat Cluster Suite are managed by CLVM, a cluster-wide implementation of LVM (the clvmd daemon).
- The clvmd daemon allows all nodes in the cluster to share the volume.

Tuesday, July 7, 2009

OTRS: How to set the Generic Agent?

Taken from, lists.otrs.org.

So you need to create as many jobs as you have agents.
The idea behind GenericAgent is very simple: search for tickets using the
values in sections shaded grey and set new values in tickets in sections
shaded black. You should have all your ticket types, queues, agents,
customers defined before you start making a GenericAgent job.

For example, we are building a job to assign Hardware problems to Mark
Twain.

In Admin, go to Generic Agent. Type in the job name Hardware_to_Twain and
click Add. A new job form appears. Study it a little. There are sections
with title shaded gray. The values you put into these sections' fields are
used to select the tickets for processing by the job. You select Agent/Owner
= root at localhost. You select ticket state = new. You select Ticket lock =
unlock, you select ticket type = Hardware problem. How do you plan to deal
with email tickets btw? They will not get the ticket type set correctly.

Now we move to sections with titles shaded black. That's where we program
new values for ticket fields:
You select new Agent = Mark Twain. You select the new Queue = Hardware, New
ticket lock = lock.
You may add a note to leave trace of what has been done for your customer.
(It's reflected in the history for the agents anyway).
Check the Send no notifications flag.

Go back to the beginning and think of the schedule for the job. How often
should it run?

As soon as you save the job, you're done. If there are problems with the
job, look in the System Log.

OTRS: How does the survey module work?

There are no steps documented anywhere for how the survey module works,
so here they are:
1. Download and install Survey module
2. Login as OTRS Admin i.e. root@localhost by default.
3. Navigate to SysConfig setting under "Misc" section.
4. Search for Survey, then go to Core -> Survey::SendPeriod.
5. Change this value to 0 to always send the mail whenever an agent closes a ticket.
6. The survey you create should also have the status "Master". This is a setting chosen when you create the survey.

You can find more details here; this post is a translation from German:
http://translate.google.com/translate?hl=en&sl=de&u=http://wiki.otrs-forum.de/index.php%3Ftitle%3DSurvey&ei=3VWxSdOZCozMmQeRxJjfBQ&sa=X&oi=translate&resnum=8&ct=result&prev=/search%3Fq%3DSurvey::SendPeriod%26hl%3Den%26client%3Dsafari%26rls%3Den-us%26sa%3DG

Sunday, July 5, 2009

Ganglia Installation errors and solutions.

Error:
Checking for apr
checking for apr-1-config... no
configure: error: apr-1-config binary not found in path
make: *** No targets specified and no makefile found. Stop.

Resolution:
Install the respective version of apr-devel. For me it was apr-devel-1.2.7-11.i386.rpm.

Error:
Checking for confuse
checking for cfg_parse in -lconfuse... no
Trying harder including gettext
checking for cfg_parse in -lconfuse... no
Trying harder including iconv
checking for cfg_parse in -lconfuse... no
libconfuse not found

Resolution:
Install "libconfuse" and "libconfuse-devel" packages.

Error:
mod_python.c error: Python.h: No such file or directory
Resolution:
Install "python-devel" package if not installed.

- nilesh

Wednesday, July 1, 2009

Nagios: How to suppress notifications when a host goes down

There are two ways to avoid extra service notifications when a host goes down.

1. Configure ping (use check-host-alive) as the host check; when the host is unreachable, notifications for its services will not be sent.

2. Or use a service dependency configuration like this:
define servicedependency {
        hostgroup_name                  all_servers
        service_description             ping check
        dependent_hostgroup_name        all_servers
        dependent_service_description   *
        execution_failure_criteria      w,c
        notification_failure_criteria   w,u,c
        }

Tuesday, June 30, 2009

Nagios: How to monitor a website

check_http supports arguments like hostname, host address, URI, etc.
You can have:
check_http -H $ARG1$ -I $ARG2$ -u $ARG3$
where,
$ARG1$ - hostname
$ARG2$ - host address
$ARG3$ - URI - www.yourwebsite.com

Also check "check_http --help" for more details.
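Wired into a Nagios command object, those arguments would map roughly like this (a sketch only; the command_name and file layout are up to your own commands.cfg):

```
define command{
        command_name    check_website
        command_line    $USER1$/check_http -H $ARG1$ -I $ARG2$ -u $ARG3$
        }
```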

Nagios: Adding Cisco enterprise MIBs

(Taken from nagios user list)

- The default directory for SNMP MIBs is /usr/share/snmp/mibs/ if you have the libsnmp-base package installed.

- To verify that the MIBs are working, you can use a MIB browser such as mbrowse (apt-get install mbrowse).

- Not all OIDs in a MIB are supported by all IOS versions. Cisco usually has a link for each IOS version listing which MIBs are supported.

- Once the MIB is verified and working, use the check_snmp check_command
in the .cfg files of the routers and switches you want to monitor.

Nagios: Default parameter with check_total_procs

We define the service like this:

define service{
        service_description     Total_Processes
        check_command           check_nrpe!check_total_procs
        }

By default this check is an execution of check_procs with only the warning and critical thresholds, so it counts all processes running on your machine.

The thresholds are configured in the nrpe.cfg on the remote machine.

- Nilesh

Install and Configure Nagios Plugins & NRPE on Solaris 10

I came across lots of issues doing this installation, so here is a step-by-step installation of the Nagios Plugins and NRPE on Solaris 10.

Add a "nagios" user with "/usr/local/nagios" as the home directory.

# useradd -c "Nagios User" -d /usr/local/nagios -m nagios

Change ownership of directory to nagios

# chown nagios:nagios /usr/local/nagios/

Download nagios-plugins and NRPE from the net. I downloaded them from SourceForge.

# mkdir /nagios; cd /nagios

wget "http://sourceforge.net/project/downloading.php?group_id=29880&filename=nagios-plugins-1.4.13.tar.gz"

wget "http://sourceforge.net/project/downloading.php?group_id=26589&filename=nrpe-2.12.tar.gz"

Now extract them

# gunzip nagios-plugins-1.4.13.tar.gz; gunzip nrpe-2.12.tar.gz

# tar xvf nagios-plugins-1.4.13.tar.gz; tar xvf nrpe-2.12.tar.gz

Before compiling I had to set PATH to find gcc binary,

# export PATH=$PATH:/usr/sfw/sbin:/usr/sfw/bin:/usr/ccs/bin

# cd nagios-plugins-1.4.13;

# ./configure --without-mysql (I did not want to install with MySQL support)
# make; make install
# chown -R nagios:nagios /usr/local/nagios/libexec

Install NRPE with SSL library support, otherwise you will get a compilation error like this:

"checking for SSL headers... configure: error: Cannot find ssl headers"

If you run "dmesg" or check the system messages, you may see this error:

May 28 19:08:26 solaris10.remotehost.com inetd[24233]: [ID 702911 daemon.error] Failed to set credentials for the inetd_start method of instance svc:/network/nrpe/tcp:default (chdir: No such file or directory)
May 28 19:15:27 solaris10.remotehost.com inetd[24241]: [ID 702911 daemon.error] Failed to set credentials for the inetd_start method of instance svc:/network/nrpe/tcp:default (chdir: No such file or directory)

# cd nrpe-2.12; ./configure --with-ssl=/usr/sfw/ --with-ssl-lib=/usr/sfw/lib/ --with-ssl-inc=/usr/sfw/include

If your compilation still fails, please apply the fixes/solutions given in the Nagios FAQs:

http://www.nagios.org/faqs/index.php?section_id=4&expand=false&showdesc=true

In my case I had to make changes in src/nrpe.c for encryption. Then run make all and make install to create the respective binaries.

# make all; make install; make install-daemon-config;

Once that is done, modify nrpe.cfg with appropriate settings. Add the following line at the end of /etc/services:

nrpe 5666/tcp # NRPE

Also add this line to /etc/inetd.conf, convert it into an SMF service, and enable the service with the -e option. Then check whether it went online:

nrpe stream tcp nowait nagios /usr/sfw/sbin/tcpd /usr/local/nagios/bin/nrpe -c /usr/local/nagios/etc/nrpe.cfg -i

# inetconv; inetconv -e

# svcs | grep nrpe

Check if 5666 port is open and in LISTEN mode.

# netstat -a | grep nrpe

Make sure that your /etc/hosts.allow and /etc/hosts.deny do not block your Nagios server. Here are the entries:

hosts.allow: nrpe: 127.0.0.1, 172.17.38.11

hosts.deny: nrpe: ALL

As a final check, make sure that nrpe returns the correct output:

# /usr/local/nagios/libexec/check_nrpe -H localhost
NRPE v2.12

Monday, June 29, 2009

Open Audit

There is very little documentation available for this open-source tool. All of these points were written while going through the open-audit documentation.

What is Open Audit?
- It audits the hardware and software it discovers on your computers.
- Uses a MySQL database to store all discovered data.
- PHP is used to display the information stored in the MySQL database.
- Apache makes it available through a web interface.

What is audit.vbs?
- audit.vbs reads data from Microsoft's Windows Management Instrumentation (WMI) and posts its findings to the server.
- OA collects the data using the audit.vbs script, which writes directly to the web server through the POST method.

Schedule an audit?
- Use the "at" command to schedule an audit.
- To audit the domain every day at a specific time (you can also use Windows Scheduled Tasks):
at 18:00 /interactive /every:M,T,W,Th,F,S,Su "C:\Program Files\xampp\htdocs\scripts\audit_mydomain.bat"
- audit_mydomain.bat contains something like....
@echo off
rem audit local domain pcs
cscript audit.vbs
cscript nmap.vbs
:end

Friday, June 26, 2009

OTRS Issues - Phase I

Putting it from list.otrs.org for my own information.
-----------------------------------------------------
Problem: How to clean up the database?
Solution:
- You can use the Generic Agent to remove all or some tickets based on criteria.
- With a bulk change you could set all or some tickets to state 'removed', for instance, and then run the GA on it.
- Deleting customers/agents is harder, as for some other data types. You could do it in the database (phpmyadmin), or you can do like I do: recycle. Make them invalid, rename them to INVALID... and reuse (rename) when needed.
--------------------------------------------------------------------------------------

Problem : How to setup LDAP Authentication?
Solution:
#Enable LDAP authentication for Customers / Users
$Self->{'Customer::AuthModule'} = 'Kernel::System::CustomerAuth::LDAP';
$Self->{'Customer::AuthModule::LDAP::Host'} = 'xx.xxx.xx.xx';
$Self->{'Customer::AuthModule::LDAP::BaseDN'} =
'ou=user,ou=dublin,dc=int,dc=domain,dc=com';
$Self->{'Customer::AuthModule::LDAP::UID'} = 'sAMAccountName';

#The following is valid but would only be necessary if the
#anonymous user do NOT have permission to read from the LDAP tree
# $Self->{'Customer::AuthModule::LDAP::SearchUserDN'} = 'otrsldap';
# $Self->{'Customer::AuthModule::LDAP::SearchUserPw'} = 'password';
$Self->{'Customer::AuthModule::LDAP::SearchUserDN'} = 'MyDomain\otrsldap';
$Self->{'Customer::AuthModule::LDAP::SearchUserPw'} = 'password';

#CustomerUser
#(customer user database backend and settings)
$Self->{CustomerUser} = {
Module => 'Kernel::System::CustomerUser::LDAP',
Params => {
Host => 'xx.xxx.xx.xx',
BaseDN => 'ou=user,ou=dublin,dc=int,dc=domain,dc=com',
SSCOPE => 'sub',
UserDN =>'otrsldap',
UserPw => 'password',
},

# customer unique id
CustomerKey => 'sAMAccountName',
# customer #
CustomerID => 'mail',
CustomerUserListFields => ['sAMAccountName', 'cn', 'mail'],
CustomerUserSearchFields => ['sAMAccountName', 'cn', 'mail'],
CustomerUserSearchPrefix => '',
CustomerUserSearchSuffix => '*',
CustomerUserSearchListLimit => 250,
CustomerUserPostMasterSearchFields => ['mail'],
CustomerUserNameFields => ['givenname', 'sn'],
Map => [
# note: Login, Email and CustomerID needed!
# var, frontend, storage, shown, required, storage-type
#[ 'UserSalutation', 'Title', 'title', 1, 0, 'var' ],
[ 'UserFirstname', 'Firstname', 'givenname', 1, 1, 'var' ],
[ 'UserLastname', 'Lastname', 'sn', 1, 1, 'var' ],
[ 'UserLogin', 'Login', 'sAMAccountName', 1, 1, 'var' ],
[ 'UserEmail', 'Email', 'mail', 1, 1, 'var' ],
[ 'UserCustomerID', 'CustomerID', 'mail', 0, 1, 'var' ],
#[ 'UserPhone', 'Phone', 'telephonenumber', 1, 0, 'var' ],
#[ 'UserAddress', 'Address', 'postaladdress', 1, 0, 'var' ],
#[ 'UserComment', 'Comment', 'description', 1, 0, 'var' ],
],
};

#Add the following lines when only users are allowed to login if they
#reside in the specified security group.
#Remove these lines if you want to provide login to all users specified in
#the User Base DN.
#example: $Self->{'Customer::AuthModule::LDAP::BaseDN'} = 'ou=BaseOU, dc=example, dc=com';
$Self->{'Customer::AuthModule::LDAP::GroupDN'} =
'CN=OTRS_Users,OU=Security Groups,OU=Dublin,DC=int,DC=domain,DC=com';
$Self->{'Customer::AuthModule::LDAP::AccessAttr'} = 'member';
$Self->{'Customer::AuthModule::LDAP::UserAttr'} = 'DN';
--------------------------------------------------------------------------------------

Few important points about OTRS:
--------------------------------
- The queue view only displays unlocked tickets by default.
- There is a line that says "Tickets shown..... All Tickets xx". The xx is a link which displays all tickets, locked and unlocked, that are in your "My Queues".

Monday, March 16, 2009

Innodb Error 'Can't create table 'mysql.ibbackup_binlog_marker'

Problem:
090316 11:57:32 [ERROR] Slave SQL: Error 'Can't create table 'mysql.ibbackup_binlog_marker' (errno: -1)' on query. Default database: 'mysql'. Query: 'CREATE TABLE ibbackup_binlog_marker(a INT) TYPE=INNODB', Error_code: 1005
090316 11:57:32 [Warning] Slave: The syntax 'TYPE=storage_engine' is deprecated and will be removed in MySQL 5.2. Please use 'ENGINE=storage_engine' instead Error_code: 1287
090316 11:57:32 [Warning] Slave: Can't create table 'mysql.ibbackup_binlog_marker' (errno: -1)
Error_code: 1005
090316 11:57:32 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'testing-bin.000009' position 321

Answer:

mysql> CREATE TABLE ibbackup_binlog_marker2(a INT) TYPE=INNODB;
Query OK, 0 rows affected, 1 warning (0.02 sec)

bash# cd /var/mysql/data/
bash# mv mysql/ibbackup_binlog_marker{2,}.frm
bash# mysqladmin flush-tables

mysql> SET SQL_LOG_BIN = 0;
Query OK, 0 rows affected (0.00 sec)

mysql> DROP TABLE IF EXISTS ibbackup_binlog_marker;
Query OK, 0 rows affected (0.12 sec)

Wednesday, February 18, 2009

All about : NTP

I had this plan long back that I would keep my own configurations and documents ready for others to use. My memory is very short, so I forget things very often. One day I came across Blogger and found it the best way to keep everything in one place without needing to carry a single paper anywhere.

Planning to start "All about" sessions on various servers, topics, etc.: anything which is interesting but whose information is scattered everywhere. Hope I will be able to do that.

NTP (Network Time Protocol)
---------------------------
What is it?
NTP is used to synchronize the clocks of computers to some time reference.

NTP uses UDP/IP packets for data transfer because of the fast connection setup and response times.

The port number for NTP (which ntpd and ntpdate listen and talk on) is 123.

How many NTP servers are available on the Internet?
According to ntp.org there are 175,000 NTP servers available on the Internet. Among these there were over 300 valid stratum-1 servers, over 20,000 servers at stratum 2, and over 80,000 servers at stratum 3.

ntp commands
ntpd - A daemon process that is both client and server.
ntpdate - A utility to set the time once, similar to the popular rdate command.
ntpq, ntpdc - Monitoring and control programs that communicate via UDP with ntpd.
ntptrace - A utility to back-trace the current system time, starting from the local server.

time references
- A reference clock is some device or machinery that spits out the current time.
- A reference clock will provide the current time, that's for sure.
- NTP will compute some additional statistical values like offset (or phase), jitter (or dispersion), frequency error, and stability.

There are several ways an NTP client can learn about the NTP servers to use:
- Servers to be polled can be configured manually
- Servers can send the time directly to a peer
- Servers may send out the time using multicast or broadcast addresses

- ntpd's reaction will depend on the offset between the local clock and the reference time.
- For a tiny offset ntpd will adjust the local clock as usual; for small and larger offsets, ntpd will reject the reference time for a while.
- In the latter case the operating system's clock will continue with the last corrections in effect while the new reference time is being rejected.
- After some time, small offsets (significantly less than a second) will be slewed (adjusted slowly), while larger offsets will cause the clock to be stepped (set anew).
- Huge offsets are rejected, and ntpd will terminate itself, believing something very strange must have happened.

stratum 1
A server operating at stratum 1 belongs to the class of best NTP servers available, because it has a reference clock attached to it.

Servers synchronized to a stratum 1 server will be stratum 2.