Thursday, December 31, 2009

VERITAS NetBackup Related Documents:

264256: VERITAS NetBackup (tm) 5.0 Installation Guide for UNIX
 http://support.veritas.com/docs/264256


268087: VERITAS NetBackup (tm) 5.1 Installation Guide for UNIX
 http://support.veritas.com/docs/268087


278130: VERITAS NetBackup (tm) 6.0 Release Notes Known Issues Documentation
 http://support.veritas.com/docs/278130


278132: VERITAS NetBackup (tm) 6.0 Release Impact Bulletin
 http://support.veritas.com/docs/278132


279261: Veritas NetBackup (tm) 6.0 Installation Guide for UNIX
 http://support.veritas.com/docs/279261


284146: Top 10 Recommended Veritas NetBackup (tm) TechNotes
 http://support.veritas.com/docs/284146


290199: Veritas NetBackup (tm) 6.5 Installation Guide for UNIX and Linux
 http://support.veritas.com/docs/290199

NetBackup Support for Solaris 10

The following describes NetBackup Support for Solaris 10.

1. Sun Solaris 10 support* begins with NetBackup 5.0 Maintenance Pack 4 and NetBackup 5.1 Maintenance Pack 2 as follows:

Supported OS Version   Hardware   NBU Support        Patch Level
Solaris 10             SPARC      NetBackup Client   5.0 MP4 / 5.1 MP2
Solaris 10             x86        NetBackup Client   5.1 MP2
Solaris 10             SPARC      NetBackup Server   5.0 MP5 / 5.1 MP2
Solaris 10             Opteron    NetBackup Client   5.1 MP3

* Base OS support, full support limited by #6 in this document.

Note: NetBackup 6.x requires no special Maintenance Packs for Solaris 10 Support.

After loading NetBackup version 5.0 or 5.1 from CD-ROM media, you must update your system to either 5.0 MP4 (for 5.0 support of NetBackup client), 5.0 MP5 (for 5.0 support of NetBackup server), or 5.1 MP2 (when running 5.1). Sun Solaris 10 is not supported on the base CD-ROM version of either NetBackup 5.0 or 5.1. There are known connection and Java GUI issues that will be encountered if you attempt to run Sun Solaris 10 on the CD-ROM version of NetBackup. Therefore, the corresponding patch update must be applied. This requirement is due to a new inetd design method introduced in Solaris 10.
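
Because Solaris 10 replaced the classic inetd.conf mechanism with SMF-managed inetd services, one quick post-patch sanity check is to confirm that the NetBackup client daemons are registered with the new framework. A sketch, assuming the usual bpcd/vnetd daemon names (verify the names against your own install):

 inetadm | egrep -i 'bpcd|vnetd'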

2. Use of NetBackup Advanced Client (ADC) methods is supported on Sun Solaris 10 beginning with Veritas Storage Foundation Suite version 4.1, released in 2005. Check the Veritas Web site for availability.

3. Veritas Storage Migrator option is not supported on Sun Solaris 10 until the NetBackup 6.0 release.

4. The following script must be run in order for "Solaris Solaris10" or "Solaris Solaris_x86_10" to show as a client selection in the drop-down list for backup policies:

/usr/openv/netbackup/bin/goodies/new_clients

After this script runs, the Solaris 10 choices are available in the drop-down menu when selecting clients.

5. Solaris 10 Zone Support: The above support is for base Solaris 10 OS support. NetBackup Server and Enterprise Client Components (master server, media server, SAN media server and SAN client) are supported only in a global zone. Only the NetBackup Standard Client is supported in non-global zones.
Veritas Infrastructure Core Services (ICS) are only supported in the global zone. This includes packages such as the Veritas Authentication service (VxAT) and the Veritas Authorization Service (VxAZ). Attempts to install these packages in a local zone will fail during the installation.
Systems configured with non-global zones can be backed up with the NetBackup Solaris 10 standard client, provided the standard client is loaded on each non-global zone in which a backup is desired. On a standard non-global zone, a workaround must be performed to successfully load the NetBackup standard client. Prior to installation, the /usr/openv directory must be created and made writeable for a successful NetBackup Client installation. To do this, you must use the "zlogin" process to become "Zone root" on the non-global zone. Then, you will be able to create a link from /usr/openv to a writeable location on the non-global zone.
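
A minimal sketch of that workaround ( the zone name "myzone" and the /opt/openv target are placeholders; adjust to your zone layout ):

 zlogin myzone
 mkdir -p /opt/openv
 ln -s /opt/openv /usr/openv

With the link in place, the NetBackup standard client installation can proceed inside the non-global zone.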

VERITAS NetBackup

Before:
•VERITAS NetBackup 5.x Troubleshooting Techniques for UNIX (VT-261)
•VERITAS NetBackup 5.x Vault (VT-263)

NetBackup 6.0 Architecture
•Master Server, EMM Server, Media Server, and Client architecture
•Intelligent Resource Manager (IRM)
•Enterprise Media Manager (EMM)
•Master Server components
•EMM Server components
•Media Server components
•Client components
•Backup Process Flow
•Restore Process Flow
Module 2 - NetBackup 6.0 Installation
•Installation requirements
•Pre-installation activities
•NetBackup 6.0 Installation: Initial installation
•NetBackup 6.0 Installation: Upgrade installation
•Post-installation activities
Module 3 - NetBackup 6.0 Installation Lab
•Installing NetBackup 6.0 Master Server, Media Server, EMM Server, and Client
•NetBackup 6.0 licensing
•NetBackup 6.0 directory structures
•NetBackup 6.0 daemons/services/processes
Module 4 - NetBackup 6.0 Configuration and Operations
•NetBackup 6.0 GUIs
•NetBackup 6.0 Configuration: Devices, Storage Units, Media, Backup Policies
•Backups and Restores
•NetBackup 6.0 Catalog backups and restores
Module 5 - NetBackup 6.0 Configuration and Operations Lab
•NetBackup GUIs
•Configuring NetBackup 6.0 devices, storage units, media, backup policies, and catalog backups.
•Performing basic backup and restore operations
•Performing catalog backup and restore operations
•Monitoring NetBackup jobs
Module 6 - NetBackup 6.0 Internals and Support
•Universal logging (VxUL) concepts
•Universal logging commands and operations
•Legacy debug logging
•NetBackup 6.0 patches and packs
Module 7 - NetBackup 6.0 Support Activities Lab
•Enabling NetBackup 6.0 debug logging
•Locating/analyzing NetBackup 6.0 debug logs
•NetBackup 6.0 error/status codes
•NetBackup 6.0 Online Troubleshooting Guide
•NetBackup 6.0 command line usage
•NetBackup 6.0 patches and packs
Module 8 - NetBackup 6.0 New Installation Lab
•Installing a Master Server (EMM Server/Media Server)
•Installing a Media Server system
•Configuring NetBackup
•Verifying NetBackup Operations

CLUSTER


A computer cluster is a group of linked computers, working together closely so that in many respects they form a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks. Clusters are usually deployed to improve performance and/or availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.

High-availability (HA) clusters
High-availability clusters (also known as Failover Clusters) are implemented primarily for the purpose of improving the availability of services which the cluster provides. They operate by having redundant nodes, which are then used to provide service when system components fail. The most common size for an HA cluster is two nodes, which is the minimum requirement to provide redundancy. HA cluster implementations attempt to use redundancy of cluster components to eliminate single points of failure.

There are many commercial implementations of High-Availability clusters for many operating systems. The Linux-HA project is one commonly used free software HA package for the Linux operating system.
Load-balancing clusters
Load-balancing clusters link multiple computers together to share computational workload, functioning as a single virtual computer. Logically, from the user's side, they are multiple machines, but they function as a single virtual machine. Requests initiated by users are managed by, and distributed among, all the standalone computers that form the cluster. The result is balanced computational work across the machines and better performance of the cluster system.

Compute clusters
Often clusters are used primarily for computational purposes, rather than handling IO-oriented operations such as web service or databases. For instance, a cluster might support computational simulations of weather or vehicle crashes. The primary distinction within compute clusters is how tightly-coupled the individual nodes are. For instance, a single compute job may require frequent communication among nodes - this implies that the cluster shares a dedicated network, is densely located, and probably has homogeneous nodes. This cluster design is usually referred to as a Beowulf cluster. The other extreme is where a compute job uses one or a few nodes, and needs little or no inter-node communication. This latter category is sometimes called "Grid" computing. Tightly-coupled compute clusters are designed for work that might traditionally have been called "supercomputing". Middleware such as MPI (Message Passing Interface) or PVM (Parallel Virtual Machine) permits compute clustering programs to be portable to a wide variety of clusters.



VERITAS NETBACKUP SKILLS

•Identify the architecture and components of NetBackup 6.0 that reside on the Master Server, EMM Server, Media Server, and Client.
•Identify the process flow that occurs during basic backup and restore operations.
•Install NetBackup 6.0 Server and Client software.
•Configure NetBackup 6.0 devices, storage units, media, backup policies, and catalog backups.
•Perform basic and catalog backup and restore operations.
•Perform basic, common NetBackup administrative tasks.
•Enable and examine both traditional and VxUL debug logging.
•Perform NetBackup operations using new command line interface commands and command arguments.
•Locate and identify error messages presented in/by NetBackup 6.0.
•Perform basic troubleshooting of NetBackup 6.0 backup and restore problems.

NEVER automount your backup directories

the infamous "rm -rf /" will erase your automounted backups too


Always mount the backup directory before performing backups
Always unmount the backup directory when done performing backups


Prune unnecessary files out of your backups
tar --exclude /tmp --exclude /proc ...
egrep ".netscape/cache| *.o | *.swp | *~ | ..."


Be sure that the backup files are NOT group/world writable/readable
they contain confidential data, passwds, etc


chown root /Backup_Dir/Date.x.tgz
chgrp backup /Backup_Dir/Date.x.tgz
chmod 440 /Backup_Dir/Date.x.tgz


Encrypt your backup files if you are really paranoid about your passwd files
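If you do, GnuPG's symmetric mode is a simple option ( it prompts for a passphrase; the filename below matches the earlier examples ):

gpg --symmetric --cipher-algo AES256 /Backup_Dir/Date.x.tgz
rm /Backup_Dir/Date.x.tgz

Decrypt later with: gpg --decrypt /Backup_Dir/Date.x.tgz.gpg > Date.x.tgz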




Example Backup Commands
Creating datestamps for file names
date '+%Y%m%d'
Outputs 20020615 for June 15, 2002


date '+%Y.%m.%d'
Outputs 2002.06.15 for June 15, 2002
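
Embedding the datestamp directly in a backup filename:
tar zcvf /Backup_Dir/`date '+%Y%m%d'`.Full.tgz $DIRS
Creates /Backup_Dir/20020615.Full.tgz for June 15, 2002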

www.thing.dyndns.org bash examples


dd -- copying partitions, mirrored partitions
mounting and unmounting are NOT needed in this case
dd if=/dev/hda1 of=/dev/hdb1 bs=1024


Simplified tar --newer Incremental Backup example
LastBackupTime=`cat /Backup_Dir/Last.txt`
tar zcvf /Backup_Dir/Date.x.tgz --newer "$LastBackupTime" $DIRS
date > /Backup_Dir/Last.txt


Simplified find | tar Incremental Backup example
Cnt=`cat /Backup_Dir/Last.txt`
find $DIRS -mtime -$Cnt -print | tar zcvf /Backup_Dir/Date.$Cnt.tgz -T -
expr $Cnt + 1 > /Backup_Dir/Last.txt


Simplified dump | restore Backup example
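( a sketch, assuming /home is its own ext2/ext3 filesystem and /mnt/backup is mounted )
dump -0uf - /home | ( cd /mnt/backup ; restore -rf - )
-0 is a full ( level 0 ) dump; -u records it in /etc/dumpdates so higher dump levels can do incrementals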


Simplified cpio Backup example
find /home -xdev -print | cpio -pdm /mnt/backup

Creating Backup Files

Copying Directories and Files

dd Copying / on /dev/hda1 to a /dev/hdb1 ( backup disk )
dd if=/dev/hda1 of=/dev/hdb1 bs=1024k

tar | tar Copying /home on /dev/hda5 to a /dev/hdb5 ( backup disk )
mount /dev/hdb5 /mnt/backup
( tar cf - /home ) | ( cd /mnt/backup ; tar xvpf - )
umount /mnt/backup
scp Copying /home on /dev/hda5 to a /dev/hdb5 ( backup disk )
mount /dev/hdb5 /mnt/backup
scp -pr /home /mnt/backup
umount /mnt/backup
scp -pr user1@host:/Source_to_Backup user2@BackupServer:/Backup_Dir

find | cpio Copying /home on /dev/hda5 to a /dev/hdb5 ( backup disk )
mount /dev/hdb5 /mnt/backup
find /home -print | cpio -pdm /mnt/backup
umount /mnt/backup

To View Contents
cpio -it < file.cpio

To Extract a file
cpio -id "usr/share/file/doc/*" < file.cpio

Creating Backup Files
tar Backup /home to ( backup disk )
mount /dev/hdb5 /mnt/backup
tar zcvf /mnt/backup/home.Date.tgz /home > /mnt/backup/home.Date.log
umount /mnt/backup
Encrypt home.Date.tgz if security of sensitive data is an issue
Example Full/Incremental Script

Typical list of directories you WANT to backup regularly...
DIRS is typically: /root /etc /home/

Using Month/Date for Backup Filenames
date `%Y.%M.%D"

Simple Full Backup
tar zcvf /Backup_Dir/Month_Date.Full.tgz $DIRS

Simple Incremental Backups
Change the Month_Date.tgz filename for each run
Backup the files to a different Server:/Disks [ reasons ]

Simple Daily Incremental Backup
find $DIRS -mtime -1 -type f -print | tar zcvf /BackupDir_1/Month_Date.tgz -T -

Simple Weekly Incremental Backup
find $DIRS -mtime -7 -type f -print | tar zcvf /BackupDir_7/Month_Date.7.tgz -T -
use -mtime -32 to cover for unnoticed failed incremental backups from last week

Simple Monthly Incremental Backup
find $DIRS -mtime -31 -type f -print | tar zcvf /BackupDir_30/Year_Month.30.tgz -T -
use -mtime -93 to cover for unnoticed failed incremental backups from last month
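
Pulling the pieces above into one sketch ( DIRS and the backup paths follow the examples above; adjust to taste ):

#!/bin/sh
DIRS="/root /etc /home"
STAMP=`date '+%Y.%m.%d'`
case "$1" in
  full)   tar zcvf /Backup_Dir/$STAMP.Full.tgz $DIRS ;;
  daily)  find $DIRS -mtime -1 -type f -print | tar zcvf /BackupDir_1/$STAMP.tgz -T - ;;
  weekly) find $DIRS -mtime -7 -type f -print | tar zcvf /BackupDir_7/$STAMP.7.tgz -T - ;;
esac

Run it from cron, e.g. "daily" every night and "full" on the 1st of the month.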

BACKUP THE LINUX SERVER

Which servers to backup
Which directories/files to backup
What is the Backup media
Backup Failure Modes
Testing Your Backups

OffSite Backups

Backup Servers
Which Servers to backup
Pull the Ethernet cable and see what happens...

Backups should be on a different server than the server/data you are trying to backup
protect against the server wiping itself out and its backup
Backup all PCs on one LAN/hub to a local backup server - minimize traffic
Backup all local backup servers to the other building's backup server, and vice versa
Backup Directories
Most of the "System" directories and files are already on the installation cdrom
Most of the "Updates" you applied are already scattered at the various mirrors on the internet
Which directories to backup is dictated by the size and topology of your network
Backup of a single server is significantly simpler than backup of different servers
www, email, firewall, ftp servers, home servers, file servers, etc


Which directories to backup is dictated by the partitions used to install
move system config files into /etc or /usr/local/etc so that they get backed up

If you can recreate your entire system on another disk...you've selected the "right directories to backup"
you should backup "user data"
/root /etc /home /usr/local
you should optionally backup log files and pending emails
/var/log /var/spool/mail
Backup Media
The number of servers and size of your data dictates your backup media
Backups onto floppy -- good for Full backups of /etc

Backups onto Zip -- good for backups of 200Mb

Backups onto CDR -- good for backups of 600Mb

Backups onto DVD -- good for backups of 4.7Gb

Backups onto Tapes -- good for backups of up to 40-80Gb, and more with tape libraries

Backups onto Disks -- good for 500Gb -- xxTeraByte Raid5 Backups


NEVER backup your data to the same partition, nor same disk
if you lose the disk...you do NOT have any backups


Backups are best done to a DIFFERENT server
protect your backups from hardware flakiness and random power surges etc



Tape and CDR backup media REQUIRE you to change them daily/weekly...
if you forget, you lose the previous backup... and you may also lose today's incremental backup data too

You have to clean the tape head regularly

You can lose one of the daily incremental tapes, or someone can walk out with your corp data
Restoring a file from tape can be a slow process ( hours and hours )


One 20Gb disk can hold about 1-2 months of full and incremental backups of another 20Gb disk, depending on data and backup methodology


Backup Failure Modes


Simulate a random disk crash -- just turn off the power
oNothing should break -- people can keep working
oNothing should break -- email and web still work

•Rotate your Backups
odaily incremental on Backup-Daily
oweekly 30 day incremental on Backup-Weekly ( different server )
oweekly Full backups ( different server than daily or weekly-30 )

•do NOT erase last week's backup
okeep multiple backups on different servers
okeep the 32 or 93 day incrementals if you decide to erase *.Full.tgz

•Test your Backups from Bare Metal Restores
oalways start from a virgin disk when testing backups
oalways just apply the patches ... NO manual changes

Apache webserver

Apache is one of the most popular Web servers on the Web right now, and part of its charm is that it's free. It also has a lot of features that make it very extensible and useful for many different types of Web sites. It is a server that is used for personal Web pages up to enterprise level sites.
This article will discuss how to install Apache on a Linux system. Before we start you should be at least comfortable working in Linux - changing directories, using tar and gunzip, and compiling with make (I'll discuss where to get binaries if you don't want to mess with compiling your own). You should also have access to the root account on the server machine.
Download Apache
I recommend downloading the latest stable release. At the time of this writing, that was Apache 2.0. The best place to get Apache is from the Apache HTTP Server download site. Download the sources appropriate to your system. Binary releases are available as well.
Extract the Files
Once you've downloaded the files, you need to uncompress and untar them:
  gunzip -d httpd-2_0_NN.tar.gz
  tar xvf httpd-2_0_NN.tar
This creates a new directory under the current directory with the source files.
Configuring
Once you've got the files, you need to tell your machine where to find everything by configuring the source files. The easiest way is to accept all the defaults and just type:
  ./configure
Of course, most people don't want to accept just the default choices. The most important option is the prefix= option. This specifies the directory where the Apache files will be installed. You can also set specific environment variables and modules. Some of the modules I like to have installed are:
•mod_alias - to map different parts of the URL tree
•mod_include - to parse Server Side Includes
•mod_mime - to associate file extensions with their MIME types
•mod_rewrite - to rewrite URLs on the fly
•mod_speling (sic) - to help your readers who might misspell URLs
•mod_ssl - to allow for strong cryptography using SSL
•mod_userdir - to allow system users to have their own Web page directories
Please keep in mind that these aren't all the modules I might install on a given system. Read the details about the modules to determine which ones you need.
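For example, a configure line that pulls in a few of those modules might look like this ( the prefix and module picks are illustrative; Apache 2.0 names the switches --enable-<module> ):
  ./configure --prefix=/usr/local/apache2 \
              --enable-rewrite \
              --enable-ssl \
              --enable-speling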
Build
As with any source installation, you'll then need to build the installation:
  make
  make install

Apache server configuration

Apache is controlled by a series of configuration files: httpd.conf, access.conf, and srm.conf (there's actually also a mime.types file, but you have to deal with that only when you're adding or removing MIME types from your server, which shouldn't be too often). The files contain instructions, called directives, that tell Apache how to run. Several companies offer GUI-based Apache front-ends, but it's easier to edit the configuration files by hand.
Remember to make back-up copies of all your Apache configuration files, in case one of the changes you make while experimenting renders the Web server inoperable.
Also, remember that configuration changes you make don't take effect until you restart Apache. If you've configured Apache to run as an inetd server, then you don't need to worry about restarting, since inetd will do that for you.
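If Apache runs standalone, apachectl ( installed alongside httpd ) handles the restart; checking the syntax first avoids taking the server down with a typo:
  apachectl configtest
  apachectl graceful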
Download the reference card
As with other open-source projects, Apache users share a wealth of information on the Web. Possibly the single most useful piece of Apache-related information--apart from the code itself, of course--is a two-page guide created by Andrew Ford.
Called the Apache Quick Reference Card, it's a PDF file (also available in PostScript) generated from a database of Apache directives. There are a lot of directives, and Ford's card gives you a handy reference to them.
While this may not seem like a tip on how to run Apache, it will make your Apache configuration go much smoother because you will have the directives in an easy-to-access format.
One quick note--we found that the PDF page was a bit larger than the printable area of our printer (an HP LaserJet 8000 N). So we set the Acrobat reader to scale-to-fit and the pages printed just fine.
Use one configuration file
The typical Apache user has to maintain three different configuration files--httpd.conf, access.conf, and srm.conf. These files contain the directives to control Apache's behavior.
The tips in this story keep the configuration files separate, since it's a handy way to compartmentalize the different directives. But Apache itself doesn't care--if you have a simple enough configuration or you just want the convenience of editing a single file, then you can place all the configuration directives in one file. That one file should be httpd.conf, since it is the first configuration file that Apache interprets. You'll have to include the following directives in httpd.conf:
AccessConfig /dev/null
ResourceConfig /dev/null
That way, Apache won't cough up an error message about the missing access.conf and srm.conf files. Of course, you'll also need to copy the directives from srm.conf and access.conf into your new httpd.conf file.
Restrict access
Say you have document directories or files on your Web server that should be visible only to a select group of computers. One way to protect those pages is by using host-based authentication. In your access.conf file, you would add something like this:

<Directory /path/to/protected/docs>
order deny,allow
deny from all
allow from 10.10.64
</Directory>

The <Directory> directive is what's called a sectional directive. It encloses a group of directives that apply to the specified directory. The Apache Quick Reference Card includes a listing of sectional directives.

The File Transfer Protocol (FTP)

The File Transfer Protocol (FTP) is one of the most common means of copying files between servers over the Internet. Most web-based download sites use the built-in FTP capabilities of web browsers, and therefore most server-oriented operating systems include an FTP server application as part of the software suite. Linux is no exception.
This chapter will show you how to convert your Linux box into an FTP server using the default Very Secure FTP Daemon (VSFTPD) package included in Fedora.
FTP Overview
FTP relies on a pair of TCP ports to get the job done. It operates in two connection channels as I'll explain:
FTP Control Channel, TCP Port 21: All commands you send and the FTP server's responses to those commands go over the control connection, but any data sent back (such as "ls" directory listings or actual file data in either direction) goes over the data connection.
FTP Data Channel, TCP Port 20: In active mode, this port is the server's source port for the data transfers between the client and server.
In addition to these channels, there are several varieties of FTP.
Types of FTP
From a networking perspective, the two main types of FTP are active and passive. In active FTP, the FTP server initiates a data transfer connection back to the client. For passive FTP, the connection is initiated from the FTP client. These are illustrated in Figure 15-1.
Figure 15-1 Active And Passive FTP Illustrated
From a user management perspective there are also two types of FTP: regular FTP, in which files are transferred using the username and password of a regular user on the FTP server, and anonymous FTP, in which general access is provided to the FTP server using a well known universal login method.
Take a closer look at each type.
Active FTP
The sequence of events for active FTP is:
1.Your client connects to the FTP server by establishing an FTP control connection to port 21 of the server. Your commands such as 'ls' and 'get' are sent over this connection.
2.Whenever the client requests data over the control connection, the server initiates data transfer connections back to the client. The source port of these data transfer connections is always port 20 on the server, and the destination port is a high port (greater than 1024) on the client.
3.Thus the ls listing that you asked for comes back over the port 20 to high port connection, not the port 21 control connection.
FTP active mode therefore transfers data in a way that is counterintuitive relative to the TCP standard, as it selects port 20 as its source port (not a random high port greater than 1024) and connects back to the client on a random high port that has been pre-negotiated on the port 21 control connection.
Active FTP may fail in cases where the client is protected from the Internet via many to one NAT (masquerading). This is because the firewall will not know which of the many servers behind it should receive the return connection.
Passive FTP
Passive FTP works differently:
1.Your client connects to the FTP server by establishing an FTP control connection to port 21 of the server. Your commands such as ls and get are sent over that connection.
2.Whenever the client requests data over the control connection, the client initiates the data transfer connections to the server. The source port of these data transfer connections is always a high port on the client with a destination port of a high port on the server.
Passive FTP should be viewed as the server never making an active attempt to connect to the client for FTP data transfers. Because the client always initiates the required connections, passive FTP works better for clients protected by a firewall.
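
With VSFTPD ( the FTP server covered in this chapter ), passive mode and a fixed data-port range, which makes firewalling much easier, are set in vsftpd.conf ( its location varies by distribution; Fedora puts it under /etc/vsftpd/ ). A minimal excerpt; the port range is an arbitrary example:

# allow passive-mode data connections
pasv_enable=YES
# pin passive data ports to a known range so the firewall can allow just these
pasv_min_port=30000
pasv_max_port=30100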


ALL ABOUT WEB SERVERS

The Web server is the basis of everything that happens with your Web page, and yet often people know nothing about it. Do you even know what Web server software is running on the machine? How about the machine's operating system?
For simple Web sites, these questions really don't matter. After all, a Web page that runs on Unix with a Netscape Server will usually run okay on a Windows machine with IIS. But once you decide you need more advanced features on your site (like CGI, database access, ASP, etc.), knowing what's on the back-end means the difference between things working and not.
The Operating System
Most Web servers are run on one of three Operating Systems:
1. Unix
2. Linux
3. Windows NT
You can generally tell a Windows NT machine by the extensions on the Web pages. For example, all the pages on Web Design/HTML @ About.com end in .htm. This hearkens back to DOS when file names were required to have a 3 character extension. Linux and Unix Web servers usually serve files with the extension .html.
Unix, Linux, and Windows are not the only operating systems for Web servers, just some of the most common. I have run Web servers on Windows 95 and MacOS. And just about any operating system that exists has at least one Web server for it, or the existing servers can be compiled to run on them.
The Servers
A Web server is just a program running on a computer. It provides access to Web pages via the Internet or other network. Servers also do things like track hits to the site, record and report error messages, and provide security.
Apache
This is possibly the world's most popular Web server. It is the most widely used and because it is released as "open source" and with no fee for use, it has had a lot of modifications and modules made for it. You can download the source code, and compile it for your machine, or you can download binary versions for many operating systems (like Windows, Solaris, Linux, OS/2, freebsd, and many more). There are many different add-ons for Apache, as well. The drawback to Apache is that there might not be as much immediate support for it as other commercial servers. However, there are many pay-for-support options now available. If you use Apache, you'll be in very good company.
Internet Information Services (IIS)
Internet Information Services (IIS) is Microsoft's addition to the Web server arena. If you are running a Windows Server system, this might be the best solution for you to implement. It interfaces cleanly with the Windows Server OS, and you are backed by the support and power of Microsoft. The biggest drawback to this Web server is that Windows Server is very expensive. It is not meant for small businesses to run their Web services off, and unless you have all your data in Access and plan to run a solely Web-based business, it is much more than a beginning Web development team needs. However, its connections to ASP.NET and the ease with which you can connect to Access databases make it ideal for Web businesses.
Sun Java Web Server
The third big Web server of the group is the Sun Java Web Server. This is most often the server of choice for corporations that are using Unix Web server machines. The Sun Java Web Server offers some of the best of both Apache and IIS in that it is a supported Web server with strong backing by a well-known company. It also has a lot of support with add-in components and APIs to give it more options. This is a good server if you are looking for good support and flexibility on a Unix platform.

How do I kill a process in Linux

How do I kill a process in Linux?
Linux, like all other UNIX-like operating systems, comes with the kill command. The kill command sends the specified signal to the specified process or process group. If no signal is specified, the TERM signal is sent.
Kill process using kill command under Linux/UNIX
The kill command works under both Linux and UNIX/BSD-like operating systems.
Step #1: First, you need to find out the process ID (PID)
Use the ps command or the pidof command to find out the process ID (PID). Syntax:
ps aux | grep processname
pidof processname
For example, if the process name is lighttpd, you can use any one of the following commands to obtain its process ID:
# ps aux | grep lighttpd
Output:
lighttpd 3486 0.0 0.1 4248 1432 ? S Jul31 0:00 /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf
lighttpd 3492 0.0 0.5 13752 3936 ? Ss Jul31 0:00 /usr/bin/php5-cg
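
With a PID in hand ( 3486 from the output above ), kill sends TERM by default; signal 9 ( KILL ) is the last resort for a process that ignores TERM:
# kill 3486
# kill -9 3486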