Thursday, December 31, 2009

VERITAS NetBackup Related Documents:

264256: VERITAS NetBackup (tm) 5.0 Installation Guide for UNIX
 http://support.veritas.com/docs/264256


268087: VERITAS NetBackup (tm) 5.1 Installation Guide for UNIX
 http://support.veritas.com/docs/268087


278130: VERITAS NetBackup (tm) 6.0 Release Notes Known Issues Documentation
 http://support.veritas.com/docs/278130


278132: VERITAS NetBackup (tm) 6.0 Release Impact Bulletin
 http://support.veritas.com/docs/278132


279261: Veritas NetBackup (tm) 6.0 Installation Guide for UNIX
 http://support.veritas.com/docs/279261


284146: Top 10 Recommended Veritas NetBackup (tm) TechNotes
 http://support.veritas.com/docs/284146


290199: Veritas NetBackup (tm) 6.5 Installation Guide for UNIX and Linux
 http://support.veritas.com/docs/290199

NetBackup Support for Solaris 10

The following describes NetBackup Support for Solaris 10.

1. Sun Solaris 10 support* begins with NetBackup 5.0 Maintenance Pack 4 and NetBackup 5.1 Maintenance Pack 2 as follows:

Supported OS version    Hardware    NBU Support         Patch Level
Solaris 10              SPARC       NetBackup Client    5.0 MP4 / 5.1 MP2
Solaris 10              x86         NetBackup Client    5.1 MP2
Solaris 10              SPARC       NetBackup Server    5.0 MP5 / 5.1 MP2
Solaris 10              Opteron     NetBackup Client    5.1 MP3

* Base OS support, full support limited by #6 in this document.

Note: NetBackup 6.x requires no special Maintenance Packs for Solaris 10 Support.

After loading NetBackup version 5.0 or 5.1 from CD-ROM media, you must update your system to either 5.0 MP4 (for 5.0 support of NetBackup client), 5.0 MP5 (for 5.0 support of NetBackup server), or 5.1 MP2 (when running 5.1). Sun Solaris 10 is not supported on the base CD-ROM version of either NetBackup 5.0 or 5.1. There are known connection and Java GUI issues that will be encountered if you attempt to run Sun Solaris 10 on the CD-ROM version of NetBackup. Therefore, the corresponding patch update must be applied. This requirement is due to a new inetd design method introduced in Solaris 10.

2. Use of NetBackup Advanced Client (ADC) methods is supported on Sun Solaris 10 beginning with Veritas Storage Foundation Suite version 4.1, released in 2005. Check the Veritas Web site for availability.

3. Veritas Storage Migrator option is not supported on Sun Solaris 10 until the NetBackup 6.0 release.

4. The following script must be run in order for "Solaris Solaris10" or "Solaris Solaris_x86_10" to show as a client selection in the drop down list for backup policies:

/usr/openv/netbackup/bin/goodies/new_clients

Upon running this script, the Solaris 10 choices will be available in the drop-down menu for Solaris 10 clients.

5. Solaris 10 Zone Support: The above support is for base Solaris 10 OS support. NetBackup Server and Enterprise Client Components (master server, media server, SAN media server and SAN client) are supported only in a global zone. Only the NetBackup Standard Client is supported in non-global zones.
Veritas Infrastructure Core Services (ICS) are only supported in the global zone. This includes packages such as the Veritas Authentication service (VxAT) and the Veritas Authorization Service (VxAZ). Attempts to install these packages in a local zone will fail during the installation.
Systems configured with non-global zones can be backed up with the NetBackup Solaris 10 standard client, provided the standard client is loaded on each non-global zone in which a backup is desired. On a standard non-global zone, a workaround must be performed to successfully load the NetBackup standard client. Prior to installation, the /usr/openv directory must be created and made writeable for a successful NetBackup Client installation. To do this, you must use the "zlogin" process to become "Zone root" on the non-global zone. Then, you will be able to create a link from /usr/openv to a writeable location on the non-global zone.
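A minimal sketch of that workaround (the zone name and the writeable target directory are assumptions; on sparse zones where /usr is inherited read-only, the link may need to be prepared from the global zone instead):

global# zlogin myzone                  # become root on the non-global zone
myzone# mkdir -p /var/openv            # a writeable location inside the zone
myzone# ln -s /var/openv /usr/openv    # point /usr/openv at the writeable area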

VERITAS NetBackup

Before:
•VERITAS NetBackup 5.x Troubleshooting Techniques for UNIX (VT-261)
•VERITAS NetBackup 5.x Vault (VT-263)

NetBackup 6.0 Architecture
•Master Server, EMM Server, Media Server, and Client architecture
•Intelligent Resource Manager (IRM)
•Enterprise Media Manager (EMM)
•Master Server components
•EMM Server components
•Media Server components
•Client components
•Backup Process Flow
•Restore Process Flow
Module 2 - NetBackup 6.0 Installation
•Installation requirements
•Pre-installation activities
•NetBackup 6.0 Installation: Initial installation
•NetBackup 6.0 Installation: Upgrade installation
•Post-installation activities
Module 3 - NetBackup 6.0 Installation Lab
•Installing NetBackup 6.0 Master Server, Media Server, EMM Server, and Client
•NetBackup 6.0 licensing
•NetBackup 6.0 directory structures
•NetBackup 6.0 daemons/services/processes
Module 4 - NetBackup 6.0 Configuration and Operations
•NetBackup 6.0 GUIs
•NetBackup 6.0 Configuration Devices, Storage Units, Media, Backup Policies
•Backups and Restores
•NetBackup 6.0 Catalog backups and restores
Module 5 - NetBackup 6.0 Configuration and Operations Lab
•NetBackup GUIs
•Configuring NetBackup 6.0 devices, storage units, media, backup policies, and catalog backups.
•Performing basic backup and restore operations
•Performing catalog backup and restore operations
•Monitoring NetBackup jobs
Module 6 - NetBackup 6.0 Internals and Support
•Universal logging (VxUL) concepts
•Universal logging commands and operations
•Legacy debug logging
•NetBackup 6.0 patches and packs
Module 7 - NetBackup 6.0 Support Activities Lab
•Enabling NetBackup 6.0 debug logging
•Locating/analyzing NetBackup 6.0 debug logs
•NetBackup 6.0 error/status codes
•NetBackup 6.0 Online Troubleshooting Guide
•NetBackup 6.0 command line usage
•NetBackup 6.0 patches and packs
Module 8 - NetBackup 6.0 New Installation Lab
•Installing a Master Server (EMM Server/Media Server)
•Installing a Media Server system
•Configuring NetBackup
•Verifying NetBackup Operations

CLUSTER


A computer cluster is a group of linked computers, working together closely so that in many respects they form a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks. Clusters are usually deployed to improve performance and/or availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.

High-availability (HA) clusters
High-availability clusters (also known as Failover Clusters) are implemented primarily for the purpose of improving the availability of services which the cluster provides. They operate by having redundant nodes, which are then used to provide service when system components fail. The most common size for an HA cluster is two nodes, which is the minimum requirement to provide redundancy. HA cluster implementations attempt to use redundancy of cluster components to eliminate single points of failure.

There are many commercial implementations of High-Availability clusters for many operating systems. The Linux-HA project is one commonly used free software HA package for the Linux operating system.
 Load-balancing clusters
Load-balancing clusters link multiple computers to share computational workload, functioning as a single virtual computer. Logically, from the user's side, they are multiple machines, but they function as a single virtual machine. Requests initiated by users are managed by, and distributed among, all the standalone computers that form the cluster. This results in balanced computational work among the different machines, improving the performance of the cluster system.

Compute clusters
Often clusters are used primarily for computational purposes, rather than handling IO-oriented operations such as web service or databases. For instance, a cluster might support computational simulations of weather or vehicle crashes. The primary distinction within compute clusters is how tightly coupled the individual nodes are. For instance, a single compute job may require frequent communication among nodes - this implies that the cluster shares a dedicated network, is densely located, and probably has homogeneous nodes. This cluster design is usually referred to as a Beowulf cluster. The other extreme is where a compute job uses one or a few nodes, and needs little or no inter-node communication. This latter category is sometimes called "Grid" computing. Tightly coupled compute clusters are designed for work that might traditionally have been called "supercomputing". Middleware such as MPI (Message Passing Interface) or PVM (Parallel Virtual Machine) permits compute clustering programs to be portable to a wide variety of clusters.

          


VERITAS NETBACKUP SKILLS

•Identify the architecture and components of NetBackup 6.0 that reside on the Master Server, EMM Server, Media Server, and Client.
•Identify the process flow that occurs during basic backup and restore operations.
•Install NetBackup 6.0 Server and Client software.
•Configure NetBackup 6.0 devices, storage units, media, backup policies, and catalog backups.
•Perform basic and catalog backup and restore operations.
•Perform basic, common NetBackup administrative tasks.
•Enable and examine both traditional and VxUL debug logging.
•Perform NetBackup operations using new command line interface commands and command arguments.
•Locate and identify error messages presented in/by NetBackup 6.0.
•Perform basic troubleshooting of NetBackup 6.0 backup and restore problems.

NEVER automount your backup directories

the infamous "rm -rf /" will erase your automounted backups too


Always mount the backup directory before performing backups
Always unmount the backup directory when done performing backups


Prune unnecessary files from your backups
tar --exclude /tmp --exclude /proc ...
egrep -v ".netscape/cache|\.o$|\.swp$|~$| ..."


Be sure that the backup files are NOT group/world writable/readable
They contain confidential data, passwords, etc.


chown root /Backup_Dir/Date.x.tgz
chgrp backup /Backup_Dir/Date.x.tgz
chmod 440 /Backup_Dir/Date.x.tgz


Encrypt your backup files if you are really paranoid about your password files
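A minimal sketch using GnuPG symmetric encryption (assumes gpg is installed):

gpg -c /Backup_Dir/Date.x.tgz    # prompts for a passphrase, writes Date.x.tgz.gpg
rm /Backup_Dir/Date.x.tgz        # keep only the encrypted copy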




Example Backup Commands
Creating datestamps for file names
date '+%Y%m%d'
Outputs 20020615 for June 15, 2002


date '+%Y.%m.%d'
Outputs 2002.06.15 for June 15, 2002
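Feeding the datestamp into a backup file name (a sketch using the format above):

tar zcvf /Backup_Dir/`date '+%Y%m%d'`.Full.tgz $DIRS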

www.thing.dyndns.org bash examples


dd -- copying partitions, mirrored partitions
mounting and unmounting are NOT needed in this case
dd if=/dev/hda1 of=/dev/hdb1 bs=1024


Simplified tar --newer Incremental Backup example
LastBackupTime=`cat /Backup_Dir/Last.txt`
tar zcvf /Backup_Dir/Date.x.tgz --newer "$LastBackupTime" $DIRS
date > /Backup_Dir/Last.txt


Simplified find | tar Incremental Backup example
Cnt=`cat /Backup_Dir/Last.txt`
find $DIRS -mtime -$Cnt -print | tar zcvf /Backup_Dir/Date.$Cnt.tgz -T -
echo `expr $Cnt + 1` > /Backup_Dir/Last.txt


Simplified dump | restore Backup example
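A minimal sketch, assuming an ext2/ext3 filesystem on /dev/hda5 and a backup area mounted on /mnt/backup (the u flag records the dump time in the dumpdates file, so later runs at levels 1-9 pick up only changes):

dump -0uf - /dev/hda5 | ( cd /mnt/backup ; restore -rf - )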


Simplified cpio Backup example
find /home -xdev -print | cpio -pdm /mnt/backup

Creating Backup Files

Copying Directories and Files

dd Copying / on /dev/hda1 to a /dev/hdb1 ( backup disk )
dd if=/dev/hda1 of=/dev/hdb1 bs=1024k

tar | tar Copying /home on /dev/hda5 to a /dev/hdb5 ( backup disk )
mount /dev/hdb5 /mnt/backup
( tar cf - /home ) | ( cd /mnt/backup ; tar xvpf - )
umount /mnt/backup

scp Copying /home on /dev/hda5 to a /dev/hdb5 ( backup disk )
mount /dev/hdb5 /mnt/backup
scp -pr /home /mnt/backup
umount /mnt/backup
scp -pr user1@host:/Source_to_Backup user2@BackupServer:/Backup_Dir

find | cpio Copying /home on /dev/hda5 to a /dev/hdb5 ( backup disk )
mount /dev/hdb5 /mnt/backup
find /home -print | cpio -pdm /mnt/backup
umount /mnt/backup

To View Contents
cpio -it < file.cpio

To Extract a file
cpio -id "usr/share/file/doc/*" < file.cpio

Creating Backup Files
tar Backup /home to ( backup disk )
mount /dev/hdb5 /mnt/backup
tar zcvf /mnt/backup/home.Date.tgz /home > /mnt/backup/home.Date.log
umount /mnt/backup
Encrypt home.Date.tgz if security of sensitive data is an issue
Example Full/Incremental Script

Typical list of directories you WANT to backup regularly...
DIRS is typically: /root /etc /home/

Using Month/Date for Backup Filenames
date '+%Y.%m.%d'

Simple Full Backup
tar zcvf /Backup_Dir/Month_Date.Full.tgz $DIRS

Simple Incremental Backups
Change the Month_Date.tgz file
Backup the files to a Different Server:/Disks [ reasons ]

Simple Daily Incremental Backup
find $DIRS -mtime -1 -type f -print | tar zcvf /BackupDir_1/Month_Date.tgz -T -

Simple Weekly Incremental Backup
find $DIRS -mtime -7 -type f -print | tar zcvf /BackupDir_7/Month_Date.7.tgz -T -
use -mtime -32 to cover for unnoticed failed incremental backups from last week

Simple Monthly Incremental Backup
find $DIRS -mtime -31 -type f -print | tar zcvf /BackupDir_30/Year_Month.30.tgz -T -
use -mtime -93 to cover for unnoticed failed incremental backups from last month

BACKUP THE LINUX SERVER

Which servers to backup
Which directories/files to backup
What is the Backup media
Backup Failure Modes
Testing Your Backups

OffSite Backups

Backup Servers
Which Servers to backup
Pull the Ethernet cable to it... and see what happens.

Backups should be on a different server than the server/data you are trying to back up
protect against the server wiping itself out and its backup
Backup all PCs on one LAN/hub to a local backup server - minimize traffic
Backup all local backup servers to the other building's backup server and vice versa
Backup Directories
Most of the "System" directories and files are already on the installation cdrom
Most of the "Updates" you applied are already scattered at the various mirrors on the internet
Which directories to backup is dictated by the size and topology of your network
Backup of a single server is significantly simpler than backup of different servers
www, email, firewall, ftp servers, home servers, file servers, etc


Which directories to backup is dictated by the partitions used to install
move system config files into /etc or /usr/local/etc so that they get backed up

If you can recreate your entire system on another disk...you've selected the "right directories to backup"
you should backup "user data"
/root /etc /home /usr/local
you should optionally backup log files and pending emails
/var/log /var/spool/mail
 Backup Media 
The number of servers and size of your data dictates your backup media
Backups onto floppy -- good for Full backups of /etc

Backups onto Zip -- good for backups of 200MB

Backups onto CDR -- good for backups of 600MB

Backups onto DVD -- good for backups of 4.7GB

Backups onto Tapes -- good for backups of up to 40-80GB, and more with tape libraries

Backups onto Disks -- good for 500GB -- xxTeraByte RAID5 backups


NEVER backup your data to the same partition, nor same disk
if you lose the disk...you do NOT have any backups


Backups are best done to a DIFFERENT server
protect your backups from hardware flakyness and random power surges etc



Tape and CDR backup media REQUIRES you to change it daily/weekly...
if you forget, you lose the previous backup... and you may also lose today's incremental backup data too

You have to clean the tape head regularly

You can lose one of the daily incremental tapes, or someone can walk out with your corp data
Restoring a file from tape can be a slow process ( hours and hours )


One 20GB disk can hold about 1-2 months of full and incremental backups of another 20GB disk, depending on data and backup methodology


Backup Failure Modes

Simulate a random disk crash -- just turn off the power
oNothing should break -- people can keep working
oNothing should break -- email and web still works

•Rotate your Backups
odaily incremental on Backup-Daily
oweekly 30 day incremental on Backup-Weekly ( different server )
oweekly Full backups ( different server than daily or weekly-30 )

•do NOT erase last week's backup
okeep multiple backups on different servers
okeep the 32 or 94 day incrementals if you decide to erase *.Full.tgz

•Test your Backups from Bare Metal Restores
oalways start from virgin disk when testing backups
oalways just apply the patches ... NO manual changes

Apache webserver

Apache is one of the most popular Web servers on the Web right now, and part of its charm is that it's free. It also has a lot of features that make it very extensible and useful for many different types of Web sites. It is a server that is used for personal Web pages up to enterprise level sites.
This article will discuss how to install Apache on a Linux system. Before we start you should be at least comfortable working in Linux - changing directories, using tar and gunzip, and compiling with make (I'll discuss where to get binaries if you don't want to mess with compiling your own). You should also have access to the root account on the server machine.
Download Apache
I recommend downloading the latest stable release. At the time of this writing, that was Apache 2.0. The best place to get Apache is from the Apache HTTP Server download site. Download the sources appropriate to your system. Binary releases are available as well.
Extract the Files
Once you've downloaded the files, you need to uncompress and untar them:
  gunzip -d httpd-2_0_NN.tar.gz
  tar xvf httpd-2_0_NN.tar
This creates a new directory under the current directory with the source files.
Configuring
Once you've got the files, you need to tell your machine where to find everything by configuring the source files. The easiest way is to accept all the defaults and just type:
  ./configure
Of course, most people don't want to accept just the default choices. The most important option is the prefix= option. This specifies the directory where the Apache files will be installed. You can also set specific environment variables and modules. Some of the modules I like to have installed are:
•mod_alias - to map different parts of the URL tree
•mod_include - to parse Server Side Includes
•mod_mime - to associate file extensions with its MIME-type
•mod_rewrite - to rewrite URLs on the fly
•mod_speling (sic) - to help your readers who might misspell URLs
•mod_ssl - to allow for strong cryptography using SSL
•mod_userdir - to allow system users to have their own Web page directories
Please keep in mind that these aren't all the modules I might install on a given system. Read the details about the modules to determine which ones you need.
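For example, a configure run combining a custom install prefix with a few of the modules above might look like this (the path and module list are just an illustration):

  ./configure --prefix=/usr/local/apache2 \
              --enable-rewrite --enable-ssl --enable-userdir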
Build
As with any source installation, you'll then need to build the installation:
  make
  make install

Apache server configuration

Apache is controlled by a series of configuration files: httpd.conf, access.conf, and srm.conf (there's actually also a mime.types file, but you have to deal with that only when you're adding or removing MIME types from your server, which shouldn't be too often). The files contain instructions, called directives, that tell Apache how to run. Several companies offer GUI-based Apache front-ends, but it's easier to edit the configuration files by hand.
Remember to make back-up copies of all your Apache configuration files, in case one of the changes you make while experimenting renders the Web server inoperable.
Also, remember that configuration changes you make don't take effect until you restart Apache. If you've configured Apache to run as an inetd server, then you don't need to worry about restarting, since inetd will do that for you.
Download the reference card
As with other open-source projects, Apache users share a wealth of information on the Web. Possibly the single most useful piece of Apache-related information--apart from the code itself, of course--is a two-page guide created by Andrew Ford.
Called the Apache Quick Reference Card, it's a PDF file (also available in PostScript) generated from a database of Apache directives. There are a lot of directives, and Ford's card gives you a handy reference to them.
While this may not seem like a tip on how to run Apache, it will make your Apache configuration go much smoother because you will have the directives in an easy-to-access format.
One quick note--we found that the PDF page was a bit larger than the printable area of our printer (an HP LaserJet 8000 N). So we set the Acrobat reader to scale-to-fit and the pages printed just fine.
Use one configuration file
The typical Apache user has to maintain three different configuration files--httpd.conf, access.conf, and srm.conf. These files contain the directives to control Apache's behavior.
The tips in this story keep the configuration files separate, since it's a handy way to compartmentalize the different directives. But Apache itself doesn't care--if you have a simple enough configuration or you just want the convenience of editing a single file, then you can place all the configuration directives in one file. That one file should be httpd.conf, since it is the first configuration file that Apache interprets. You'll have to include the following directives in httpd.conf:
AccessConfig /dev/null
ResourceConfig /dev/null
That way, Apache won't cough up an error message about the missing access.conf and srm.conf files. Of course, you'll also need to copy the directives from srm.conf and access.conf into your new httpd.conf file.
Restrict access
Say you have document directories or files on your Web server that should be visible only to a select group of computers. One way to protect those pages is by using host-based authentication. In your access.conf file, you would add something like this:

<Directory /usr/local/apache/htdocs/private>
order deny,allow
deny from all
allow from 10.10.64
</Directory>

The <Directory> directive is what's called a sectional directive. It encloses a group of directives that apply to the specified directory. The Apache Quick Reference Card includes a listing of sectional directives.

The File Transfer Protocol (FTP)

The File Transfer Protocol (FTP) is one of the most common means of copying files between servers over the Internet. Most web-based download sites use the built-in FTP capabilities of web browsers, and therefore most server-oriented operating systems usually include an FTP server application as part of the software suite. Linux is no exception.
This chapter will show you how to convert your Linux box into an FTP server using the default Very Secure FTP Daemon (VSFTPD) package included in Fedora.
FTP Overview
FTP relies on a pair of TCP ports to get the job done. It operates in two connection channels as I'll explain:
FTP Control Channel, TCP Port 21: All commands you send and the ftp server's responses to those commands will go over the control connection, but any data sent back (such as "ls" directory lists or actual file data in either direction) will go over the data connection.
FTP Data Channel, TCP Port 20: This port is used for all subsequent data transfers between the client and server.
In addition to these channels, there are several varieties of FTP.
Types of FTP
From a networking perspective, the two main types of FTP are active and passive. In active FTP, the FTP server initiates a data transfer connection back to the client. For passive FTP, the connection is initiated from the FTP client. These are illustrated in Figure 15-1.
Figure 15-1 Active And Passive FTP Illustrated
From a user management perspective there are also two types of FTP: regular FTP, in which files are transferred using the username and password of a regular user on the FTP server, and anonymous FTP, in which general access is provided to the FTP server using a well known universal login method.
Take a closer look at each type.
Active FTP
The sequence of events for active FTP is:
1.Your client connects to the FTP server by establishing an FTP control connection to port 21 of the server. Your commands such as 'ls' and 'get' are sent over this connection.
2.Whenever the client requests data over the control connection, the server initiates data transfer connections back to the client. The source port of these data transfer connections is always port 20 on the server, and the destination port is a high port (greater than 1024) on the client.
3.Thus the ls listing that you asked for comes back over the port 20 to high port connection, not the port 21 control connection.
FTP active mode therefore transfers data in a way that is counter-intuitive to the TCP standard, as it selects port 20 as its source port (not a random high port that's greater than 1024) and connects back to the client on a random high port that has been pre-negotiated on the port 21 control connection.
Active FTP may fail in cases where the client is protected from the Internet via many to one NAT (masquerading). This is because the firewall will not know which of the many servers behind it should receive the return connection.
Passive FTP
Passive FTP works differently:
1.Your client connects to the FTP server by establishing an FTP control connection to port 21 of the server. Your commands such as ls and get are sent over that connection.
2.Whenever the client requests data over the control connection, the client initiates the data transfer connections to the server. The source port of these data transfer connections is always a high port on the client with a destination port of a high port on the server.
Passive FTP should be viewed as the server never making an active attempt to connect to the client for FTP data transfers. Because the client always initiates the required connections, passive FTP works better for clients protected by a firewall.
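With VSFTPD, the passive-mode behavior is controlled from /etc/vsftpd/vsftpd.conf; a minimal sketch (the port range is an arbitrary choice):

pasv_enable=YES             # allow passive-mode data connections
pasv_min_port=30000         # limit the passive data ports...
pasv_max_port=30100         # ...so a firewall can open just this range
connect_from_port_20=YES    # active mode: use port 20 as the data source port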

                   

ALL ABOUT WEB SERVERS

The Web server is the basis of everything that happens with your Web page, and yet often people know nothing about it. Do you even know what Web server software is running on the machine? How about the machine's operating system?
For simple Web sites, these questions really don't matter. After all, a Web page that runs on Unix with a Netscape Server will usually run okay on a Windows machine with IIS. But once you decide you need more advanced features on your site (like CGI, database access, ASP, etc.), knowing what's on the back-end means the difference between things working and not.
The Operating System
Most Web servers are run on one of three Operating Systems:
1. Unix
2. Linux
3. Windows NT
You can generally tell a Windows NT machine by the extensions on the Web pages. For example, all the pages on Web Design/HTML @ About.com end in .htm. This hearkens back to DOS when file names were required to have a 3 character extension. Linux and Unix Web servers usually serve files with the extension .html.
Unix, Linux, and Windows are not the only operating systems for Web servers, just some of the most common. I have run Web servers on Windows 95 and MacOS. And just about any operating system that exists has at least one Web server for it, or the existing servers can be compiled to run on them.
The Servers
A Web server is just a program running on a computer. It provides access to Web pages via the Internet or other network. Servers also do things like track hits to the site, record and report error messages, and provide security.
Apache
This is possibly the world's most popular Web server. It is the most widely used and, because it is released as "open source" with no fee for use, it has had a lot of modifications and modules made for it. You can download the source code and compile it for your machine, or you can download binary versions for many operating systems (like Windows, Solaris, Linux, OS/2, FreeBSD, and many more). There are many different add-ons for Apache, as well. The drawback to Apache is that there might not be as much immediate support for it as there is for commercial servers. However, there are many pay-for-support options now available. If you use Apache, you'll be in very good company.
Internet Information Services (IIS)
Internet Information Services (IIS) is Microsoft's addition to the Web server arena. If you are running on a Windows Server system, this might be the best solution for you to implement. It interfaces cleanly with the Windows Server OS, and you are backed by the support and power of Microsoft. The biggest drawback to this Web server is that Windows Server is very expensive. It is not meant for small businesses to run their Web services off, and unless you have all your data in Access and plan to run a solely Web based business, it is much more than a beginning Web development team needs. However, its connections to ASP.NET and the ease with which you can connect to Access databases make it ideal for Web businesses.
Sun Java Web Server
The third big Web server of the group is the Sun Java Web Server. This is most often the server of choice for corporations that are using Unix Web server machines. The Sun Java Web Server offers some of the best of both Apache and IIS in that it is a supported Web server with strong backing by a well known company. It also has a lot of support with add-in components and APIs to give it more options. This is a good server if you are looking for good support and flexibility on a Unix platform.

How do I kill a process in Linux

How do I kill a process in Linux?
Linux and all other UNIX-like operating systems come with the kill command. kill sends the specified signal to the specified process or process group; if no signal is specified, the TERM signal is sent.
Kill process using kill command under Linux/UNIX
kill command works under both Linux and UNIX/BSD like operating systems.
Step #1: First, you need to find out the process ID (PID)
Use the ps command or the pidof command to find out the process ID (PID). Syntax:
ps aux | grep processname
pidof processname
For example, if the process name is lighttpd, you can use either of the following commands to obtain its process ID:
# ps aux | grep lighttpd
Output:
lighttpd 3486 0.0 0.1 4248 1432 ? S Jul31 0:00 /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf
lighttpd 3492 0.0 0.5 13752 3936 ? Ss Jul31 0:00 /usr/bin/php5-cg
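Step #2: Kill the process using its PID
A sketch using the first PID from the output above:

# kill 3486       # sends TERM, the default signal
# kill -9 3486    # last resort: SIGKILL, if the process ignores TERM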

Wednesday, December 16, 2009

Sun SPARC Enterprise T5120


Product : Sun SPARC Enterprise T5120, T5220   
Date of Resolved Release : 12-Feb-2008  

Some Sun SPARC Enterprise T5120 and T5220 Servers Shipped With an Incorrect Solaris 10 Image Containing an Insecure Configuration
1. Impact
Sun SPARC Enterprise T5120 and T5220 servers with a datecode prior to BEL07480000 have been mistakenly shipped with insecure factory settings in the pre-installed Solaris 10 OS image. These settings may allow a local or remote user to execute arbitrary commands with the privileges of the root (uid 0) user.
(To determine if your systems are affected by this issue please look for the changed parameters and extra files listed in the Contributing Factors section below).
2. Contributing Factors
This issue can occur on the following platforms:
Sun SPARC Enterprise T5120 and T5220 Servers with datecode prior to BEL07480000
Note: Systems are only impacted by this issue if they have an incorrect factory image installed.
To determine the datecode on the T5120 or T5220, use either "Lights Out Management" (LOM) or prtdiag(1M) commands:

    ILOM CLI:  > show /SYS/
    ALOM CLI:  sc> showplatform
    prtdiag -v




The Sun Storage 7410

The Sun Storage 7410 Unified Storage System is ideal for enterprises requiring mission-critical storage and provides dramatically easier and faster ways to manage and scale your storage up to 288 TB[1].
At a Glance
Easy-to-use DTrace Analytics increase production uptime
Scales up to 288 TB[1]
High-availability Cluster option protects against downtime
Flash Hybrid Storage Pool improves response times
Scales throughput, performance, and capacity to meet application needs
More Features
--------------------------------------------------------------------------------
Key Applications
Bulk unified storage
HPC
Web 2.0
Server virtualization
Database/BIDW
Backup

--------------------------------------------------------------------------------
Change Your Storage
Discover how the Sun Storage 7000 can radically simplify your storage, improving performance and cutting costs.
 

Troubleshooting grub error

Introduction to the Boot Process
When an x86-based system powers on, the BIOS initializes the CPU, memory, and platform hardware. Upon completion, the BIOS loads the initial bootstrap software (that is, the bootloader) from the configured boot device and hands control over to the bootloader. The Solaris 10 3/05 OS and earlier releases use a Sun-developed bootloader that includes an interactive shell and a menu-driven device configuration assistant based on realmode drivers.
Starting with the Solaris 10 1/06 release, the open source GRUB or GNU GRand Unified Bootloader is used as the bootloader. The initial delivery is based on GRUB version 0.95 and will be updated as newer versions become available. The Solaris kernel is fully compliant with the Multiboot Specification (reference 2); hence, the Solaris OS can be booted via any bootloader implementing the Multiboot Specification.
The switch to GRUB brings several benefits to Solaris customers.
With GRUB it is very easy to specify kernel and boot options in the boot menu.
For end users, booting and installing from USB DVD drives is now supported.
It is easier for the Solaris OS to coexist with other operating systems on the same machine. In particular, it is possible for the Solaris OS to share the same GRUB bootloader with Linux.
Deploying the Solaris OS via the network is also simplified, particularly in the area of DHCP server setup. Vendor-specific options are no longer required in the DHCP server setup.
Developers no longer need to deal with realmode drivers, which were part of the bootloader required for previous Solaris releases.
For IHVs, it is now possible to deliver drivers at install time via CD/DVD in addition to floppies.
Finally, by adopting a bootloader developed by the open source community, Sun's customers can leverage the considerable GRUB experience gained within that community.

2. Booting the Solaris OS With GRUB
Once GRUB gains control, it displays a menu on the console asking the user to choose an OS instance to boot. The user may pick a menu item, modify a menu item using the built-in editor, or manually load an OS kernel in command mode. To boot the Solaris OS, GRUB must load a boot_archive file and a "multiboot" program. The boot archive is a ramdisk image containing Solaris kernel modules and data. GRUB simply puts it in memory without any interpretation. The multiboot program is an ELF executable with a Multiboot Specification-compliant header. Once loading is complete, GRUB hands control over to the multiboot program. GRUB itself then becomes inactive, and its memory is reclaimed.
The multiboot program is responsible for assembling core kernel modules in memory by reading the boot_archive, and passing boot-related information (as specified in the Multiboot Specification) to the kernel. Note that the multiboot program goes hand-in-hand with the boot_archive file. You cannot mix and match multiboot and boot_archive information from different releases or OS instances.
Once the kernel gains control, it will initialize CPU, memory, and I/O devices, and it will mount the root file system on the device as specified by the bootpath property with a file system type as specified by the property fstype. Properties can be set in /boot/solaris/bootenv.rc via the eeprom(1M) command or in the GRUB command line via the GRUB menu or shell. If the properties are not specified, the root file system defaults to UFS on /devices/ramdisk:a, which is the case when booting the install miniroot.

3. Installation
The Solaris OS may be installed from CD, DVD, and net install servers. The Solaris 10 1/06 release differs from the Solaris 10 3/05 release in several ways:
Minimum memory requirement: The system must have 256MB of main memory to boot the install miniroot. Systems with insufficient memory will get a message from GRUB: "Selected item can not fit in memory".
USB drive support: Installation from CD/DVD drives connected via USB interfaces is fully supported.
Net install: The standard procedure for setting up net install images remains the same. Clients are assumed to boot via the Preboot eXecution Environment (PXE) mechanism. Clients not capable of PXE boot can use a GRUB floppy (see Appendix B).
When booting the install miniroot, a GRUB menu is displayed. A user may interactively edit boot options (see section 4.2). After GRUB loads the Solaris OS, the following install menu is displayed:
1. Solaris Interactive (default)
2. Custom JumpStart
3. Solaris Interactive Text (Desktop session)
4. Solaris Interactive Text (Console session)
5. Apply driver updates
6. Single user shell
The Device Configuration Assistant and associated interactive shell, which users are familiar with from the Solaris 10 3/05 OS and earlier, are no longer present. Users wishing to add drivers required during install (for example, host adapter drivers) should choose option 5 and supply an ITU (Install Time Update) floppy or CD/DVD.
Option 6 is available for system recovery. It provides quick access to a root prompt without going through system identification. This option is identical to booting a Solaris Failsafe session (see section 4.4).

4. Managing the Boot Subsystem
4.1 BIOS
It is generally a good idea to update the BIOS firmware to the latest revision before installing the Solaris OS. This is typically accomplished by visiting the support page for the vendor that manufactured the computer.
Compared to the Solaris 10 3/05 release, the Solaris 10 1/06 OS uses a different subset of BIOS features. In particular, the kernel makes use of more information from the Advanced Configuration and Power Management Interface (ACPI) table, using the parser from Intel's ACPI CA software.
On systems that do not conform to BIOS 2.0 specifications, the syslog may contain messages related to parsing the ACPI table, such as:
ACPI-0725: *** Warning:
Type override - [4s] had invalid type (DEB_[\200IODB
Such messages are harmless and do not impact normal system operation. If ACPI errors prevent normal system boot, the user can disable the ACPI parser by setting acpi-user-options to 2 (see eeprom(1M)) in the kernel line of the GRUB menu:
kernel .. -B ...,acpi-user-options=2
In this case, the system assumes a set of standard ISA devices is present, including a keyboard, a mouse, two serial ports, and a parallel port.
4.2 Boot Options
To boot the Solaris OS, a user may specify the kernel to load, options to be passed to the kernel (see kernel(1M)), and a list of property names and values to customize system behaviors (see eeprom(1M)). At Solaris installation time, a set of default values are chosen for the system and stored in /boot/solaris/bootenv.rc. Users may change the settings by editing the GRUB menu or modifying the bootenv.rc file indirectly via the eeprom(1M) command.
Specifying kernel name and kernel options via eeprom requires setting the boot-file property. To boot the 32-bit kernel in verbose mode, run the following command:
# eeprom boot-file="kernel/unix -v"
To specify the same thing on the GRUB menu, modify the kernel command of the GRUB menu from:
kernel /platform/i86pc/multiboot
to:
kernel /platform/i86pc/multiboot kernel/unix -v
See kernel(1M) for additional boot arguments that the Solaris kernel accepts.
Properties other than boot-file can be specified on the GRUB kernel command line with this syntax:
kernel /platform/i86pc/multiboot -B prop1=val1[,prop2=val2...]
To configure the serial console on ttya (com1), set the console property to ttya:
kernel /platform/i86pc/multiboot -B console=ttya
If the property value contains commas, the value should be quoted. The following GRUB command sets the Solaris console to ttya in high speed.
kernel /platform/i86pc/multiboot -B console=ttya,ttya-mode="115200,8,n,1,-"
In short, specifying "-B foo=bar" in the GRUB menu is equivalent to running "eeprom foo=bar". The -B option in GRUB is primarily for temporary overrides. Permanent settings should be specified via eeprom(1M) so that they are preserved by the Solaris upgrade process.
4.3 Boot Archive
The boot archive refers to the file platform/i86pc/boot_archive. It is a collection of core kernel modules and configuration files packed in either UFS or ISOFS format. At boot time, GRUB loads the boot archive into system memory. The kernel can now initialize itself from data and text in the boot archive without performing I/O to the root device. Once the kernel gains sufficient I/O capability, it will mount the root file system on the real root device as specified by the bootpath property. At this point, the boot archive loaded by GRUB is discarded from memory.
The content of the boot archive is specified in /boot/solaris/filelist.ramdisk. Upon system shutdown, the system checks for updates to the root file system and updates the boot archive when necessary. You can also update the boot archive manually, prior to system shutdown, by running the bootadm(1M) command.
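A one-line sketch of that manual update:

# bootadm update-archive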
4.4 The Failsafe Menu Entry
New to the Solaris 10 1/06 OS is a file, /boot/x86.miniroot-safe, containing a bootable, standalone Solaris image. This file can be loaded by choosing the Solaris failsafe entry from the GRUB menu. This is for the convenience of system administrators when the normal entry fails to boot.
Suppose you add a new package containing a faulty driver, and the system panics at boot time. Upon reboot, you can pick the Solaris failsafe menu entry. While in the failsafe session, mount the root file system on /a and run pkgrm -R to remove the faulty package. Once this is complete, you can reboot to the normal Solaris entry to resume system operation.
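A sketch of that recovery (the root slice and package name here are hypothetical):

# mount /dev/dsk/c0t0d0s0 /a     # mount the normal root file system on /a
# pkgrm -R /a SUNWfaulty         # remove the faulty package from the /a root
# reboot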
The file /boot/x86.miniroot-safe can also be copied to portable media, such as a USB stick, as a recovery tool.
4.5 Keeping the System Bootable
To ensure that the system remains bootable, the GRUB boot blocks, the GRUB menu, and the boot archive must be up-to-date.
The GRUB boot blocks reside in the Solaris partition. If the boot blocks become corrupt, they should be reinstalled using the installgrub(1M) command. Note that installboot(1M) and fmthard(1M) cannot be used to write GRUB boot blocks.
The GRUB menu resides in /boot/grub/menu.lst (or /stubboot/boot/grub/menu.lst if a Solaris boot partition is used). The menu is maintained with the bootadm(1M) update-menu subcommand. Because GRUB names disks by the BIOS disk number, a change in BIOS boot device configuration may render GRUB menu entries invalid in some cases. Running bootadm update-menu will create the correct menu entry in such cases.
The boot archive must be updated as the root file system is modified. In cases of system failure (power failure or kernel panic) immediately following a kernel file update, the boot archive may be out of sync with the root file system. In such cases, the system/boot-archive service, managed through the Solaris Service Manager (see svcadm(1M) for example), will fail on the next reboot. Solaris will print a message telling the user that it is still possible to boot and clear the event that triggered the error, but it is safer to reboot the system, select the failsafe session, and update the boot archive while in the failsafe session.
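A sketch of that sequence (bootadm's -R flag points at an alternate root; svcadm(1M) clears the failed service afterwards):

# bootadm update-archive -R /a          # run from the failsafe session
# svcadm clear system/boot-archive      # run after rebooting to the normal entry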

Custom df (diskfree) column output in Solaris using nawk

Let's say you want to combine some features of "df -h" with "df -n" to show the filesystem type along with some other custom modifications to the output. This is where awk/nawk/gawk come in handy:

% df -g | nawk '{if (NR % 5 == 1) printf "%-22s", $1 ; if (NR % 5 == 4) printf "%-10s", "fstype " $1 "\n"; if (NR % 5 == 2) printf "%-30s",$1/2/1024/1024 " GB"; if (NR % 5 == 2) printf "%-30s", $4/2/1024/1024 " GB free "}'


/ 33.6627 GB 18.4351 GB free fstype ufs
/devices 0 GB 0 GB free fstype devfs
/system/contract 0 GB 0 GB free fstype ctfs
/proc 0 GB 0 GB free fstype proc
/etc/mnttab 0 GB 0 GB free fstype mntfs
/etc/svc/volatile 7.88214 GB 7.8813 GB free fstype tmpfs
/system/object 0 GB 0 GB free fstype objfs
/lib/libc.so.1 33.6627 GB 18.4351 GB free fstype ufs
/dev/fd 0 GB 0 GB free fstype fd
/tmp 7.88142 GB 7.8813 GB free fstype tmpfs
/var/run 7.88134 GB 7.8813 GB free fstype tmpfs
/export/home 74.4858 GB 1.87458 GB free fstype ufs
/storage 108.639 GB 66.9259 GB free fstype nfs

You can also add a comma (,) to the separators and redirect the output to a .csv file (you can open the comma-separated values table in Excel, OpenOffice, or any other spreadsheet application) :-).
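A sketch of that CSV variant (it assumes the same five-line df -g record layout used above):

% df -g | nawk '{ if (NR % 5 == 1) printf "%s,", $1 ;
                  if (NR % 5 == 2) printf "%s,%s,", $1/2/1024/1024, $4/2/1024/1024 ;
                  if (NR % 5 == 4) print $1 }' > df.csv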

Add a regular user account to your DB2 zone:

db2# mkdir -p /export/home/cmihai
db2# useradd -s /usr/bin/zsh -d /export/home/cmihai cmihai
db2# chown cmihai /export/home/cmihai
db2# passwd cmihai
New Password:
Re-enter new Password:
passwd: password successfully changed for cmihai
db2# su - cmihai
db2% cd /opt/IBM/db2/V9.1/bin

Check locale(1) and export LC_ALL=C if needed or db2 will complain:

db2% ./db2fs
couldn't set locale correctly

Make sure you read the install log in /tmp.

Here's a tip though: if you can, use the Graphical installer (ssh -X and run db2setup instead of db2_install).
All you need now is to add various tuning, limitations, ZFS quotas, etc.
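For instance, a quota on the zone's dataset is a one-liner (the dataset name and size here are assumptions):

# zfs set quota=20G rpool/export/zones/db2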

If you need to start over, there's always ZFS snapshots or db2_deinstall.

Either way, if you need to create an instance:

# ./db2icrt -s wse -u db2fenc1 db2inst1
Sun Microsystems Inc. SunOS 5.11 snv_90 January 2008
Sun Microsystems Inc. SunOS 5.11 snv_90 January 2008
DBI1070I Program db2icrt completed successfully.

You can now use db2 to create a database and connect to it.

# db2
db2 => CREATE DATABASE test
db2 => CONNECT TO test
Database Connection Information

Database server = DB2/SUN64 9.1.1
SQL authorization ID = DB2INST1
Local database alias = TEST

db2 => CREATE TABLE clients (name char(25), surname char(50))
DB20000I The SQL command completed successfully.
db2 => LIST TABLES

Table/View                      Schema          Type  Creation time
------------------------------- --------------- ----- --------------------------
CLIENTS                         DB2INST1        T     2008-06-11-05.39.58.167896

1 record(s) selected.

db2 => INSERT INTO clients VALUES ('Some','Guy')
DB20000I The SQL command completed successfully.
db2 => SELECT * FROM clients

NAME                      SURNAME
------------------------- --------------------------------------------------
Some                      Guy

1 record(s) selected.

Deploying IBM DB2 inside a Solaris 10 Container

1. Creating the ZFS filesystem:
# zfs create rpool/export/zones

2. Configuring the DB2 zone:
# zonecfg -z db2
db2: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:db2> create
zonecfg:db2> set zonepath=/export/zones/db2
zonecfg:db2> set autoboot=true
zonecfg:db2> add net
zonecfg:db2:net> set address=192.168.1.100/24
zonecfg:db2:net> set physical=iwk0
zonecfg:db2:net> end
zonecfg:db2> verify
zonecfg:db2> commit
zonecfg:db2> exit

3. Installing the DB2 zone:
# zoneadm -z db2 install
A ZFS file system has been created for this zone.
Preparing to install zone <db2>.
Creating list of files to copy from the global zone.
Copying <9648> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1319> packages on the zone.
Initialized <1319> packages on zone.
Zone is initialized.
Installation of these packages generated errors:
Installation of these packages generated warnings:
The file </export/zones/db2/root/var/sadm/system/logs/install_log> contains a log of the zone installation.

4. Listing the zones:
# zoneadm list -cv
  ID NAME     STATUS     PATH               BRAND    IP
   0 global   running    /                  native   shared
   - db2      installed  /export/zones/db2  native   shared

5. Booting the DB2 zone:
# zoneadm -z db2 boot

6. Logging into the zone:
# zlogin -C db2
[Connected to zone 'db2' console]

Configure the initial system (locale, etc).

7. Install IBM DB2 Database 9:
db2# gunzip db2_v9fp1_ese_solaris_x64.tar.gz
db2# tar xvf db2_v9fp1_ese_solaris_x64.tar
db2# cd ese/disk1/
db2# ./db2_install
Default directory for installation of products - /opt/IBM/db2/V9.1

***********************************************************
Do you want to choose a different directory to install [yes/no] ?
no

Specify one or more of the following keywords,
separated by spaces, to install DB2 products.

CLIENT
RTCL
ESE

Enter "help" to redisplay product names.

Enter "quit" to exit.

Mortal Kombat 4 on Solaris - Wine


Using Wine, DosBOX, DosEMU, GSNEX, GBA, ePSX and various other Windows, DOS and game console emulators you can get a fair amount of fun old games running on Solaris (like StarCraft, Mortal Kombat Series, Final Fantasy 1-8, etc). Not to mention the whole Doom, Quake 1,2,3 series using the open sourced engines.




Solaris ZFS to ZFS LiveUpgrade

Regular UFS to UFS LiveUpgrade used to take a while to create the boot environment, etc. Complicated :-).
As of Solaris Express Community Edition build 90, you can use LiveUpgrade with ZFS. You can also LU a UFS system to ZFS.
One of the benefits of ZFS root is the ZFS clone command (lucreate -n happens in a second):

# lucreate -n sxce91
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name .
Current boot environment is named .
Creating initial configuration for primary boot environment .
The device is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name PBE Boot Device .
Comparing source boot environment file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment .
Source boot environment is .
Creating boot environment .
Cloning file systems from boot environment to create boot environment .
Creating snapshot for on .
Creating clone for on .
Setting canmount=noauto for in zone on .
Creating snapshot for on .
Creating clone for on .
No entry for BE in GRUB menu
Population of boot environment successful.
Creation of boot environment successful.

Fixing Java WebConsole ZFS Administration on Solaris Express

Application Error
com.iplanet.jato.NavigationException: Exception encountered during forward
Root cause = [java.lang.IllegalArgumentException: No enum const class com.sun.zfs.common.model.AclInheritProperty$AclInherit.restricted]
Notes for application developers:

    * To prevent users from seeing this error message, override the onUncaughtException() method in the module servlet and take action specific to the application
    * To see a stack trace from this error, see the source for this page

Generated Sun Jun 22 14:22:44 EEST 2008

If this happens to you, you need to set aclinherit to passthrough:

# zfs set aclinherit=passthrough rpool

Now WebConsole ZFS Admin will work.

You also need to make sure the webconsole service is enabled before you can use it:

# svcs -a | grep webconsole
disabled 15:32:25 svc:/system/webconsole:console

# svcadm enable webconsole

# svcs -xv webconsole
svc:/system/webconsole:console (java web console)
State: online since Sun Jun 22 15:34:40 2008
See: man -M /usr/share/man -s 1M smcwebserver
See: /var/svc/log/system-webconsole:console.log
Impact: None.

Using AWK to convert UNIX passwords from HP-UX to Solaris

Converting password hashes from HP-UX 11.11 to Solaris is pretty simple if you are using UNIX crypt passwords (that is, if HP-UX isn't a Trusted System; if it is, it will use bigcrypt passwords of more than 8 characters, and converting those to Solaris UNIX crypt could be problematic).

Here's the gist of it:

On the HP-UX System, we create a test user:

    # useradd test
    # passwd test
    test

Now we convert the passwd file to generate passwd entries for Solaris:

        * # awk ' BEGIN { FS = ":" } { print $1":x:" $3 ":" $4 "::/export/home/" $1 ":/usr/bin/sh" }' /etc/passwd
        * test:x:107:20::/export/home/test:/usr/bin/sh


And we create the shadow file entries, including the password hash:

        * # awk ' BEGIN { FS = ":" } { print $1":"$2"::::::" }' /etc/passwd
        * test:lsDWJo7M.iAhY::::::

Just add them to the password file using /usr/ucb/vipw, edit the shadow file for consistency, and test. Be sure to back up the files and to test using a few users at first.

        * $ su test
        * Password:
        * $ id
        * uid=127(test) gid=120
        * $ whoami
        * test
        * $ echo $HOME
        * /export/home/test
        * $ echo $SHELL
        * /usr/bin/sh

    Mix with some shell scripting and mkdir's and you're set :-). Next time, use LDAP :P.

Posted by cmihai at 10:49 PM 1 comments

D-Light DTrace script for Sun Studio 12 in Solaris

Here's a pretty cool tool for developers, similar to the DTrace GUI from Xcode in OS X 10.5 Leopard (Instruments). It's part of Sun Studio 12.

 

Solaris Performance Monitoring

Performance monitoring tools on Solaris using split GNU screen windows :-).


Cell accelerator boards, NVIDIA GPU servers and HPC

This is pretty interesting considering your basic CPU does something like 30 GFLOPS (around 16 GFLOPS per POWER6 core, 10 GFLOPS for an Itanium core). A Cell board like this does 180 GFLOPS.
(Don't take this as a benchmark or anything. This is just some raw data.)

Some NVIDIA cards do something like 500 GFLOPS (technically the G80 has 128 fp32 ALUs @ 1350MHz with MADD - about 350 GFLOPS), an R600 is supposed to have around 500, and a Realizm 800 (dual Wildcat VPUs) about 700 GFLOPS :-). So yeah, with 16 or so of these cards used right, you could score yourself a place on the TOP500 SuperComputers list. "Hey, my 4 graphic stations can beat your 1000-node Xeon cluster!"

And this is no joke: since the GF8 series and the whole NVIDIA CUDA thing, NVIDIA has also started making... erm... servers.

The NVIDIA Tesla S870 GPU computing system peaks at something like 2 TFLOPS.

Meanwhile, one of those "low-powered MIPS64 CPUs" in the SiCortex does about 1 GFLOP :-). But they have clusters of up to 5832 of them.

PCI-E Cell accelerator board:

    * Cell BE processor at 2.8 GHz
    * More than 180 GFLOPS in PCI Express accelerator card
    * PCI Express x16 interface with raw data rate of 4 GB/s in each direction
    * Gigabit Ethernet interface
    * 1-GB XDR DRAM, 2 channels each, 512 MB
    * 4 GB DDR2, 2 channels each, 2 GB
    * Optional MultiCore Plus™ SDK software

Running Symantec Veritas Cluster Server on Windows Vista 64-bit

Getting Veritas Cluster Server Simulator (VCS Simulator) to install and run can be a bit of a pain. Here's the deal:

Start a command prompt as Administrator, and install it using:

msiexec /i vcs_simulator.msi

Once the software is installed, download MSVCR70.DLL and put it in the VCS directory (C:\Program Files (x86)\VERITAS\Cluster Manager\bin\).

Then run "Veritas VCS Simulator - Java Console" and "Veritas Cluster Manager - Java Console" as Administrator (Right Click - Run as Administrator) and unblock the Windows Firewall ports (allow exception).

Resource types in Solaris 10 configuration

* net - a network interface. As you remember, when adding such a resource, you have to specify a physically present network adapter card in your box, and the zone's network interface will be a virtual interface on this network adapter.
* device - any additional device. Using device name masks (for instance, /dev/pts*), you can allow a non-global zone to access any devices you have on your actual system.
* fs - a file system. You can grant access to a physical disk or any directory of your actual system to any non-global zone. You can specify a file system type along with mount options, which is very convenient.
* inherit-pkg-dir - a global zone root filesystem directory which is inherited by a non-global zone. By specifying a directory name, you're indicating that all the files from this directory of your actual system (global zone) will not be physically copied into the non-global zone, but will instead be inherited. The fact is, files from these directories will be accessible through a read-only loopback filesystem in your non-global zone (thanks, Dan!)
* attr - an attribute. With resources of this type you can create text comments for your zones – these comments might come in handy when you get back to reconfiguring your zone some time later.
* rctl - a zone-wide resource control. At this stage, there are only two parameters of this type - zone.cpu-shares and zone.max-lwps, but there will be more in the future. These parameters allow you to limit the CPU time given to a zone, and to limit the maximum number of lwp processes which can be created in a zone (see the sketch below).
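A sketch of adding one of these controls to a zone (the zone name and limit are assumptions):

# zonecfg -z myzone
zonecfg:myzone> add rctl
zonecfg:myzone:rctl> set name=zone.cpu-shares
zonecfg:myzone:rctl> add value (priv=privileged,limit=10,action=none)
zonecfg:myzone:rctl> end
zonecfg:myzone> commit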

Install Solaris 8 zone using flar-archive

solaris# zoneadm -z solaris8 install -u -a /export/solaris8.flar
Log File: /var/tmp/solaris8.install.13597.log
Source: /export/solaris8.flar
Installing: This may take several minutes…
Postprocessing: This may take several minutes…
WARNING: zone did not finish booting.
Result: Installation completed successfully.
Log File: /export/solaris8/root/var/log/solaris8.install.13597.log

In my case the Solaris 8 zone got stuck on sys-unconfig, and so I had to connect to the virtual console of the zone to help it move on:

Here’s how you connect to a zone’s console:

solaris# zlogin -C solaris8

That’s it! The rest was easy – just a few minutes of configuring the network parameters and DNS/NIS settings. Finally, I was able to ssh into the new zone and run uname:

solaris8# uname -a
SunOS solaris8 5.8 Generic_Virtual sun4u sparc SUNW,Sun-Fire-V490

I liked the Solaris 8 Migration Assistant very much. It’s an incredibly quick and easy way to have a whole bunch of Solaris 8 systems virtualized and running on one of the most advanced servers with the most advanced OS – Solaris 10u4.

Set up a Solaris 8 zone

Here’s how you do it:
solaris# zonecfg -z solaris8
solaris8: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:solaris8> create -t SUNWsolaris8
zonecfg:solaris8> set zonepath=/export/solaris8
zonecfg:solaris8> add net
zonecfg:solaris8:net> set address=172.21.7.155/24
zonecfg:solaris8:net> set physical=ce0
zonecfg:solaris8:net> end
zonecfg:solaris8> commit
zonecfg:solaris8> exit
Naturally, your IP and network device name will be different. As of now, our zone is fully configured, but not yet installed.

Solaris 8 Migration Assistant (Project Etude)

First of all, just a few words about the niche for this product. Many companies are rather conservative about their Solaris upgrades. Plenty of systems are still running Solaris 8, if not something older. Quite often this is dictated by third-party software dependencies – products which were bought and configured for Solaris 8 and are now so tightly integrated that there is no easy way to migrate them to Solaris 10. Such systems are doomed to a slow but very expensive death. Expensive, because the cost of hardware support for servers capable of running Solaris 8 rises year after year.
That’s where the Solaris 8 zones come in. It’s very easy, really: you create a flar copy of your existing physical server under Solaris 8, then create a Solaris 8 zone, import your flar archive and get a virtual copy of your Solaris 8 environment, with all your processes, programs and startup scripts.
To make things easier, it’s even possible to configure the hostid in a Solaris 8 zone to match that of the physical Solaris 8 system; this way no program running in the zone will even guess that it’s been virtualized.
Who knows, maybe I’ll tell you more about this technology some other day, but for now – just the simplest list of actions and commands for your S8MA proof of concept.
Preparation: Solaris 10u4 server and S8MA packages
1. Find and prepare a sparc box with Solaris 10u4. It is important to have the latest Solaris 10 update. Preparations are usually limited to applying a kernel patch, 127111-01 in my case.
2. Download the Solaris 8 Migration Assistant (current version is 1.0) from this location: Solaris 8 Migration Assistant. The 3 packages in the archive are dead easy to install using standard pkgadd.
Here are the packages you’ll get:
SUNWs8brandr Solaris 8 Migration Assistant: solaris8 brand support (Root)
SUNWs8brandu Solaris 8 Migration Assistant: solaris8 brand support (Usr)
SUNWs8p2v Solaris 8 p2v Tool
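
One step not covered above is creating the flar archive itself on the source Solaris 8 host. A minimal sketch (the archive name and path are just examples; see flarcreate(1M) for the full set of options):

solaris8-src# flarcreate -n solaris8 -S /export/solaris8.flar

The resulting archive is what you feed to the zoneadm install command shown earlier.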

Solaris 10 patch error codes

Exit code    Meaning

0 No error
1 Usage error
2 Attempt to apply a patch that's already been applied
3 Effective UID is not root
4 Attempt to save original files failed
5 pkgadd failed
6 Patch is obsoleted
7 Invalid package directory
8 Attempting to patch a package that is not installed
9 Cannot access /usr/sbin/pkgadd (client problem)
10 Package validation errors
11 Error adding patch to root template
12 Patch script terminated due to signal
13 Symbolic link included in patch
14 NOT USED
15 The prepatch script had a return code other than 0.
16 The postpatch script had a return code other than 0.
17 Mismatch of the -d option between a previous patch install and the current one.
18 Not enough space in the file systems that are targets of the patch.
19 $SOFTINFO/INST_RELEASE file not found
20 A direct instance patch was required but not found
21 The required patches have not been installed on the manager
22 A progressive instance patch was required but not found
23 A restricted patch is already applied to the package
24 An incompatible patch is applied
25 A required patch is not applied
26 The user specified backout data can't be found
27 The relative directory supplied can't be found
28 A pkginfo file is corrupt or missing
29 Bad patch ID format
30 Dryrun failure(s)
31 Path given for -C option is invalid
32 Must be running Solaris 2.6 or greater
33 Bad formatted patch file or patch file not found
34 Incorrect patch spool directory
35 Later revision already installed
36 Cannot create safe temporary directory
37 Illegal backout directory specified
38 A prepatch, prePatch or a postpatch script could not be executed
39 A compressed patch was unable to be decompressed
40 Error downloading a patch
41 Error verifying signed patch
42 Error unable to retrieve patch information from SQL DB.
43 Error unable to update the SQL DB.
44 Lock file not available
45 Unable to copy patch data to partial spool directory.
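
To make use of these codes in practice, check the shell’s exit status right after running patchadd – a quick sketch (the patch ID and spool path are just examples):

solaris# patchadd /var/spool/patch/127111-01
solaris# echo $?
2

An exit code of 2 here would tell you the patch has already been applied.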

How to show future timestamps in Solaris

And now comes the moment to reveal the little trick I was talking about. Even though the standard /bin/ls command won't show you the future timestamps, you can still check them using the /usr/ucb/ls version of the ls command. The syntax is very similar, but you can also see the future timestamps:

solaris$ /usr/ucb/ls -al *myserver1*
-rw-r--r-- 1 bbuser 48 Jan 9 10:59 np_greys@solaris-server.com_myserver1.conn
-rw-r--r-- 1 bbuser 50 Jan 9 10:41 np_greys@solaris-server.com_myserver1.cpu
-rw-r--r-- 1 bbuser 51 Jan 9 10:41 np_greys@solaris-server.com_myserver1.disk
-rw-r--r-- 1 bbuser 53 Jan 9 10:36 np_greys@solaris-server.com_myserver1.memory
-rw-r--r-- 1 bbuser 51 Jan 9 10:41 np_greys@solaris-server.com_myserver1.msgs
-rw-r--r-- 1 bbuser 52 Jan 9 11:16 np_greys@solaris-server.com_myserver1.procs

Looking at them, you can see that BigBrother simply set the modification time for these files to be 45 minutes into the future.
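
Incidentally, you can reproduce such future timestamps yourself: touch can set an arbitrary modification time, including a future one. A quick sketch with a made-up filename:

solaris$ touch -t 200901091200 np_test

The -t argument takes a [[CC]YY]MMDDhhmm[.SS] timestamp, so the example above sets the mtime to Jan 9 2009, 12:00.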

That's it for today – hope you liked this trick!

How to see future file timestamps in Solaris

I know I've spoken about timestamps already, but I'd like to further expand the topic.

While there's a great GNU stat command in Linux systems, there's no such thing in Solaris by default, and so you usually depend on ls command with various options to look at file's creation, modification or access time.

The standard /bin/ls command in Solaris doesn't always show you the full timestamp – usually when it's a time too far in the past or a bit into the future – so today I'm going to show you a trick to work around that and still confirm such timestamps for any file.
Standard ls command in Solaris doesn't always show full timestamps
Here's an example: BigBrother monitoring suite creates np_ files for internal tracking of times to send out email notifications. It deliberately alters the timestamps so that they're set for a future date – that's how it tracks the time elapsed between the event and the next notification about it.
However, not all of these np_ files are shown with their full timestamps, some just show the date, with no time:
solaris$ ls -l *myserver1*
-rw-r--r-- 1 bbuser bbgroup 48 Jan 9 2009 np_greys@solaris-server.com_myserver1.conn
-rw-r--r-- 1 bbuser bbgroup 50 Jan 9 10:41 np_greys@solaris-server.com_myserver1.cpu
-rw-r--r-- 1 bbuser bbgroup 51 Jan 9 10:41 np_greys@solaris-server.com_myserver1.disk
-rw-r--r-- 1 bbuser bbgroup 53 Jan 9 10:36 np_greys@solaris-server.com_myserver1.memory
-rw-r--r-- 1 bbuser bbgroup 51 Jan 9 10:41 np_greys@solaris-server.com_myserver1.msgs
-rw-r--r-- 1 bbuser bbgroup 52 Jan 9 2009 np_greys@solaris-server.com_myserver1.procs

How To Confirm if Your CPU is 32bit or 64bit

Obtaining CPU information from /proc/cpuinfo

Most Linux distros will have the special /proc/cpuinfo file which contains a textual description of all the features your processors have. This is a very useful file – depending on your task it may help you identify any features of your processors, as well as confirm the overall number of CPUs your system has installed.

Most commonly, the following information is obtained from /proc/cpuinfo:

* processor model name and type
* processor speed in MHz
* processor cache size
* instruction flags supported by CPU

Here's how the typical output will look:

processor : 0
vendor_id : GenuineIntel
cpu family : 15
model : 4
model name : Intel(R) Xeon(TM) CPU 3.20GHz
stepping : 3
cpu MHz : 3192.320
cache size : 2048 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 1
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts
acpi mmx fxsr sse sse2 ss ht tm syscall nx lm pni monitor ds_cpl cid cx16 xtpr
bogomips : 6388.78
clflush size : 64
cache_alignment : 128
address sizes : 36 bits physical, 48 bits virtual
power management:

The same block of information will be shown for each CPU visible to your system. There will be 2 processor instances for each physical CPU if hyper-threading is enabled, and 2 or 4 processor entries for each physical CPU on dual- and quad-core systems.
How to confirm the 64bit capability of your CPU in Linux

Based on the /proc/cpuinfo file, it is quite easy to confirm whether or not your CPU is 64bit-capable. All you have to do is look at the flags field, which tells you what instruction sets your CPU supports.

All the CPUs on your system will have the same type and therefore support the same instruction sets, that's why in this example the grep command returns 4 similar lines – for the 4 CPU instances found on my system:
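
For example, you can grep for the lm ("long mode") flag, which is what indicates a 64bit-capable CPU (the flags line below is simply the sample output from above, repeated once per processor instance):

linux$ grep -w lm /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm pni monitor ds_cpl cid cx16 xtpr

If the lm flag is missing from the flags line, the CPU is 32bit only.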

How to Confirm Disks Capacity in Linux

"show disk size in Unix" is a very popular search query visitors use to arrive at my Unix Tutorial pages. Since I never addressed the question of confirming the number of hard drives available on your system, or the task of finding out a disk's capacity, I'd like to document a quick and easy way of doing just that.

I hope that when someone looks for a way to show disk size, what's really expected is a command to help confirm the capacity of a disk in gigabytes.
Using fdisk command in Linux

One of the easiest ways to learn a lot about hard drives installed on your Linux system is to use the fdisk command:

suse# fdisk -l

Disk /dev/sda: 145.4 GB, 145492017152 bytes
255 heads, 63 sectors/track, 17688 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 1 262 2104483+ 82 Linux swap / Solaris
/dev/sda2 * 263 17688 139974345 83 Linux

As you can see, there are two sections in the output provided: disk information (capacity and geometry) and disk layout (partitions). The same pattern is repeated if you have more than one disk installed.

What you should look for is the lines starting with the word "Disk": they specify the device name for each drive and also provide its capacity in gigabytes. Thus, a time saver is to grep the necessary information from the command above, this way:

suse# fdisk -l | grep Disk
Disk /dev/sda: 145.4 GB, 145492017152 bytes

On a system with multiple disks, the output will look more useful:

redhat# fdisk -l | grep Disk
Disk /dev/sda: 21.4 GB, 21474836480 bytes
Disk /dev/sdb: 4294 MB, 4294967296 bytes

That's it – a very simple way to determine the number of disks in your system while also confirming the capacity available for your needs. The fdisk command is actually a very powerful disk management tool which allows you to manage partitions – create and delete them, or modify the type of each partition. I will be sure to revisit this command some other time, because the usage above doesn't do this wonderful Unix command any justice.

Shared IP configuration for non-global Solaris zones

By default, non-global zones will be configured with a shared IP functionality. What this means is that IP layer configuration and state is shared between the zone you’re creating and the global zone. This usually implies both zones being on the same IP subnet for each given NIC.

Shared IP mode is defined by the following statement in zone configuration:

set ip-type=shared

Here are all the commands needed to set it explicitly for a zone called s10zone in my example:

solaris# zonecfg -z s10zone
zonecfg:s10zone> set ip-type=shared
zonecfg:s10zone> verify
zonecfg:s10zone> commit
zonecfg:s10zone> exit
solaris#
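
Note that in shared IP mode the zone’s actual address still comes from a net resource, just like in the earlier zonecfg examples – a quick sketch (the address and NIC name are examples):

solaris# zonecfg -z s10zone
zonecfg:s10zone> add net
zonecfg:s10zone:net> set address=172.21.7.160/24
zonecfg:s10zone:net> set physical=ce0
zonecfg:s10zone:net> end
zonecfg:s10zone> commit
zonecfg:s10zone> exit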

While I’ve deployed quite a few zones before, it was only recently that I learned what sharing the IP layer configuration means in practical terms: no IP routing within the non-global zone. So if for some reason you want your non-global zone to use a different IP route for connecting to one of the available networks, you really can’t do it in shared IP mode, because your non-global zone can only inherit the routing rules of the global zone.

You still have an option of assigning different IP addresses to different virtual interfaces of a non-global zone, but unless their routing is catered for by the global zone, it won’t be of much use.
Exclusive IP configuration for non-global Solaris zones

Configured using this statement in zone configuration:

set ip-type=exclusive

… this mode implies that a given non-global zone will have exclusive access to one of the NICs on your system.

While for me the most important aspect of such exclusivity was the possibility to configure zone-specific routing, there’s obviously much more offered by this mode:

* DHCPv4 and IPv6 stateless address autoconfiguration
* IP Filter, including network address translation (NAT) functionality
* IP Network Multipathing (IPMP)
* IP routing
* ndd for setting TCP/UDP/SCTP as well as IP/ARP-level knobs
* IP security (IPsec) and IKE, which automates the provision of authenticated keying material for IPsec security association
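
In zonecfg terms, the switch looks something like this (a sketch – bge1 stands in for whatever NIC you dedicate to the zone; note that with exclusive IP you assign only the physical interface, and the IP address is configured from inside the zone):

solaris# zonecfg -z s10zone
zonecfg:s10zone> set ip-type=exclusive
zonecfg:s10zone> add net
zonecfg:s10zone:net> set physical=bge1
zonecfg:s10zone:net> end
zonecfg:s10zone> commit
zonecfg:s10zone> exit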

So here it is – another design lesson for you – make sure you know what kind of networking your zones will need.

Tuesday, December 15, 2009

Oracle 11g installation on Sun Solaris 10

1) Create groups for the Oracle account
#groupadd oinstall
#groupadd dba
#groupadd oper
2) Create the Oracle default home directory
# mkdir /export/home
# mkdir /export/home/oracle
3) Create the Oracle user
# useradd -g oinstall -G dba -d /export/home/oracle -s /usr/bin/bash oracle
# chown oracle:oinstall /export/home/oracle
4) Create a project for Oracle to set the kernel parameters
On Solaris 10, you can use projects to configure the kernel parameters instead of the /etc/system file. This can be done as follows:
# projadd -U oracle -K "project.max-shm-memory=(priv,4g,deny)" oracle
# projmod -sK "project.max-sem-nsems=(priv,256,deny)" oracle
# projmod -sK "project.max-sem-ids=(priv,100,deny)" oracle
# projmod -sK "project.max-shm-ids=(priv,100,deny)" oracle
There are many more ways of creating project entries, such as group.group-name or user.user-name. You can verify the resulting settings for the current shell's task with prctl:

# prctl -n project.max-sem-ids -i task `ps -o taskid= -p $$`
5) Create .bash_profile for the Oracle user
#Oracle Environment Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_BASE=/u03/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1; export ORACLE_HOME
ORACLE_SID=TESTDB11G; export ORACLE_SID
PATH=$PATH:/usr/local/bin:/usr/ccs/bin:/usr/sfw/bin:$ORACLE_HOME/bin
Now set the DISPLAY to an X-windowing enabled system:
$ export DISPLAY=192.168.4.47:0.0
Also allow the host to accept the connection by running this on the X server side:
$ xhost +
Oracle Software Installation
Go to the Oracle software dump location and run runInstaller as the oracle user:
$ ./runInstaller
This will open the Oracle Universal Installer (OUI) screen. If the Oracle Universal Installer is not displayed, ensure the DISPLAY variable is set correctly. Select the "Software only" option and install the software. If any of the prerequisites are not met, the installation will fail, and you will be required to make the necessary changes before proceeding.
Database Creation
We will be using ASM for the database files, and for this we need to perform some configuration.
1) Prepare the raw device for use as an ASM disk:
# ls -l
total 0
crw------- 1 root root 125, 1 Jun 20 10:39 1
The disk should be owned by the oracle user and should have its permissions set to 660:
# chown oracle:dba 1
# chmod 660 1

# ls -ltr
total 0
crw-rw---- 1 oracle dba 125, 1 Jun 20 10:39 1
2) Configure the CSS service
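On a stand-alone server this is typically done by running the localconfig script as root – a hedged sketch, so check the documentation for your exact release:

# $ORACLE_HOME/bin/localconfig add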

3) Configure the ASM instance
a) Go to $ORACLE_HOME/bin
b) Execute dbca from this directory (ensure your environment and DISPLAY are properly set)
$ ./dbca
c) Select the Configure ASM Instance option. This will create the ASM instance for you. After that, you can create diskgroups using the GUI, or use sqlplus to do the same.
4) Now continue creating the database normally, and enter the diskgroup name after selecting Oracle Managed Files as the database file location.
While you navigate through the GUI screens, you will be prompted to specify security settings:
- Keep the enhanced 11g security settings (recommended)
- Revert to pre-11g settings
Select the 11g settings, which will enable auditing by default and also enable case-sensitive passwords with a stronger password hashing algorithm.
Note: I have not discussed the GUI screens for DBCA and OUI in this article; these are pretty much standard screens. If you need more information about them, refer to the Oracle documentation.

Oracle 10g installation on Sun Solaris 10

# /usr/sbin/prtconf | grep "Memory size" [check RAM size]
# /usr/sbin/swap -s [check swap]
# df -k /tmp [check /tmp size (>400MB)]
# uname -r [check Solaris version]
# pkginfo -i SUNWarc SUNWbtool SUNWhea SUNWlibm SUNWlibms SUNWsprot SUNWsprox SUNWtoo SUNWi1of SUNWi1cs SUNWi15cs SUNWxwfnt
# cat /etc/nsswitch.conf | grep hosts
# hostname
# domainname
RUN INSTALL:
—————-
A. Create the "dba" group, the Oracle inventory group "oinstall", and the "oracle" user
# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd dba
{Determine whether the oracle user exists:
# id -a oracle
{if it exists, the output should look like this:
uid=440(oracle) gid=200(oinstall) groups=201(dba),202(oper)
{create the oracle user:
# useradd -d /export/home/oracle -g dba -G oinstall -m -s /bin/ksh oracle
# mkdir /export/home/oracle
# chown oracle:dba /export/home/oracle
{set the password:
# passwd -r files oracle
{determine whether the nobody user exists:
# id nobody
# /usr/sbin/useradd nobody >>run this if it does not exist
B. EDIT FILE /export/home/oracle/.profile
————————————–
umask 022
TMP=/tmp
TMPDIR=$TMP
DISPLAY=localhost:0.0
export TMP TMPDIR DISPLAY
ORACLE_BASE=/u01/app/oracle [replace with your Oracle base directory]
ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1 [replace with your Oracle home directory]
ORACLE_SID=jktdb [replace with your database SID]
PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_BASE ORACLE_HOME ORACLE_SID PATH
C. Configure Kernel Parameter
—————————–
Note: Do not follow the official installation instructions blindly – they contain misleading statements and errors of fact!
#projadd oracle [This command will create a new 'resource project']
edit the /etc/user_attr file:
adm::::profiles=Log Management
lp::::profiles=Printer Management
root::::auths=solaris.*,solaris.grant;profiles=Web Console Management,All;lock_after_retries=no
oracle::::project=oracle [add this line]
then:
# su - oracle
$ id -p
$ prctl -n project.max-shm-memory -i project oracle
The display will look like this:
project: 100: oracle
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
privileged 126MB - deny -
system 16.0EB max deny -
Now open a second terminal, leaving the oracle user still connected in the original one. Then, as root in the new terminal, you can issue this command:
# prctl -n project.max-shm-memory -v 4gb -r -i project oracle [raise the max shared memory to 4GB]
As soon as you’ve issued that command, switch back to the oracle user’s session and re-issue the earlier command:
$ prctl -n project.max-shm-memory -i project oracle
Note:
# prctl -n project.max-shm-memory -v 4gb -r -i project oracle [this setting will be lost after reboot]
To set it permanently, run this: # projmod -s -K "project.max-shm-memory=(priv,4gb,deny)" oracle
D. Performing the Oracle Installation
————————————-
# su - oracle
$ export DISPLAY=localhost:0.0
$ xhost + >>run this if you install from a remote PC
$ cd /export/home/database/ [the source was unzipped here]
$ ./runInstaller
FOR SOLARIS SPARC:
====================
$ gunzip ship_rel10_sol64_db.cpio.gz
$ cpio -idm < ship_rel10_sol64_db.cpio
$ ./runInstaller
If you find insufficient swap space on your disk, create a folder under /, then run these commands:
$ TMP=/directory
$ TMPDIR=/directory
$ export TMP TMPDIR
Follow the screens >> NEXT >> NEXT
Last, run these scripts as the root user:
—————————-
/u01/app/oracle/oraInventory/orainstRoot.sh
/u01/app/oracle/product/10.2.0/db_1/root.sh
Create db:
———-
orc1
jktdb
E. On-going Administration
——————————–
Finally, it’s time to get the web-based Enterprise Manager database administration tool up and running.
Since we’re using 10g Release 2, you should be able to launch a browser (Launch -> Web Browser) and simply navigate to http://localhost:1158/em
If you do not know the correct port number to use, look for the Enterprise Manager Console HTTP port line in the $ORACLE_HOME/install/portlist.ini file. You should then be able to log on as SYS with the password you supplied on the first screen of the Oracle installation wizard. In fact, getting a meaningful result at this point relies on three things having been performed successfully:
1. starting a listener (lsnrctl start)
2. opening the database (sqlplus / as sysdba then startup)
3. starting the Enterprise Manager agent (emctl start dbconsole)
F. Automating Database Startup
———————————————–
edit the /var/opt/oracle/oratab file – databases whose entries end with ‘Y’ will be started automatically (see the example entry below)
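For instance, an oratab entry for the jktdb database used earlier would look like this (the format is SID:ORACLE_HOME:Y|N, where the trailing Y tells dbstart to start it):

jktdb:/u01/app/oracle/product/10.2.0/db_1:Y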
Create file “/etc/init.d/dbora”
——-
#!/bin/sh
ORA_HOME=/u01/app/oracle/product/10.2.0/db_1
ORA_OWNER=oracle
if [ ! -f $ORA_HOME/bin/dbstart ]
then
echo "Oracle startup: cannot start"
exit
fi
case "$1" in
'start')
su - $ORA_OWNER -c "$ORA_HOME/bin/dbstart"
;;
'stop')
su - $ORA_OWNER -c "$ORA_HOME/bin/dbshut"
;;
esac
———
#chmod 777 /etc/init.d/dbora
#/etc/init.d/dbora stop
To integrate the dbora file into the standard Solaris startup and shutdown process (S99dbora runs on entering run level 2, K01dbora on shutdown):
————————————————————————
#ln -s /etc/init.d/dbora /etc/rc0.d/K01dbora
#ln -s /etc/init.d/dbora /etc/rc2.d/S99dbora
If you get this error:
———————–
ORACLE_HOME_LISTNER is not SET, unable to auto-stop Oracle Net Listener
edit the "dbstart" & "dbshut" files, find the line ORACLE_HOME_LISTNER=$1
and change it to ORACLE_HOME_LISTNER=/u01/app/oracle/product/10.2.0/db_1
RECOMMENDED DIRECTORY STRUCTURE:
———————————————-
[Oracle Base Directory:]
/u01/app/oracle
/u01/app/orauser
/opt/oracle/app/oracle
[Oracle Inventory Directory:]
ORACLE_BASE/oraInventory
[Oracle Home Directory:]
ORACLE_BASE/product/10.2.0/db_1
[Identify an existing oracle base directory:]
#more /var/opt/oracle/oraInst.loc
[the output should be:]
inventory_loc=/u01/app/oracle/oraInventory
inst_group=oinstall
# more /var/opt/oracle/oratab
*:/u03/app/oracle/product/10.2.0/db_1:N
*:/opt/orauser/infra_904:N
*:/oracle/9.2.0:N
COMMON INSTALLATION ERROR:
===========================
Unable to convert from "UTF-8" to "646" for NLS!
Solution: Install SUNWuiu8 package.
useradd error:
———————
UX: useradd: ERROR: Inconsistent password files. See pwconv(1M)
This is because the /etc/passwd and /etc/shadow files are out of synchronization on your machine. [CSCdi74894]
To fix this, run the pwconv command, and then rerun useradd.
try to run:
wc -l /etc/passwd /etc/shadow
————–
ERROR Checking monitor: must be configured to display at least 256 colors >>> Could not execute auto check for
display colors using command /usr/openwin/bin/xdpyinfo. Check if the DISPLAY variable is set. Failed <<<<
Some requirement checks failed. You must fulfill these requirements before continuing with the installation, at which time they will be rechecked.
Solution(s):
1. Install SUNWxwplt package
2. Set DISPLAY variable
3. Execute xhost + on target (set in DISPLAY) computer
———————————————————-
Exception in thread “main” java.lang.UnsatisfiedLinkError:
… libmawt.so: ld.so.1: java: fatal: libXm.so.4: open failed: No such file or directory
Solution: Install the SUNWmfrun package.
—————————————————————————————————-
Can’t load ‘/usr/perl5/5.8.4/lib/i86pc-solaris-64int/auto/Sun/Solaris/Project/Project.so’ for module
Sun::Solaris::Project: ld.so.1: perl: fatal: libpool.so.1: open failed: No such file or directory at
/usr/perl5/5.8.4/lib/i86pc-solaris-64int/DynaLoader.pm line 230. at /usr/sbin/projadd line 19 Compilation
failed in require at /usr/sbin/projadd line 19. BEGIN failed–compilation aborted at /usr/sbin/projadd line 19.
Solution: Install the SUNWpool SUNWpoolr packages.
———————————————————————–
bash-3.00$ /u01/app/oracle/product/10.2.0/db_1/bin/./emctl start dbconsole
Exception in getting local host
java.net.UnknownHostException: -a: -a
at java.net.InetAddress.getLocalHost(InetAddress.java:1191)
at oracle.sysman.emSDK.conf.TargetInstaller.getLocalHost(TargetInstaller.java:4977)
at oracle.sysman.emSDK.conf.TargetInstaller.main(TargetInstaller.java:3758)
Exception in getting local host
Solution: check the server hostname and /etc/hosts
————————————————————————-
UNINSTALL ORACLE 10G:
———————
1. remove all databases by running $dbca
2. stop any running Oracle processes:
Database Control : $ORACLE_HOME/bin/emctl stop dbconsole
Oracle Net listener : $ORACLE_HOME/bin/lsnrctl stop
iSQL*Plus : $ORACLE_HOME/bin/isqlplusctl stop
Ultra Search : $ORACLE_HOME/bin/searchctl stop
3. Start the Oracle Universal Installer:
$ORACLE_HOME/oui/bin/runInstaller
4. In the Welcome window, click Deinstall Products.
5. In the Inventory screen, select the Oracle home and the products that you want to remove,