Monday, December 14, 2009

Sun Cluster 3.2 has the following features and limitations:

Support for 2-16 nodes.
Global device capability--devices can be shared across the cluster.
Global file system --allows a file system to be accessed simultaneously by all cluster nodes.
Tight integration with Solaris--the cluster framework services are implemented in the kernel.
Application agent support.
Tight integration with zones.
Each node must run the same revision and update of the Solaris OS.
Two node clusters must have at least one quorum device (see the example after this list).
Each cluster needs at least two separate private networks. (Supported hardware, such as ce and bge, may use tagged VLANs to run private and public networks on the same physical connection.)
Each node's boot disk should include a 500M partition mounted at /globaldevices.
Attached storage must be multiply connected to the nodes.
ZFS is a supported file system and volume manager. Veritas Volume Manager (VxVM) and Solaris Volume Manager (SVM) are also supported volume managers.
Veritas multipathing (vxdmp) is not supported. Since vxdmp must be enabled for current VxVM versions, it must be used in conjunction with MPxIO or another similar solution, such as EMC's PowerPath.
SMF services can be integrated into the cluster, and all framework daemons are defined as SMF services.
PCI and SBus based systems cannot be mixed in the same cluster.
Boot devices cannot be on a disk that is shared with other cluster nodes. Doing this may lead to a locked-up cluster due to data fencing.
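For example, a hedged sketch of adding and checking a quorum device with the Sun Cluster 3.2 object-based commands (the DID device name d4 is only an example; use a shared device from your own configuration):

# clquorum add d4        # register shared DID device d4 as a quorum device
# clquorum list -v       # list configured quorum devices
# clquorum status        # show quorum votes present and needed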

The overall health of the cluster may be monitored using the cluster status or scstat -v commands. Other useful options include:

scstat -g: Resource group status
scstat -D: Device group status
scstat -W: Heartbeat status
scstat -i: IPMP status
scstat -n: Node status

Failover applications (also known as "cluster-unaware" applications in the Sun Cluster documentation) are controlled by rgmd (the resource group manager daemon). Each application has a data service agent, which is the way that the cluster controls application startups, shutdowns, and monitoring. Each application is typically paired with an IP address, which will follow the application to the new node when a failover occurs.
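As a hedged sketch of how such a failover group is typically built with the Sun Cluster 3.2 commands (the group name app-rg, logical hostname app-lh, resource name app-hasp-rs, and mount point /app are illustrative; the logical hostname must resolve via /etc/hosts or your name service, and the data service resource type will differ per application):

# clresourcegroup create app-rg                      # empty failover resource group
# clreslogicalhostname create -g app-rg app-lh       # IP address that follows the group
# clresourcetype register SUNW.HAStoragePlus         # register the storage resource type
# clresource create -g app-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/app app-hasp-rs
# clresourcegroup online -M app-rg                   # bring the group online with monitoring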

"Scalable" applications are able to run on several nodes concurrently. The clustering software provides load balancing and makes a single service IP address available for outside entities to query the application.

"Cluster aware" applications take this one step further, and have cluster awareness programmed into the application. Oracle RAC is a good example of such an application.

All the nodes in the cluster may be shut down with cluster shutdown -y -g0. To boot a node outside of the cluster (for troubleshooting or recovery operations), run boot -x.

clsetup is a menu-based utility that can be used to perform a broad variety of configuration tasks, including configuration of resources and resource groups.

Cluster Configuration
The cluster's configuration information is stored in global files known as the "cluster configuration repository" (CCR). The cluster framework files in /etc/cluster/ccr should not be edited manually; they should be managed via the administrative commands.

The cluster show command displays the cluster configuration in a nicely-formatted report.

The CCR contains:

Names of the cluster and the nodes.
The configuration of the cluster transport.
Device group configuration.
Nodes that can master each device group.
NAS device information (if relevant).
Data service parameter values and callback method paths.
Disk ID (DID) configuration.
Cluster status.

Troubleshooting Hard Drive Connectivity

Disk drive connectivity problems on Solaris 2.x systems can be caused by software, hardware, or PROM configuration problems.
Software Problems
New devices may require that the appropriate /dev and /devices files be created. This can be done through use of the drvconfig and disks commands, but it is usually done by performing a boot -r from the ok> prompt.
Once the system is back, the root user should be able to run format and see the disk listed as available. The disk can then be partitioned and labelled with format and have filesystems created with newfs or mkfs, as appropriate.
The presence of the appropriate /dev and /devices files can be verified by running the commands ls -lL /dev/dsk and ls -lL /dev/rdsk and making sure that they are block and character special files respectively, with major numbers depending on the driver used. (See the ls man page if you are not sure what this means.)
Files that can cause problems with hard drive connectivity include:
/dev/dsk/c#t#d#s# or /dev/rdsk/c#t#d#s# and related /devices files
/etc/name_to_major
/etc/minor_perm
Problems with the /dev and /devices files can be corrected directly by removing the offending files and recreating them, either directly with mknod and ln -s or indirectly with drvconfig, disks, or boot -r (as appropriate).
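As a hedged example of the indirect approach on a live system (devfsadm supersedes drvconfig and disks on Solaris 7 and later; the target c1t3d0 is hypothetical):

# drvconfig ; disks              # older releases: rebuild /devices, then the /dev links
# devfsadm -c disk               # Solaris 7+: one command for disk device nodes
# ls -lL /dev/dsk/c1t3d0s2 /dev/rdsk/c1t3d0s2    # confirm block and character special files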
Hardware Problems
The most common sources of hard drive connectivity problems (once the device files are built) are loose cables and terminators. Check these first before proceeding.
The system SCSI buses can be probed at the ok> prompt. To set this up, perform the following:
ok> setenv auto-boot? false
ok> reset
ok> probe-scsi-all
(after output)
ok> setenv auto-boot? true
(if appropriate)
This will give a hardware mapping of all SCSI devices on the system. If the hard drive in question does not appear, you have either a hardware problem or a PROM search path problem. To check for the PROM search path problem, run the following:
ok> printenv
Look for the pcia-probe-list or sbus-probe-default parameters and make sure that they are set to the default for your system.
Some additional hardware diagnostics are available at the PROM monitor (ok>) prompt. Additional information may come from navigating the PROM device tree at the ok> prompt.

Solaris Fault Management

The Solaris Fault Management Facility is designed to be integrated into the Service Management Facility to provide a self-healing capability to Solaris 10 systems.

The fmd daemon is responsible for monitoring several aspects of system health.

The fmadm config command shows the current configuration for fmd.

The Fault Manager logs can be viewed with fmdump -v and fmdump -e -v.

fmadm faulty will list any devices flagged as faulty.

fmstat shows statistics gathered by fmd.

Fault Management
With Solaris 10, Sun has implemented a daemon, fmd, to track and react to faults. In addition to sending traditional syslog messages, the system sends binary telemetry events to fmd for correlation and analysis. Solaris 10 implements default fault management operations for several pieces of hardware in Sparc systems, including CPU, memory, and I/O bus events. Similar capabilities are being implemented for x64 systems.

Once the problem is defined, failing components may be offlined automatically without a system crash, or other corrective action may be taken by fmd. If a service dies as a result of the fault, the Service Management Facility (SMF) will attempt to restart it and any dependent processes.

The Fault Management Facility reports error messages in a well-defined and explicit format. Each error code is uniquely specified by a Universal Unique Identifier (UUID) related to a document on the Sun web site at http://www.sun.com/msg/.

Resources are uniquely identified by a Fault Managed Resource Identifier (FMRI). Each Field Replaceable Unit (FRU) has its own FMRI. FMRIs are associated with one of the following conditions:

ok: Present and available for use.
unknown: Not present or not usable, perhaps because it has been offlined or unconfigured.
degraded: Present and usable, but one or more problems have been identified.
faulted: Present but not usable; unrecoverable problems have been diagnosed and the resource has been disabled to prevent damage to the system.

The fmdump -V -u eventid command can be used to pull information on the type and location of the event. (The eventid is included in the text of the error message provided to syslog.) The -e option can be used to pull error log information rather than fault log information.

Statistical information on the performance of fmd can be viewed via the fmstat command. In particular, fmstat -m modulename provides information for a given module.
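A typical investigation might therefore look like the following sketch (the UUID and module name are placeholders; substitute the values from your own syslog message and fmstat output):

# fmadm faulty                                        # resources currently flagged as faulty
# fmdump -V -u 7b1a4c2e-9d3f-4e1a-b2c5-0f6a7d8e9b10   # full fault report for one event
# fmdump -e -v                                        # raw error telemetry behind the fault
# fmstat -m cpumem-retire                             # statistics for a single fmd module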

Solaris 10 Beta Feature Summary:

Storage Technologies

NFSv4 - The NFSv4 protocol is supported for both client and server, providing performance and functionality enhancements over NFSv3.

ZFS - (Zettabyte File System) A new storage system which replaces traditional file systems and volume managers in order to provide truly logical filesystems on top of any arrangement of storage devices.

Multipath I/O Boot - Allows booting from MPxIO controlled devices (such as T3 storage arrays).

Solaris Volume Manager - Further ease-of-use enhancements, such as a new tool for volume creation.

Multi-terabyte UFS/SVM - Support for UFS file systems and SVM volumes larger than 1 TB.

System Management Consolidation and Virtualization Technologies

Service Management Facility - An expanded mechanism for starting, managing, and monitoring long-running Solaris applications. Services started by this facility benefit from enhanced fault-tolerance, manageability, and observability.

Solaris Zones - Zones are isolated, secure application environments hosted by the Solaris kernel. Zones provide improved server consolidation, allowing a single server to look like many.

Resource pools - A flexible configuration mechanism for partitioning of kernel resources among a set of workloads. Resource pools can be dynamically and automatically resized in accordance with system and application goals.

IP Quality of Service - Implements the IETF Differentiated Services model, enabling the definition of classes of service for flows, users, and projects, and a mapping to IEEE 802.1p user priorities.

Debugging, Tracing, and Observability Technologies

KMDB (Kernel Modular Debugger) - A more flexible replacement for the existing kernel debugger (kadb), based on the mdb(1M) debugger.

DTrace - A comprehensive and powerful new tracing facility, which allows dynamic instrumentation of kernel and user programs.

Networking Technologies

IPMP Enhancements - Probe-based failure/repair detection is now optional, removing the need for IPMP test addresses and simplifying configuration. The set of drivers that support link up/down notifications has also been greatly extended.

IP Filter - IPFilter provides a stateful packet filter that can be used to provide Network Address Translation (NAT) or firewall services.

Wide Area Network Boot (WAN Boot) - Boot and install Solaris over http and/or https protocols; boot servers connected to the local subnet are not required.

Security Technologies

Privilege-based security model - Fine-grained control over the privileges with which a process runs. Provides the ability to limit privilege escalation attacks and further secures the system against hackers.

Encryption framework interfaces - A wide range of encryption algorithms are made available to user programs; hardware acceleration is automatically enabled when available.

Solaris Multipathing

The "Solaris FC and Storage Multipathing Software" is included with the Solaris 10 license. It is enabled by default with the Solaris 10 x86 installation, but is optional in the Sparc installation.
Currently, the software supports multipathing for fibre channel connections using supported host bus adapters. It does not currently support multipathing for parallel SCSI devices or IP over FC.
For Sparc-based systems, multipathing support is enabled and disabled via the stmsboot -e and stmsboot -d commands. This command reboots the system to complete the process, so make sure that the right boot-device is included in the EEPROM settings before proceeding.
When multipathing is enabled, copies of the /etc/vfstab and /kernel/drv/fp.conf files are preserved to allow the changes to be backed out if necessary.
For x86-based systems, directly edit fp.conf to change the value of mpxio-disable to "no". (Disabling it will involve changing it to "yes".) After the change, run a reconfiguration reboot.
To enable or disable multipathing on a per-port basis, the mpxio-disable parameter may be set on a port-specific line in the fp.conf. (Syntax guidance is included in the comments of the fp.conf file.)
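A hedged summary of both approaches (the controller path in the per-port line is hypothetical; copy the exact parent and port values from the comments in your own fp.conf):

# stmsboot -e        # SPARC: enable MPxIO on all fp ports (prompts for the required reboot)
# stmsboot -d        # SPARC: disable MPxIO again

In /kernel/drv/fp.conf (x86 global setting, or per-port overrides):
mpxio-disable="no";
name="fp" parent="/pci@8,600000/SUNW,qlc@2" port=0 mpxio-disable="yes";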

solaris jumpstart bootserver with veritas netbackup

This is a mini cookbook showing how you can integrate the Veritas NetBackup client
into a Solaris Jumpstart server boot image, so you can do a "boot net" and restore your files to disk.

Step 1 - prepare server
Set up a standard Jumpstart server using the "setup_install_server" and
"add_to_install_server" scripts on the Solaris CD-ROM. See the Sun documentation for details.
I assume that you created the jumpstart server below
/opt/jumpstart/sol_8_202_sparc.
The boot image is now located in
/opt/jumpstart/sol_8_202_sparc/Solaris_8/Tools/Boot.

Step 2 - install files
On a server with Netbackup installed, go to /usr/openv and create a tarball with
"tar cf /tmp/nbu.tar ./*"
Back to your jumpstart server:
mkdir -p /opt/jumpstart/sol_8_202_sparc/Solaris_8/Tools/Boot/usr/openv
Copy nbu.tar into the newly created directory and "tar xvf" it.

Step 3 - create necessary links
When you boot Solaris over the net, all filesystems except /tmp are read-only.
To make Netbackup usable, some files/dirs must be linked to a writeable location. Go to
/opt/jumpstart/sol_8_202_sparc/Solaris_8/Tools/Boot/usr/openv/netbackup.
do:
cp bp.conf bp.conf.org
ln -s ../../../tmp/bp.conf bp.conf
mv logs logs.org
ln -s ../../../tmp logs

Step 4 - edit system files
Netbackup needs entries in inetd.conf and the services file. Add at the end of

/opt/jumpstart/sol_8_202_sparc/Solaris_8/Tools/Boot/etc/inetd.conf:

bpcd stream tcp nowait root /usr/openv/netbackup/bin/bpcd bpcd
bpjava-msvc stream tcp nowait root /usr/openv/netbackup/bin/bpjava-msvc bpjava-msvc -transient

And at the end of /opt/jumpstart/sol_8_202_sparc/Solaris_8/Tools/Boot/etc/services:

bprd 13720/tcp bprd
bpcd 13782/tcp bpcd

Step 5 - test it

Now you need to add a client that you can boot with this modified image. Get the
hardware details and run "add_install_client"; see the Sun documentation for details. Shut down the
client and do "boot net -s". Once the box is up and you have exited the installer,
you are ready to go. Edit bp.conf (which is now empty) by adding some servers, or just run
"cat bp.conf.org > bp.conf" (it is all in its usual location below /usr/openv). You may also want to edit resolv.conf
and nsswitch.conf so you can look up hostnames in DNS.
After this, mount a disk, run bp or bprestore, and hope it works...

Installing Veritas volume manager

At the time of this writing, Solaris 8 is the most commonly deployed version of Solaris, so we'll use that as the basis for this example. The steps are basically identical for the other releases.
1. After having completed the installation of the Solaris 8 operating system, the root filesystem should appear similar to the following:

# df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0t0d0s0    6607349  826881 5714395    13%    /
/proc                      0       0       0     0%    /proc
fd                         0       0       0     0%    /dev/fd
mnttab                     0       0       0     0%    /etc/mnttab
/dev/dsk/c0t0d0s4    1016863    8106  947746     1%    /var
swap                 1443064       8 1443056     1%    /var/run
swap                 1443080      24 1443056     1%    /tmp

2. Apply any operating system patches required by either Veritas volume manager and/or Veritas filesystem. At the time of this writing, this includes the SUNWsan package and the latest versions of the following patches to Solaris 8: 109529-06, 111413-06, 108827-19, 108528-14, 110722-01. However, please check the Veritas volume manager 3.5 installation guide for the authoritative list. Finally, you should also refer to the Veritas volume manager 3.5 hardware notes for any specific requirements for your storage hardware.

3. Insert the Veritas volume manager software cdrom into the cdrom drive. If volume management is enabled, it will automatically mount to /cdrom/volume_manager3.5 (depending on the precise release iteration, the exact path may differ in your case).

4. Change to the directory containing the Veritas Volume Manager packages:

# cd /cdrom/volume_manager3.5/pkgs

5. Add the required packages. Note that the order specified is significant in that the VRTSvlic package must be first, the VRTSvxvm package must be second, and then any remaining packages:

# pkgadd -d . VRTSvlic VRTSvxvm VRTSvmdoc VRTSvmman VRTSob VRTSobgui \
VRTSvmpro VRTSfspro VRTSvxfs VRTSfsdoc

Processing package instance <VRTSvlic> from </cdrom/volume_manager3.5/pkgs>

VERITAS License Utilities
(sparc) 3.00.007d
VERITAS Software Corp

VERITAS License Utilities 3.00.007d
Using </> as the package base directory.
## Processing package information.
## Processing system information.
3 package pathnames are already properly installed.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

. . . package installation proceeds, accepting input from the administrator . . .

*** IMPORTANT NOTICE ***
This machine must now be rebooted in order to ensure
sane operation. Execute
shutdown -y -i6 -g0
and wait for the "Console Login:" prompt.

# shutdown -y -i6 -g0

6. Once the system reboots, apply any Veritas volume manager patches. At the time of this writing, there are no patches for either volume manager or filesystem. However, future patches (and there will be patches ;-) can be obtained from http://seer.support.veritas.com. Note that the patch installation instructions may require that a reboot be performed after the patch is installed.

7. Proceed to the process of mirroring the operating system.
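If a patch does appear, applying it is a normal patchadd run; the patch ID below is purely illustrative:

# patchadd 112392-04        # run from the directory where the patch was unpacked
# shutdown -y -i6 -g0       # only if the patch README calls for a reboot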

Installing the DiskSuite

The DiskSuite packages have moved around a bit with each release of Solaris. For Solaris 2.6 and Solaris 7, the DiskSuite packages are located on the Easy Access CD. With Solaris 8, DiskSuite moved to the "Solaris 8 Software" cdrom number two, in the EA directory. Starting with Solaris 9, DiskSuite is included with the operating system.
At the time of this writing, Solaris 8 is the most commonly deployed version of Solaris, so we'll use that as the basis for this example. The steps are basically identical for the other releases.
1. After having completed the installation of the Solaris 8 operating system, insert the Solaris 8 software cdrom number two into the cdrom drive. If volume management is enabled, it will automatically mount to /cdrom/sol_8_401_sparc_2 (depending on the precise release iteration of Solaris 8, the exact path may differ in your case):

# df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0t0d0s0    6607349  826881 5714395    13%    /
/proc                      0       0       0     0%    /proc
fd                         0       0       0     0%    /dev/fd
mnttab                     0       0       0     0%    /etc/mnttab
/dev/dsk/c0t0d0s4    1016863    8106  947746     1%    /var
swap                 1443064       8 1443056     1%    /var/run
swap                 1443080      24 1443056     1%    /tmp
/vol/dev/dsk/c0t6d0/sol_8_401_sparc_2
                      239718  239718       0   100%    /cdrom/sol_8_401_sparc_2

2. Change to the directory containing the DiskSuite packages:

# cd /cdrom/sol_8_401_sparc_2/Solaris_8/EA/products/Disksuite_4.2.1/sparc/Packages

3. Add the required packages (we're taking everything except the Japanese-specific package):

# pkgadd -d .

logical volume management

There are two logical volume management packages commonly deployed in Solaris implementations. DiskSuite (now referred to as Solaris Volume Manager) is available for free with the Solaris operating system. It is well-suited to hosts with a small number (<10) of disks and fairly static configurations. Veritas volume manager (commonly sold as part of Veritas Foundation Suite with Veritas Filesystem) is a third-party product that is better suited to more complex installations with larger numbers of disks. However, as we'll see, there are many installations that combine both packages on a single host.
Several factors are relevant in deciding which volume management tool to deploy. Because it's free, DiskSuite is a very appealing option for many hosts. It is also easy to administer and to install. However, Veritas volume manager should be considered for installations with more complex requirements, including:
•Clustered installations with shared storage, in which Veritas diskgroups reduce the likelihood of filesystem corruption due to unintended simultaneous write access by multiple hosts.
•Installations with large numbers of disks. Though DiskSuite is certainly capable of handling a large number of disks, volume manager provides both graphical and text-based interfaces that scale well as the number of disks increases.
•Installations that need to grow and shrink filesystems dynamically. Veritas filesystem is currently the only filesystem available for Solaris that allows one to shrink filesystems on the fly. Note, however, that one can purchase Veritas filesystem without Veritas volume manager as well.

SunCluster

I have been working with both for some years now, and I am certified as a "high availability designer for unix"... In my own opinion, I like the integration of SunCluster more than VCS. Like blowtorch said, SunCluster is a kernel cluster, which gives you very fast error detection; VCS just monitors the environment with "scripts" and daemons. To say that VCS uses "no kernel-level integration" is not true at all, because the cluster communication uses two kernel modules called "gab" and "llt"; anyhow, these two modules come with VCS but can also be found in some other products.
VCS has a nice user interface (GUI) and is "easy" to manage, whereas SunCluster is a bit more "unix like". It's true to say that VCS was one of the best cluster solutions in the past, comparing it to SunCluster 2.2, but since SC 3.0 SunCluster has become better and better. Looking at SC 3.1-U4 vs. VCS 4.1, I would miss nothing in SC that I need in a cluster environment. The latest version of SunCluster is 3.2 (released in 01/07), which brings a lot of new features. No question, VCS 5.0 is a massive product, but when I am reading the release notes of VCS, I always think about a big table where 100 product managers think about new features no one really needs; reading the SunCluster release notes reminds me of 100 engineers planning how they could increase the availability of a cluster environment.
Another point to think about is cost: with VCS you must buy VxVM as well, and every nice feature is an extra license (the best example being I/O fencing), with licenses based on tier levels... SunCluster can be used for free inside the Java Enterprise System without support... and if you buy SunCluster, you get all the features it provides for one price... SunCluster uses standard features from Solaris like IPMP, MPxIO, UFS or SVM/LVM; with VCS you get things like VxVM, VxDMP, GAB/LLT or even VxFS.
At the end of the day, I like to get support for all products from one hand; there can be no finger pointing in an extreme emergency... But once again, many of my customers are very happy with their VCS and many are very happy with SC... It's purely a matter of taste...

Add an IP address on a Sun machine

First, check the current IP configuration with
#ifconfig -a
Set hostname in
#vi /etc/nodename
sunsolaris
After that put hostname entry in
#vi /etc/hostname.pcn0 (pcn0 is the LAN interface name; it depends on your hardware configuration)
sunsolaris
Then in
#vi /etc/hosts (entry like this)
ip address hostname
Save the file and restart the network service.
#svcadm disable /network/physical:default
#svcadm enable /network/physical:default

To add an IP address temporarily, use the following command:
#ifconfig hme0 192.168.1.100 netmask 255.255.255.0 up
To bring an interface up or down:
#ifconfig hme0 up
#ifconfig hme0 down

For multiple (virtual) IP addresses:
#vi /etc/hostname.hme0:1
Add the hostname and put an entry in /etc/hosts.

# vi /etc/defaultrouter :--- your default gateway.
#vi /etc/resolv.conf :- DNS entries
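For example, hedged sample contents for those two files (addresses and domain are placeholders):

# cat /etc/defaultrouter
192.168.1.1
# cat /etc/resolv.conf
domain example.com
nameserver 192.168.1.10
nameserver 192.168.1.11

Also make sure the hosts line in /etc/nsswitch.conf includes dns (for example "hosts: files dns") so DNS is actually consulted.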

route add, delete, display
To enable routing (IP forwarding):
#routeadm -e ipv4-forwarding (-d to disable)
netstat -rn # show current routes
netstat -rnv # show current routes and their masks
route add destIP gatewayIP
route add destIP -netmask 255.255.0.0 gatewayIP
route delete destIP gatewayIP
route delete destIP -netmask 255.255.0.0 gatewayIP
e.g.:- #route add -net 30.0.0.0 -netmask 255.0.0.0 192.168.1.100

You want to add a static route to network 192.168.16.0 to your default gateway of 10.236.74.1
#route add -net 192.168.16.0 10.236.74.1
Then create a script so that when the system is rebooted the route will be added automatically:
#cd /etc/rc2.d
#vi S168staticroute
Add the following line
route add -net 192.168.16.0 10.236.74.1
You want to add a static route to host 192.168.64.4 to your default gateway of 10.236.74.1
#route add 192.168.64.4 10.236.74.1
Then create a script so that when the system is rebooted the route will be added automatically:
#cd /etc/rc2.d
#vi S168staticroute
Add the following line
route add 192.168.64.4 10.236.74.1
You want to delete the static route to network 192.168.16.0 to your default gateway of 10.236.74.1
#route delete -net 192.168.16.0 10.236.74.1
You want to delete the static route to host 192.168.64.4 to your default gateway of 10.236.74.1
#route delete 192.168.64.4 10.236.74.1
The sys-unconfig command restores a system configuration to an unconfigured state, ready to be reconfigured again. When the command finishes, it performs a system shutdown.
#sys-unconfig
Monday, April 14, 2008
NFS (Network File System)
Solaris 10 uses NFSv4 by default.
NFSv4 features:
1) Stateful connection.
2) Single protocol, reducing the number of server-side daemons.
3) Improved firewall support. NFSv4 uses the well-known port 2049.
4) Strong security.
5) Extended attributes.

NFS server files:-
/etc/dfs/dfstab :- lists the local resources to share at boot time.
/etc/dfs/sharetab :- runtime record of shared resources (do not edit this file).
/etc/dfs/fstypes :- lists the default file system types for remote file systems.
/etc/rmtab :- lists file systems remotely mounted by NFS clients.
/etc/nfs/nfslog.conf :- lists information defining the location of the logs used for NFS server logging.
/etc/default/nfslogd :- lists configuration information describing the behaviour of the nfslogd daemon for NFSv2/3.
/etc/default/nfs :- contains parameter values for NFS protocols and NFS daemons.

For example: #vi /etc/dfs/dfstab
share -F nfs -o rw=192.168.0.203,ro -d "homedir" /share
save file
#mkdir /share
#shareall
#dfshares
Then start (or restart) the NFS server service and check the NFS daemons:
#svcadm -v enable nfs/server
#svcs -a | grep nfs

NFS server daemons
mountd:- checks the /etc/dfs/sharetab file to determine whether a particular file or directory is being shared and whether the requesting client has permission to access the shared resource.
statd, lockd
nfslogd:- provides operational logging for an NFS server.
nfsd:- handles client file system requests.
nfsmapid:- the NFS user and group ID mapping daemon, introduced with NFSv4. There is no user interface to this daemon, but parameters can be set in /etc/default/nfs.
The daemon is started by the svc:/network/nfs/mapid service.

NFS server command.
share,shareall,unshare,unshareall
dfshares:- lists available shared resources from a remote or local NFS server.
dfmounts:- displays a list of NFS server directories that are currently mounted.

From the client side, access the share:
#mount serverip:/share /test

Enabling NFS server logging.
The /etc/default/nfslogd file defines the default parameters used for NFS server logging.
Server side:
#vi /etc/dfs/dfstab
share -F nfs -o rw,log=global /test
save file
#mkdir /test
#vi /etc/nfs/nfslog.conf
Add logformat=extended to the end of the global entry.
Then stop and start the NFS service.
Client side:
#mkdir /test1
#mount serverip:/test /test1
#cd /test1

To see the log on the server side:
#cat /var/nfs/nfslog
How to add swap space in Sun Solaris
# swap -l (swap partition)
#swap -s (summary)
#df -h
Two ways are available to add swap space in Solaris:
1) swap slices
2) swap files
Swap slices
Add a disk partition as a swap slice to your existing swap space:
#swap -a /dev/dsk/c0t0d0s1
To delete:
#swap -d /dev/dsk/c0t0d0s1
Swap files
#df -h
Find the path where you want to create the swap file, then:
#mkdir -p /raman/swap
#mkfile 20m /raman/swap/swapfile
#swap -a /raman/swap/swapfile
verify with
#swap -s or swap -l command
To delete the swap file:
#swap -d /raman/swap/swapfile
#rm /raman/swap/swapfile

In both cases, to make the swap space permanent, add an entry in /etc/vfstab:
#vi /etc/vfstab
/raman/swap/swapfile - - swap - no -
Save the file and reboot the system to verify the change.

Unix Command Summary

See the Unix tutorial for a leisurely, self-paced introduction on how to use the commands listed below. For more documentation on a command, consult a good book, or use the man pages. For example, for more information on grep, use the command man grep.
Contents
cat --- for creating and displaying short files
chmod --- change permissions
cd --- change directory
cp --- for copying files
date --- display date
echo --- echo argument
ftp --- connect to a remote machine to download or upload files
grep --- search file
head --- display first part of file
ls --- see what files you have
lpr --- standard print command (see also print)
more --- use to read files
mkdir --- create directory
mv --- for moving and renaming files
ncftp --- especially good for downloading files via anonymous ftp.
print --- custom print command (see also lpr)
pwd --- find out what directory you are in
rm --- remove a file
rmdir --- remove directory
rsh --- remote shell
setenv --- set an environment variable
sort --- sort file
tail --- display last part of file
tar --- create an archive, add or extract files
telnet --- log in to another machine
wc --- count characters, words, lines

Implementing shared memory resource controls on Solaris hosts

With the availability of the Solaris 10 operating system, the way IPC facilities (e.g., shared memory, message queues, etc.) are managed changed. In previous releases of the Solaris operating system, editing /etc/system was the recommended way to increase the values of a given IPC tunable. With the release of Solaris 10, IPC tunables are now managed through the Solaris resource manager. The resource manager makes each tunable available through one or more resource controls, which provide an upper bound on the size of a given resource.
Merging the management of the IPC facilities into the resource manager has numerous benefits. The biggest benefit is the ability to increase and decrease the size of a resource control on the fly (prior to Solaris 10, you had to reboot the system if you made changes to the IPC tunables in /etc/system). The second major benefit is that the default values were increased to sane defaults, so you no longer need to fiddle with adjusting the number of message queues and semaphores when you configure a new Oracle database. That said, there are times when the defaults need to be increased. If you're running Oracle, this is especially true, since the default size of the shared memory resource control (project.max-shm-memory) is sized a bit low (the default value is 254M).
There are two ways to increase the default value of a resource control. If you want to increase the default for the entire system, you can add a resource control with the desired value to the system project (projects are used to group resource controls). If you want to increase the value of a resource control for a specific user or group, you can create a new project and then assign one or more resource controls to that project.
So let's say that you just installed Oracle, and got the lovely "out of memory" error when attempting to create a new database instance. To fix this issue, you need to increase the amount of shared memory that is available to the oracle user. Since you don't necessarily want all users to be able to allocate gobs of shared memory, you can create a new project, assign the oracle user to that project, and then add the desired amount of shared memory to the shared memory resource control in that project.
To create a new project, the projadd utility can be executed with the name of the project to create:
$ projadd user.oracle
Once a project is created, the projmod utility can be used to add a resource control to the project (you can also edit the projects file directly if you want). The following example shows how to add a shared memory resource control with an upper bounds of 1GB to the user.oracle project we created above:
$ projmod -sK "project.max-shm-memory=(privileged,1073741824,deny)" user.oracle
To verify that a resource control was added to the system, the grep utility can be run with the project name and the name of the system project file (project information is stored in /etc/project):
$ grep user.oracle /etc/project
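To confirm the new limit is actually in effect for the running processes in that project, prctl can query the live value (the project name matches the one created above):

$ prctl -n project.max-shm-memory -i project user.oracle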

RMAN CONFIGURATION

===================================
R-M-A-N =
===================================
Connect:
$rman target /
$rman nocatalog target /
$rman nocatalog target sys/aziz@JKTDB2
$rman target sys/aziz@JKTDB2
RMAN> shutdown;
RMAN> shutdown immediate;
RMAN> startup mount; >>startup an instance and mounting database
RMAN> startup nomount; >>startup an instance without mounting database
RMAN> startup restrict; >>restricting access to an instance at startup
RMAN> ALTER SYSTEM DISABLE RESTRICTED SESSION; >>disable the restricted session
RMAN> STARTUP FORCE; >>forcing an instance to start
RMAN> STARTUP OPEN RECOVER >>Starting an Instance, Mounting a Database,
and Starting Complete Media Recovery
RMAN> connect target sys/aziz@jktdb2;
RMAN> startup mount;
RMAN> register database; >>register target database to recovery catalog
RMAN> reset database; >>create new database incarnation record in the
recovery catalog
RMAN> backup database;
RMAN> backup current controlfile;
RMAN> backup database plus archivelog;
RMAN> backup archivelog all;
RMAN> list backup; >>list all file that was backed up
RMAN> list backup summary;
RMAN> restore database;
RMAN> restore archivelog all;
RMAN> recover database;
RMAN> restore datafile 'C:\ORACLE\ORADATA\JKTDB2\USERS01.DBF';
RMAN> recover datafile 'C:\ORACLE\ORADATA\JKTDB2\USERS01.DBF';
RMAN> Recover database until cancel using backup controlfile;
RMAN> Alter database open resetlogs;
RMAN> allocate channel for maintenance type disk;
RMAN> crosscheck backup of database;
RMAN> delete expired backup of database;
RMAN> crosscheck backup of archivelog all;
RMAN> delete expired backup of archivelog all;
RMAN> change archivelog all crosscheck;
RMAN> resync catalog;
RMAN> crosscheck backup of archivelog all;
RMAN> sql 'alter tablespace USERS offline immediate';
RMAN> recover tablespace USERS;
RMAN> sql 'alter tablespace USERS online';
—————————————————————
configure =
—————————————————————
RMAN> configure default device type to 'sbt_tape'; >>to tape
RMAN> configure default device type to DISK; >>to disk
RMAN> configure controlfile autobackup format for device type 'sbt_tape' to '%F';
RMAN>configure controlfile autobackup on;
RMAN>CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
RMAN>CONFIGURE RETENTION POLICY TO REDUNDANCY 3; >>Retain three backups of each datafile:
RMAN> resync catalog;
—————————————————————
Restore & Recover The Whole Database=
—————————————————————
run {
shutdown immediate; # use abort if this fails
startup mount;
restore database;
recover database;
alter database open;
}
—————————————————————-

Solaris Volume Manager

Solaris Volume Manager (SVM; formerly known as Online: DiskSuite, and later Solstice DiskSuite) is a software package for creating, modifying and controlling RAID-0 (concatenation and stripe) volumes, RAID-1 (mirror) volumes, RAID 0+1 volumes, RAID 1+0 volumes, RAID-5 volumes, and soft partitions.
Version 1.0 of Online: DiskSuite was released as an add-on product for SunOS [...]
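As a brief, hedged sketch of typical SVM usage (the slice names and metadevice numbers are examples; s7 is a small dedicated slice for the state databases and s5 is an unused data slice on each disk): create the state database replicas, build two one-way submirrors, and join them into a mirror.

# metadb -a -f -c 3 c0t0d0s7     # create three state database replicas
# metainit d11 1 1 c0t0d0s5      # first submirror (one stripe of one slice)
# metainit d12 1 1 c0t1d0s5      # second submirror
# metainit d10 -m d11            # create the mirror from the first submirror
# metattach d10 d12              # attach the second submirror; the resync starts
# newfs /dev/md/rdsk/d10         # put a filesystem on the finished mirror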
As we may already know, tape devices live under the /dev/rmt directory. The tapes command creates symbolic links in the /dev/rmt directory to the actual tape device special files under the /devices directory tree; tapes searches the kernel device tree to see what tape devices are attached to the system.
Each tape LUN seen by the system is represented by 24 minor nodes in the form of /dev/rmt/N, /dev/rmt/Nb, and /dev/rmt/Nbn, where N is an integer counter starting from 0. This number is picked by devfsadm during enumeration of new devices. Every new tape logical unit number (LUN) found by devfsadm gets the next available number in /dev/rmt.
Example:
#tar cvf /dev/rmt/0cbn / {backup root (/) + no rewind + compress}
#tar cvf /dev/rmt/0 / {backup root (/) + no compress + rewind tape when finished}
#mt -f /dev/rmt/0 offline {rewind and eject the tape}
#tar cvzf /dev/rmt/0cbn /
#tar tvf /dev/rmt/0cbn {to list the file in the archive}
#tar tf /dev/rmt/0
#tar xvf /dev/rmt/0 {retrieve/restore all file from tape}
tar -czf /dev/rmt/0cbn /home
ufsdump 0uf
ex:
ufsdump 0uf /dev/rmt/0 /home
ufsrestore -i
ufsrestore xf /dev/rmt/0 filename {restore filename from tape}
ufsrestore xf sparc1:/dev/rmt/0 filename {restore filename from the remote tape drive on sparc1}
ufsrestore rf /dev/rmt/0 {restore the entire content on tape drive}
ufsrestore ivf /dev/rmt/0

VERITAS Volume Manager

VERITAS Volume Manager software is an advanced, system-level disk and storage array solution that alleviates downtime during system maintenance by enabling easy, online disk administration and configuration. The product also helps ensure data integrity and high availability by offering fast failure recovery and fault tolerant features. VERITAS Volume Manager software provides easy-to-use, online storage management for enterprise computing and emerging Storage Area Network (SAN) environments. Through the support of RAID redundancy techniques, VERITAS Volume Manager software helps protect against disk and hardware failures, while providing the flexibility to extend the capabilities of existing hardware. By providing a logical volume management layer, VERITAS Volume Manager overcomes the physical restriction imposed by hardware disk devices.
So, there you have it. But let me briefly list some of the things they totally missed; a short usage example follows the list.
•Support for: Simple, RAID0, RAID1, RAID0+1, RAID1+0, RAID5
•Dynamic Multipathing (DMP): Load balancing and redundant I/O paths to disk arrays supporting multi-controller attachment.
•Online Relayout: VERITAS allows you to change the layout of a VERITAS volume while it is live and mounted. Change a RAID0 to a RAID5 without a second of downtime!
•Snapshotting: Take a snapshot of your data, creating a "shadow" of it which you can use for online backups.
•Hot Relocation: Designate "spares" which will take the place of failed disks on-the-fly.
•Dirty Region Logging (DRL): Volume transaction logs which provide fast recoveries after system crashes.
•More...... more than I can list!
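As a hedged taste of basic usage (the disk names, disk group datadg, and volume datavol are illustrative; vxdisksetup may live under /etc/vx/bin on your system):

# vxdisksetup -i c1t1d0                                  # initialize a disk for VxVM
# vxdisksetup -i c1t2d0
# vxdg init datadg datadg01=c1t1d0 datadg02=c1t2d0       # create a disk group with both disks
# vxassist -g datadg make datavol 10g layout=mirror      # 10GB mirrored volume
# mkfs -F vxfs /dev/vx/rdsk/datadg/datavol               # put a VxFS filesystem on it
# mount -F vxfs /dev/vx/dsk/datadg/datavol /data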

install oracle 10g R2 in sun solaris 10

Unzip the files:

unzip 10202_database_solx86.zip

You should now have a single directory called "database" containing installation files.
Hosts File
The /etc/hosts file must contain a fully qualified name for the server:
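For example (the IP address and hostnames are placeholders; use your own values):

127.0.0.1       localhost
192.168.1.50    solaris10.example.com   solaris10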



Set Kernel Parameters
In previous versions of Solaris, kernel parameters were amended by adding entries to the "/etc/system" file, followed by a system reboot.

set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=256
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmni=100

The Oracle installer recognizes kernel parameters set using this method, but it is now deprecated in favour of resource control projects, explained below.

As the root user, issue the following command.

projadd oracle

Append the following line to the "/etc/user_attr" file.

oracle::::project=oracle

If you've performed a default installation, it is likely that the only kernel parameter you need to alter is "max-shm-memory". To check the current value issue the following command.

# prctl -n project.max-shm-memory -i project oracle
project: 100: oracle
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
privileged 254MB - deny -
system 16.0EB max deny -
#

To reset this value, make sure at least one session is logged in as the oracle user, then from the root user issue the following commands.

# prctl -n project.max-shm-memory -v 4gb -r -i project oracle
# projmod -s -K "project.max-shm-memory=(priv,4gb,deny)" oracle

The first dynamically resets the value, while the second makes changes to the "/etc/project" file so the value is persistent between reboots.

# cat /etc/project
system:0::::
user.root:1::::
noproject:2::::
default:3::::
group.staff:10::::
oracle:100::::project.max-shm-memory=(priv,4294967296,deny)
#

The Oracle installer seems incapable of recognising kernel parameters set using resource control projects, but if you ignore the warnings the installation completes successfully.
Setup
Add the "SUNWi1cs" and "SUNWi15cs" packages using the "pkgadd" command.

# pkgadd -d /cdrom/sol_10_106_x86/Solaris_10/Product SUNWi1cs SUNWi15cs

Processing package instance <SUNWi1cs> from </cdrom/sol_10_106_x86/Solaris_10/Product>

X11 ISO8859-1 Codeset Support(i386) 2.0,REV=2004.10.17.15.04
Copyright 2004 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.

This appears to be an attempt to install the same architecture and
version of a package which is already installed. This installation
will attempt to overwrite this package.

Using </> as the package base directory.
## Processing package information.
## Processing system information.
16 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <SUNWi1cs> [y,n,?] y

Installing X11 ISO8859-1 Codeset Support as <SUNWi1cs>

## Installing part 1 of 1.

Installation of <SUNWi1cs> was successful.

Processing package instance <SUNWi15cs> from </cdrom/sol_10_106_x86/Solaris_10/Product>

X11 ISO8859-15 Codeset Support(i386) 2.0,REV=2004.10.17.15.04
Copyright 2004 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.

This appears to be an attempt to install the same architecture and
version of a package which is already installed. This installation
will attempt to overwrite this package.

Using </> as the package base directory.
## Processing package information.
## Processing system information.
21 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <SUNWi15cs> [y,n,?] y

Installing X11 ISO8859-15 Codeset Support as <SUNWi15cs>

## Installing part 1 of 1.

Installation of <SUNWi15cs> was successful.
#

Create the new groups and users:

groupadd oinstall
groupadd dba
groupadd oper

useradd -g oinstall -G dba -d /export/home/oracle oracle
mkdir /export/home/oracle
chown oracle:oinstall /export/home/oracle
passwd -r files oracle

Create the directories in which the Oracle software will be installed:

mkdir -p /u01/app/oracle/product/10.2.0/db_1
chown -R oracle:oinstall /u01

If you have not partitioned your disks to allow a "/u01" mount point, you may want to install the software in the "/export/home/oracle" directory as follows:

mkdir -p /export/home/oracle/product/10.2.0/db_1
chown -R oracle:oinstall /export/home/oracle

Login as the oracle user and add the following lines at the end of the .profile file, making sure you have set the correct ORACLE_BASE value:

# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

# Select the appropriate ORACLE_BASE
#ORACLE_BASE=/export/home/oracle; export ORACLE_BASE
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
ORACLE_SID=TSH1; export ORACLE_SID
PATH=$ORACLE_HOME/bin:$PATH; export PATH

Installation
Log in as the oracle user. If you are using X emulation, set the DISPLAY environment variable:

DISPLAY=:0.0; export DISPLAY

Start the Oracle Universal Installer (OUI) by issuing the following command in the database directory:

./runInstaller

UX: useradd: ERROR: Inconsistent password files. See pwconv(1M).

On one box, when I run:

useradd -d /export/home/someuser -m -s /bin/ksh -c "Regular User Account" someuser

...it creates a userid with the home dir where I expected it.

On the other box, if I run exactly the same command, I get an error telling me:

UX: useradd: ERROR: Inconsistent password files. See pwconv(1M).

Both commands are run as root and I can't see anything in /etc/passwd or /etc/shadow which would be a nice and simple explanation!

# useradd OlliLang
UX: useradd: ERROR: Inconsistent password files. See pwconv(1M).
# wc -l /etc/passwd /etc/shadow
24 /etc/passwd
24 /etc/shadow
48 total

Even though the line counts match, the two files can still be inconsistent (for example, an entry in one with no matching entry in the other). Running pwconv rebuilds /etc/shadow from /etc/passwd and resolves the inconsistency:

# wc -l /etc/passwd /etc/shadow
# pwconv