Monday, December 14, 2009

Sun Cluster 3.2 has the following features and limitations:

Support for 2-16 nodes.
Global device capability--devices can be shared across the cluster.
Global file system--allows a file system to be accessed simultaneously by all cluster nodes.
Tight integration with Solaris--the cluster framework services are implemented in the kernel.
Application agent support.
Tight integration with zones.
Each node must run the same revision and update of the Solaris OS.
Two-node clusters must have at least one quorum device.
Each cluster needs at least two separate private networks. (Supported hardware, such as ce and bge, may use tagged VLANs to run private and public networks on the same physical connection.)
Each node's boot disk should include a 500 MB partition mounted at /globaldevices (see the verification sketch after this list).
Attached storage must be multiply connected to the nodes.
ZFS is a supported file system and volume manager. Veritas Volume Manager (VxVM) and Solaris Volume Manager (SVM) are also supported volume managers.
Veritas multipathing (vxdmp) is not supported. Since vxdmp must remain enabled in current VxVM versions, VxVM must be used in conjunction with MPxIO or another multipathing solution such as EMC PowerPath.
SMF services can be integrated into the cluster, and all framework daemons are defined as SMF services.
PCI and SBus based systems cannot be mixed in the same cluster.
Boot devices cannot be on a disk that is shared with other cluster nodes. Doing this may lead to a locked-up cluster due to data fencing.
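
A few of these requirements can be spot-checked from the shell. This is a minimal sketch, assuming the Sun Cluster 3.2 object-oriented commands are in the PATH (/usr/cluster/bin); run it as root on each node:

cat /etc/release        # confirm every node runs the same Solaris release and update
df -h /globaldevices    # confirm the /globaldevices partition exists and is mounted
clquorum show           # list quorum devices (a two-node cluster needs at least one)
clinterconnect show     # list the private interconnects (at least two are required)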

The overall health of the cluster may be monitored using the cluster status or scstat -v commands. Other useful options include:

scstat -g: Resource group status
scstat -D: Device group status
scstat -W: Heartbeat status
scstat -i: IPMP status
scstat -n: Node status
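
For a quick health sweep, the options above can simply be run back to back; a minimal sketch (the 3.2 object-oriented cluster status command reports much of the same information):

scstat -n       # node status
scstat -W       # heartbeat (private interconnect) status
scstat -D       # device group status
scstat -g       # resource group status
scstat -i       # IPMP status
cluster status  # consolidated status report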

Failover applications (also known as "cluster-unaware" applications in the Sun Cluster documentation) are controlled by rgmd (the resource group manager daemon). Each application has a data service agent, which is how the cluster controls application startup, shutdown, and monitoring. Each application is typically paired with a logical IP address, which follows the application to the new node when a failover occurs.
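
As a concrete illustration, a failover resource group holding a logical hostname and an application resource might be built roughly as follows. This is a hedged sketch using the 3.2 object-oriented commands; the names app-rg, app-lh, app-host, app-rs, and the SUNW.apache agent are placeholders chosen for illustration, not values from this post:

clresourcegroup create app-rg                           # empty failover resource group
clreslogicalhostname create -g app-rg -h app-host app-lh   # logical IP that follows the application (app-host must be resolvable)
clresourcetype register SUNW.apache                     # register the data service agent's resource type
clresource create -g app-rg -t SUNW.apache app-rs       # application resource; agent-specific -p properties (e.g. Bin_dir) omitted
clresourcegroup online -M app-rg                        # bring the group online and under RGM management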

"Scalable" applications are able to run on several nodes concurrently. The clustering software provides load balancing and makes a single service IP address available for outside entities to query the application.

"Cluster aware" applications take this one step further, and have cluster awareness programmed into the application. Oracle RAC is a good example of such an application.

All the nodes in the cluster may be shut down with cluster shutdown -y -g0. To boot a node outside of the cluster (for troubleshooting or recovery operations), run boot -x.
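
For example, taking a single node out of the cluster for maintenance might look like this (a hedged sketch; the node name pnode1 is a placeholder):

clnode evacuate pnode1    # move resource groups and device groups off the node being serviced
shutdown -y -g0 -i0       # bring the node down to the firmware (ok) prompt
boot -x                   # at the ok prompt: boot the node in non-cluster mode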

clsetup is a menu-based utility that can be used to perform a broad variety of configuration tasks, including configuration of resources and resource groups.

Cluster Configuration
The cluster's configuration information is stored in global files known as the "cluster configuration repository" (CCR). The cluster framework files in /etc/cluster/ccr should not be edited manually; they should be managed via the administrative commands.

The cluster show command displays the cluster configuration in a nicely formatted report.
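
Beyond the full report, much of the configuration listed below can be displayed per object type; a brief, hedged sampling of the 3.2 show subcommands:

cluster show           # complete configuration report
clnode show            # node definitions
clinterconnect show    # cluster transport configuration
cldevicegroup show     # device groups and the nodes that can master them
clnasdevice show       # NAS device entries, if any are configured
cldevice show          # DID (disk ID) device configuration
clquorum show          # quorum configuration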

The CCR contains:

Names of the cluster and the nodes.
The configuration of the cluster transport.
Device group configuration.
Nodes that can master each device group.
NAS device information (if relevant).
Data service parameter values and callback method paths.
Disk ID (DID) configuration.
Cluster status.
