Wednesday, December 16, 2009

Sun SPARC Enterprise T5120


Product : Sun SPARC Enterprise T5120, T5220   
Date of Resolved Release : 12-Feb-2008  

Some Sun SPARC Enterprise T5120 and T5220 Servers Shipped With an Incorrect Solaris 10 Image Containing an Insecure Configuration
1. Impact
Sun SPARC Enterprise T5120 and T5220 servers with datecode prior to BEL07480000 have been mistakenly shipped with insecure factory settings in the pre-installed Solaris 10 OS image. These settings may allow a local or remote user to execute arbitrary commands with the privileges of the root (uid 0) user.
(To determine if your systems are affected by this issue please look for the changed parameters and extra files listed in the Contributing Factors section below).
2. Contributing Factors
This issue can occur on the following platforms:
Sun SPARC Enterprise T5120 and T5220 Servers with datecode prior to BEL07480000
Note: Systems are only impacted by this issue if they have an incorrect factory image installed.
To determine the datecode on the T5120 or T5220, use either "Lights Out Management" (LOM) or prtdiag(1M) commands:

    ILOM CLI:  > show /SYS/
    ALOM CLI:  sc> showplatform
    prtdiag -v




The Sun Storage 7410

The Sun Storage 7410 Unified Storage System is ideal for enterprises requiring mission-critical storage and provides dramatically easier and faster ways to manage and scale your storage up to 288 TB[1].
At a Glance
Easy-to-use DTrace Analytics increase production uptime
Scales up to 288 TB[1]
High-availability Cluster option protects against downtime
Flash Hybrid Storage Pool improves response times
Scales throughput, performance, and capacity to meet application needs
--------------------------------------------------------------------------------
Key Applications
Bulk unified storage
HPC
Web 2.0
Server virtualization
Database/BIDW
Backup

--------------------------------------------------------------------------------
Change Your Storage
Discover how the Sun Storage 7000 can radically simplify your storage, improving performance and cutting costs.
 

Troubleshooting grub error

1. Introduction to the Boot Process
When an x86-based system powers on, the BIOS initializes the CPU, memory, and platform hardware. Upon completion, the BIOS loads the initial bootstrap software (that is, the bootloader) from the configured boot device and hands control over to the bootloader. The Solaris 10 3/05 OS and earlier releases use a Sun-developed bootloader that includes an interactive shell and a menu-driven device configuration assistant based on realmode drivers.
Starting with the Solaris 10 1/06 release, the open source GRUB or GNU GRand Unified Bootloader is used as the bootloader. The initial delivery is based on GRUB version 0.95 and will be updated as newer versions become available. The Solaris kernel is fully compliant with the Multiboot Specification (reference 2); hence, the Solaris OS can be booted via any bootloader implementing the Multiboot Specification.
The switch to GRUB brings several benefits to Solaris customers.
With GRUB it is very easy to specify kernel and boot options in the boot menu.
For end users, booting and installing from USB DVD drives is now supported.
It is easier for the Solaris OS to coexist with other operating systems on the same machine. In particular, it is possible for the Solaris OS to share the same GRUB bootloader with Linux.
Deploying the Solaris OS via the network is also simplified, particularly in the area of DHCP server setup. Vendor-specific options are no longer required in the DHCP server setup.
Developers no longer need to deal with realmode drivers, which were part of the bootloader required for previous Solaris releases.
For IHVs, it is now possible to deliver drivers at install time via CD/DVD in addition to floppies.
Finally, by adopting a bootloader developed by the open source community, Sun's customers can leverage the considerable GRUB experience gained within that community.

2. Booting the Solaris OS With GRUB
Once GRUB gains control, it displays a menu on the console asking the user to choose an OS instance to boot. The user may pick a menu item, modify a menu item using the built-in editor, or manually load an OS kernel in command mode. To boot the Solaris OS, GRUB must load a boot_archive file and a "multiboot" program. The boot archive is a ramdisk image containing Solaris kernel modules and data. GRUB simply puts it in memory without any interpretation. The multiboot program is an ELF executable with a Multiboot Specification-compliant header. Once loading is complete, GRUB hands control over to the multiboot program. GRUB itself then becomes inactive, and its memory is reclaimed.
The multiboot program is responsible for assembling core kernel modules in memory by reading the boot_archive, and passing boot-related information (as specified in the Multiboot Specification) to the kernel. Note that the multiboot program goes hand-in-hand with the boot_archive file. You cannot mix and match multiboot and boot_archive information from different releases or OS instances.
Once the kernel gains control, it will initialize CPU, memory, and I/O devices, and it will mount the root file system on the device as specified by the bootpath property with a file system type as specified by the property fstype. Properties can be set in /boot/solaris/bootenv.rc via the eeprom(1M) command or in the GRUB command line via the GRUB menu or shell. If the properties are not specified, the root file system defaults to UFS on /devices/ramdisk:a, which is the case when booting the install miniroot.
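For example, here is a quick sketch of inspecting those two properties with eeprom(1M) (the device path shown is only a placeholder for whatever your system actually reports):

# eeprom bootpath
bootpath=/pci@0,0/pci-ide@1f,1/ide@0/cmdk@0,0:a
# eeprom fstype
fstype=ufs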

3. Installation
The Solaris OS may be installed from CD, DVD, and net install servers. The Solaris 10 1/06 release differs from the Solaris 10 3/05 release in several ways:
Minimum memory requirement: The system must have 256MB of main memory to boot the install miniroot. Systems with insufficient memory will get a message from GRUB: "Selected item can not fit in memory".
USB drive support: Installation from CD/DVD drives connected via USB interfaces is fully supported.
Net install: The standard procedure for setting up net install images remains the same. Clients are assumed to boot via the Preboot eXecution Environment (PXE) mechanism. Clients not capable of PXE boot can use a GRUB floppy (see Appendix B).
When booting the install miniroot, a GRUB menu is displayed. A user may interactively edit boot options (see section 4.2). After GRUB loads the Solaris OS, the following install menu is displayed:
1. Solaris Interactive (default)
2. Custom JumpStart
3. Solaris Interactive Text (Desktop session)
4. Solaris Interactive Text (Console session)
5. Apply driver updates
6. Single user shell
The Device Configuration Assistant and associated interactive shell, which users are familiar with from the Solaris 10 3/05 OS and earlier, are no longer present. Users wishing to add drivers required during install (for example, host adapter drivers) should choose option 5 and supply an ITU (Install Time Update) floppy or CD/DVD.
Option 6 is available for system recovery. It provides quick access to a root prompt without going through system identification. This option is identical to booting a Solaris Failsafe session (see section 4.4).

4. Managing the Boot Subsystem
4.1 BIOS
It is generally a good idea to update the BIOS firmware to the latest revision before installing the Solaris OS. This is typically accomplished by visiting the support page for the vendor that manufactured the computer.
Compared to the Solaris 10 3/05 release, the Solaris 10 1/06 OS uses a different subset of BIOS features. In particular, the kernel makes use of more information from the Advanced Configuration and Power Management Interface (ACPI) table, using the parser from Intel's ACPI CA software.
On systems that do not conform to BIOS 2.0 specifications, the syslog may contain messages related to parsing the ACPI table, such as:
ACPI-0725: *** Warning:
Type override - [4s] had invalid type (DEB_[\200IODB
Such messages are harmless and do not impact normal system operation. If ACPI errors prevent normal system boot, the user can disable the ACPI parser by setting acpi-user-options to 2 (see eeprom(1M)) in the kernel line of the GRUB menu:
kernel .. -B ...,acpi-user-options=2
In this case, the system assumes a set of standard ISA devices is present, including a keyboard, a mouse, two serial ports, and a parallel port.
4.2 Boot Options
To boot the Solaris OS, a user may specify the kernel to load, options to be passed to the kernel (see kernel(1M)), and a list of property names and values to customize system behaviors (see eeprom(1M)). At Solaris installation time, a set of default values are chosen for the system and stored in /boot/solaris/bootenv.rc. Users may change the settings by editing the GRUB menu or modifying the bootenv.rc file indirectly via the eeprom(1M) command.
Specifying kernel name and kernel options via eeprom requires setting the boot-file property. To boot the 32-bit kernel in verbose mode, run the following command:
# eeprom boot-file="kernel/unix -v"
To specify the same thing on the GRUB menu, modify the kernel command of the GRUB menu from:
kernel /platform/i86pc/multiboot
to:
kernel /platform/i86pc/multiboot kernel/unix -v
See kernel(1M) for additional boot arguments that the Solaris kernel accepts.
Properties other than boot-file can be specified on the GRUB kernel command line with this syntax:
kernel /platform/i86pc/multiboot -B prop1=val1[,prop2=val2...]
To configure the serial console on ttya (com1), set the console property to ttya:
kernel /platform/i86pc/multiboot -B console=ttya
If the property value contains commas, the value should be quoted. The following GRUB command sets the Solaris console to ttya at high speed.
kernel /platform/i86pc/multiboot -B console=ttya,ttya-mode="115200,8,n,1,-"
In short, specifying "-B foo=bar" in the GRUB menu is equivalent to running "eeprom foo=bar". The -B option in GRUB is primarily for temporary overrides. Permanent settings should be specified via eeprom(1M) so that they are preserved by the Solaris upgrade process.
4.3 Boot Archive
The boot archive refers to the file platform/i86pc/boot_archive. It is a collection of core kernel modules and configuration files packed in either UFS or ISOFS format. At boot time, GRUB loads the boot archive into system memory. The kernel can now initialize itself from data and text in the boot archive without performing I/O to the root device. Once the kernel gains sufficient I/O capability, it will mount the root file system on the real root device as specified by the bootpath property. At this point, the boot archive loaded by GRUB is discarded from memory.
The content of the boot archive is specified in /boot/solaris/filelist.ramdisk. Upon system shutdown, the system checks for updates to the root file system and updates the boot archive when necessary. You can also update the boot archive manually, prior to system shutdown, by running the bootadm(1M) command.
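For example, rebuilding the archive by hand:

# bootadm update-archive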
4.4 The Failsafe Menu Entry
New to the Solaris 10 1/06 OS is a file, /boot/x86.miniroot-safe, containing a bootable, standalone Solaris image. This file can be loaded by choosing the Solaris failsafe entry from the GRUB menu. This is for the convenience of system administrators when the normal entry fails to boot.
Suppose you add a new package containing a faulty driver, and the system panics at boot time. Upon reboot, you can pick the Solaris failsafe menu entry. While in the failsafe session, mount the root file system on /a and run pkgrm -R to remove the faulty package. Once this is complete, you can reboot to the normal Solaris entry to resume system operation.
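A sketch of that recovery session (the disk slice and package name are hypothetical; substitute your actual root device and the package you want to remove):

# mount /dev/dsk/c0d0s0 /a
# pkgrm -R /a SUNWbadpkg
# bootadm update-archive -R /a
# reboot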
The file /boot/x86.miniroot-safe can also be copied to portable media, such as a USB stick, as a recovery tool.
4.5 Keeping the System Bootable
To ensure that the system remains bootable, the GRUB boot blocks, the GRUB menu, and the boot archive must be up-to-date.
The GRUB boot blocks reside in the Solaris partition. If the boot blocks become corrupt, they should be reinstalled using the installgrub(1M) command. Note that installboot(1M) and fmthard(1M) cannot be used to write GRUB boot blocks.
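A sketch of reinstalling them (the raw device name is hypothetical; use your actual root slice):

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0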
The GRUB menu resides in /boot/grub/menu.lst (or /stubboot/boot/grub/menu.lst if a Solaris boot partition is used). The menu is maintained with the bootadm(1M) update-menu subcommand. Because GRUB names disks by the BIOS disk number, a change in BIOS boot device configuration may render GRUB menu entries invalid in some cases. Running bootadm update-menu will create the correct menu entry in such cases.
The boot archive must be updated as the root file system is modified. In cases of system failure (power failure or kernel panic) immediately following a kernel file update, the boot archive may be out of sync with the root file system. In such cases, the system/boot-archive service, managed through the Service Management Facility (see svcadm(1M), for example), will fail on the next reboot. Solaris will print a message saying that it is still possible to clear the failed service and continue booting, but it is safer to reboot the system, select the failsafe session, and update the boot archive from there.
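If you do decide to clear the failure in place instead, the sequence looks roughly like this:

# bootadm update-archive
# svcadm clear system/boot-archive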

Custom df (diskfree) column output in Solaris using nawk

Let's say you want to combine some features of "df -h" with "df -n" to show the filesystem type along with some other custom modifications to the output. This is where awk/nawk/gawk/whatever come in handy:

% df -g | nawk '{if (NR % 5 == 1) printf "%-22s", $1 ; if (NR % 5 == 4) printf "%-10s", "fstype " $1 "\n"; if (NR % 5 == 2) printf "%-30s",$1/2/1024/1024 " GB"; if (NR % 5 == 2) printf "%-30s", $4/2/1024/1024 " GB free "}'


/ 33.6627 GB 18.4351 GB free fstype ufs
/devices 0 GB 0 GB free fstype devfs
/system/contract 0 GB 0 GB free fstype ctfs
/proc 0 GB 0 GB free fstype proc
/etc/mnttab 0 GB 0 GB free fstype mntfs
/etc/svc/volatile 7.88214 GB 7.8813 GB free fstype tmpfs
/system/object 0 GB 0 GB free fstype objfs
/lib/libc.so.1 33.6627 GB 18.4351 GB free fstype ufs
/dev/fd 0 GB 0 GB free fstype fd
/tmp 7.88142 GB 7.8813 GB free fstype tmpfs
/var/run 7.88134 GB 7.8813 GB free fstype tmpfs
/export/home 74.4858 GB 1.87458 GB free fstype ufs
/storage 108.639 GB 66.9259 GB free fstype nfs

You can also add a comma (,) to the separators and redirect the output to a .csv file (you can then open the comma-separated values table in Excel, OpenOffice or any other spreadsheet application) :-).
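Something like this, as an untested sketch (it keeps the original assumption that df -g prints five lines per filesystem):

% df -g | nawk '{if (NR % 5 == 1) printf "%s,", $1 ; if (NR % 5 == 2) printf "%s GB,%s GB free,", $1/2/1024/1024, $4/2/1024/1024 ; if (NR % 5 == 4) printf "fstype %s\n", $1}' > df.csv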

Add a regular user account to your DB2 zone:

db2# mkdir -p /export/home/cmihai
db2# useradd -s /usr/bin/zsh -d /export/home/cmihai cmihai
db2# chown cmihai /export/home/cmihai
db2# passwd cmihai
New Password:
Re-enter new Password:
passwd: password successfully changed for cmihai
db2# su - cmihai
db2% cd /opt/IBM/db2/V9.1/bin

Check locale(1) and export LC_ALL=C if needed or db2 will complain:

db2% ./db2fs
couldn't set locale correctly

Make sure you read the install log in /tmp.

Here's a tip though: if you can, use the Graphical installer (ssh -X and run db2setup instead of db2_install).
All that's left now is to add various tuning, limits, ZFS quotas, etc.

If you need to start over, there's always ZFS snapshots or db2_deinstall.
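As a sketch (the dataset name is hypothetical - use whatever zfs list shows for your zone's path):

# zfs set quota=20G rpool/export/zones/db2
# zfs snapshot rpool/export/zones/db2@pre-db2
# zfs rollback rpool/export/zones/db2@pre-db2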

Either way, if you need to create an instance:

# ./db2icrt -s wse -u db2fenc1 db2inst1
Sun Microsystems Inc. SunOS 5.11 snv_90 January 2008
Sun Microsystems Inc. SunOS 5.11 snv_90 January 2008
DBI1070I Program db2icrt completed successfully.

You can now use db2 to create a database and connect to it.

# db2
db2 => CREATE DATABASE test
db2 => CONNECT TO test
Database Connection Information

Database server = DB2/SUN64 9.1.1
SQL authorization ID = DB2INST1
Local database alias = TEST

db2 => CREATE TABLE clients (name char(25), surname char(50))
DB20000I The SQL command completed successfully.
db2 => LIST TABLES

Table/View Schema Type Creation time
------------------------------- --------------- ----- --------------------------
CLIENTS DB2INST1 T 2008-06-11-05.39.58.167896

1 record(s) selected.

db2 => INSERT INTO clients VALUES ('Some','Guy')
DB20000I The SQL command completed successfully.
db2 => SELECT * FROM clients

NAME SURNAME
------------------------- --------------------------------------------------
Some Guy

1 record(s) selected.

Deploying IBM DB2 inside a Solaris 10 Container

1. Creating the ZFS filesystem:
# zfs create rpool/export/zones

2. Configuring the DB2 zone:
# zonecfg -z db2
db2: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:db2> create
zonecfg:db2> set zonepath=/export/zones/db2
zonecfg:db2> set autoboot=true
zonecfg:db2> add net
zonecfg:db2:net> set address=192.168.1.100/24
zonecfg:db2:net> set physical=iwk0
zonecfg:db2:net> end
zonecfg:db2> verify
zonecfg:db2> commit
zonecfg:db2> exit

3. Installing the DB2 zone:
# zoneadm -z db2 install
A ZFS file system has been created for this zone.
Preparing to install zone <db2>.
Creating list of files to copy from the global zone.
Copying <9648> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1319> packages on the zone.
Initialized <1319> packages on zone.
Zone is initialized.
Installation of these packages generated errors:
Installation of these packages generated warnings:
The file contains a log of the zone installation.
4. Listing the zones:
# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
- db2 installed /export/zones/db2 native shared

5. Booting the DB2 zone:
# zoneadm -z db2 boot

6. Logging into the zone:
# zlogin -C db2
[Connected to zone 'db2' console]

Configure the initial system (locale, etc).

7. Install IBM DB2 Database 9:
db2# gunzip db2_v9fp1_ese_solaris_x64.tar.gz
db2# tar xvf db2_v9fp1_ese_solaris_x64.tar
db2# cd ese/disk1/
db2# ./db2_install
Default directory for installation of products - /opt/IBM/db2/V9.1

***********************************************************
Do you want to choose a different directory to install [yes/no] ?
no

Specify one or more of the following keywords,
separated by spaces, to install DB2 products.

CLIENT
RTCL
ESE

Enter "help" to redisplay product names.

Enter "quit" to exit.

Mortal Kombat 4 on Solaris - Wine


Using Wine, DosBOX, DosEMU, GSNEX, GBA, ePSX and various other Windows, DOS and game console emulators, you can get a fair number of fun old games running on Solaris (like StarCraft, the Mortal Kombat series, Final Fantasy 1-8, etc.). Not to mention the whole Doom and Quake 1, 2, 3 series using the open-sourced engines.




Solaris ZFS to ZFS LiveUpgrade

Regular UFS to UFS LiveUpgrade used to take a while to create the boot environment, etc. Complicated :-).
As of Solaris Express Community Edition build 90, you can use LiveUpgrade with ZFS. You can also LU a UFS system to ZFS.
One of the benefits of ZFS root is the ZFS clone command (lucreate -n completes in seconds):

# lucreate -n sxce91
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name .
Current boot environment is named .
Creating initial configuration for primary boot environment .
The device is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name PBE Boot Device .
Comparing source boot environment file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment .
Source boot environment is .
Creating boot environment .
Cloning file systems from boot environment to create boot environment .
Creating snapshot for on .
Creating clone for on .
Setting canmount=noauto for in zone on .
Creating snapshot for on .
Creating clone for on .
No entry for BE in GRUB menu
Population of boot environment successful.
Creation of boot environment successful.

Fixing Java WebConsole ZFS Administration on Solaris Express

Application Error
com.iplanet.jato.NavigationException: Exception encountered during forward
Root cause = [java.lang.IllegalArgumentException: No enum const class com.sun.zfs.common.model.AclInheritProperty$AclInherit.restricted]
Notes for application developers:

    * To prevent users from seeing this error message, override the onUncaughtException() method in the module servlet and take action specific to the application
    * To see a stack trace from this error, see the source for this page

Generated Sun Jun 22 14:22:44 EEST 2008

If this happens to you, you need to set aclinherit to passthrough:

# zfs set aclinherit=passthrough rpool

Now WebConsole ZFS Admin will work.

You also need to make sure the webconsole service is enabled before you can use it:

# svcs -a | grep webconsole
disabled 15:32:25 svc:/system/webconsole:console

# svcadm enable webconsole

# svcs -xv webconsole
svc:/system/webconsole:console (java web console)
State: online since Sun Jun 22 15:34:40 2008
See: man -M /usr/share/man -s 1M smcwebserver
See: /var/svc/log/system-webconsole:console.log
Impact: None.

Using AWK to convert UNIX passwords from HP-UX to Solaris

Converting password hashes from HP-UX 11.11 to Solaris is pretty simple if you are using UNIX crypt passwords (that is, if HP-UX isn't a Trusted System; if it is, it will use bigcrypt passwords longer than 8 characters, and converting those to Solaris UNIX crypt could be problematic).

Here's the gist of it:

On the HP-UX System, we create a test user:

    # useradd test
    # passwd test
    test

Now we convert the passwd file to generate passwd entries for Solaris:

        * # awk ' BEGIN { FS = ":" } { print $1":x:" $3 ":" $4 "::/export/home/" $1 ":/usr/bin/sh" }' /etc/passwd
        * test:x:107:20::/export/home/test:/usr/bin/sh


And we create the shadow file entries, including the password hash:

        * # awk ' BEGIN { FS = ":" } { print $1":"$2"::::::" }' /etc/passwd
        * test:lsDWJo7M.iAhY::::::

Just add them to the password file using /usr/ucb/vipw, edit the shadow file for consistency, and test. Be sure to back up the files and to test using a few users at first.

        * $ su test
        * Password:
        * $ id
        * uid=127(test) gid=120
        * $ whoami
        * test
        * $ echo $HOME
        * /export/home/test
        * $ echo $SHELL
        * /usr/bin/sh

    Mix with some shell scripting and mkdir's and you're set :-). Next time, use LDAP :P.
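A rough sketch of gluing it all together (the output file names are made up; review the generated fragments before adding them with /usr/ucb/vipw):

# awk ' BEGIN { FS = ":" } { print $1":x:" $3 ":" $4 "::/export/home/" $1 ":/usr/bin/sh" }' /etc/passwd > passwd.add
# awk ' BEGIN { FS = ":" } { print $1":"$2"::::::" }' /etc/passwd > shadow.add
# awk ' BEGIN { FS = ":" } { print "mkdir -p /export/home/" $1 }' /etc/passwd > mkhomes.sh
# sh mkhomes.sh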


D-Light DTrace script for Sun Studio 12 in Solaris

Here's a pretty cool tool for developers, similar to the DTrace GUI from Xcode in OS X 10.5 Leopard (Instruments).

It's part of Sun Studio 12.

 

Solaris Performance Monitoring

Performance monitoring tools on Solaris using split GNU screen windows :-).


Cell accelerator boards, NVIDIA GPU servers and HPC

This is pretty interesting considering your basic CPU does something like 30 GFLOPS (around 16 GFLOPS per POWER6 core, 10 GFLOPS for an Itanium core). A Cell board like this does 180 GFLOPS.
(Don't take this as a benchmark or anything. This is just some raw data.)

Some NVIDIA cards do something like 500 (technically the G80 has 128 fp32 ALUs @ 1350MHz with MADD - about 350 GFLOPS), an R600 is supposed to do around 500, and a Realizm 800 (dual Wildcat VPUs) about 700 GFLOPS :-). So yeah, with 16 or so of these cards used right, you could score yourself a place on the TOP500 supercomputers list. "Hey, my 4 graphic stations can beat your 1000-node Xeon cluster!".

And this is no joke: since the GF8 series and the whole NVIDIA CUDA thing, NVIDIA has also started making... erm... servers.

NVIDIA Tesla S870 GPU computing system peaks something like 2TFLOPS.

Meanwhile, one of those "low powered MIPS64 CPUs" in the SiCortex does about 1 GFLOPS :-). But they have clusters of up to 5832 of them.

PCI-E Cell accelerator board:

    * Cell BE processor at 2.8 GHz
    * More than 180 GFLOPS in PCI Express accelerator card
    * PCI Express x16 interface with raw data rate of 4 GB/s in each direction
    * Gigabit Ethernet interface
    * 1-GB XDR DRAM, 2 channels each, 512 MB
    * 4 GB DDR2, 2 channels each, 2 GB
    * Optional MultiCore Plus™ SDK software

Running Symantec Veritas Cluster Server on Windows Vista 64bit

Getting Veritas Cluster Server Simulator (VCS Simulator) to install and run can be a bit of a pain. Here's the deal:

Start a command prompt as Administrator, and install it using:

msiexec /i vcs_simulator.msi

Once the software is installed, download MSVCR70.DLL and put it in the VCS directory (C:\Program Files (x86)\VERITAS\Cluster Manager\bin\).

Then run "Veritas VCS Simulator - Java Console" and "Veritas Cluster Manager - Java Console" as Administrator (Right Click - Run as Administrator) and unblock the Windows Firewall ports (allow exception).

Resource types in Solaris 10 configuration

* net - a network interface. As you remember, when adding such a resource, you have to specify a physically present network adapter card you have in your box, and zone’s network interface will be a virtual interface on this network adapter.
* device - any additional device. Using a device name mask (for instance, /dev/pts*), you can allow a non-global zone to access any devices you have on your actual system.
* fs - a file system. You can grant access to a physical disk or any directory of your actual system to any non-global zone. You can specify a file system type along with mount options, which is very convenient.
* inherit-pkg-dir - a global zone root filesystem directory which is inherited by a non-global zone. By specifying a directory name, you’re pointing to the fact that all the files from this directory of your actual system (global zone) will not be physically copied into the non-global zone, but will instead be inherited. The fact is, files from these directories will be accessible through a read-only loopback filesystem in your non-global zone (thanks, Dan!)
* attr - an attribute. With resources of this type you can create text comments for your zones – these comments might come in handy when you get back to reconfiguring your zone some time later.
* rctl - a zone-wide resource control. At this stage, there are only two parameters of this type - zone.cpu-shares and zone.max-lwps - but there will be more in the future. These parameters allow you to limit the CPU time given to a zone, and limit the max number of lwp processes which can be created in a zone (see the sketch below).
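For instance, giving a zone 20 CPU shares looks roughly like this in zonecfg (the zone name is hypothetical):

zonecfg:myzone> add rctl
zonecfg:myzone:rctl> set name=zone.cpu-shares
zonecfg:myzone:rctl> add value (priv=privileged,limit=20,action=none)
zonecfg:myzone:rctl> end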


Install Solaris 8 zone using flar-archive

solaris# zoneadm -z solaris8 install -u -a /export/solaris8.flar
Log File: /var/tmp/solaris8.install.13597.log
Source: /export/solaris8.flar
Installing: This may take several minutes…
Postprocessing: This may take several minutes…
WARNING: zone did not finish booting.
Result: Installation completed successfully.
Log File: /export/solaris8/root/var/log/solaris8.install.13597.log

In my case the Solaris 8 zone got stuck on sys-unconfig, and so I had to connect to the virtual console of the zone to help it move on:

Here’s how you connect to a zone’s console:

solaris# zlogin -C solaris8

That’s it! The rest was easy – just a few minutes of configuring the network parameters and DNS/NIS settings. Finally, I was able to ssh into the new zone and run uname:

solaris8# uname -a
SunOS solaris8 5.8 Generic_Virtual sun4u sparc SUNW,Sun-Fire-V490

I liked the Solaris 8 Migration Assistant very much. It’s an incredibly quick and easy way to have a whole bunch of Solaris 8 systems virtualized and running on one of the most advanced servers with the most advanced OS – Solaris 10u4.

Set up a Solaris 8 zone

Here’s how you do it:
solaris# zonecfg -z solaris8
solaris8: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:solaris8> create -t SUNWsolaris8
zonecfg:solaris8> set zonepath=/export/solaris8
zonecfg:solaris8> add net
zonecfg:solaris8:net> set address=172.21.7.155/24
zonecfg:solaris8:net> set physical=ce0
zonecfg:solaris8:net> end
zonecfg:solaris8> commit
zonecfg:solaris8> exit
Naturally, your IP and network device name will be different. As of now, our zone is fully configured, but not yet installed.

Solaris 8 Migration Assistant (Project Etude)

First of all, just a few words about the niche for this product. Many companies are rather conservative about their Solaris upgrades. Most systems are still running Solaris 8, if not something older. Quite often this is also dictated by third-party software dependencies – products which were bought and configured for Solaris 8, and which are now so tightly integrated that there isn’t an easy way to migrate them to Solaris 10. Such systems are doomed to a slow but very expensive death. Expensive, because with every year the cost of hardware support for servers capable of running Solaris 8 rises again and again.
That’s where the Solaris 8 zones come in. It’s very easy, really: you create a flar-copy of your existing physical server running Solaris 8, then create a Solaris 8 zone, import your flar-archive and get a virtual copy of your Solaris 8 environment, with all your processes, programs and startup scripts.
To make things easier, it’s even possible to configure your hostid in Solaris 8 zone to match the one of the physical Solaris 8 system, this way no programs running in the zone will even guess that they’ve been virtualized.
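If I remember correctly, the hostid is set as a zonecfg attribute of the solaris8-branded zone, something along these lines (the value here is made up – use the hostid reported by the original physical box):

zonecfg:solaris8> add attr
zonecfg:solaris8:attr> set name=hostid
zonecfg:solaris8:attr> set type=string
zonecfg:solaris8:attr> set value=8325f14d
zonecfg:solaris8:attr> end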
Who knows, maybe I’ll tell you more about this technology some other day, but for now – just the simplest list of actions and commands for your S8MA proof of concept.
Preparation: Solaris 10u4 server and S8MA packages
1. Find and prepare a sparc box with Solaris 10u4. It is important to have the latest Solaris 10 update. Preparations are usually limited to applying a kernel patch, 127111-01 in my case.
2. Download the Solaris 8 Migration Assistant (current version is 1.0) from this location: Solaris 8 Migration Assistant. The 3 packages in the archive are dead easy to install using standard pkgadd.
Here are the packages you’ll get:
SUNWs8brandr Solaris 8 Migration Assistant: solaris8 brand support (Root)
SUNWs8brandu Solaris 8 Migration Assistant: solaris8 brand support (Usr)
SUNWs8p2v Solaris 8 p2v Tool

Solaris 10 patch error codes

Exit code   Meaning

0 No error
1 Usage error
2 Attempt to apply a patch that's already been applied
3 Effective UID is not root
4 Attempt to save original files failed
5 pkgadd failed
6 Patch is obsoleted
7 Invalid package directory
8 Attempting to patch a package that is not installed
9 Cannot access /usr/sbin/pkgadd (client problem)
10 Package validation errors
11 Error adding patch to root template
12 Patch script terminated due to signal
13 Symbolic link included in patch
14 NOT USED
15 The prepatch script had a return code other than 0.
16 The postpatch script had a return code other than 0.
17 Mismatch of the -d option between a previous patch install and the current one.
18 Not enough space in the file systems that are targets of the patch.
19 $SOFTINFO/INST_RELEASE file not found
20 A direct instance patch was required but not found
21 The required patches have not been installed on the manager
22 A progressive instance patch was required but not found
23 A restricted patch is already applied to the package
24 An incompatible patch is applied
25 A required patch is not applied
26 The user specified backout data can't be found
27 The relative directory supplied can't be found
28 A pkginfo file is corrupt or missing
29 Bad patch ID format
30 Dryrun failure(s)
31 Path given for -C option is invalid
32 Must be running Solaris 2.6 or greater
33 Bad formatted patch file or patch file not found
34 Incorrect patch spool directory
35 Later revision already installed
36 Cannot create safe temporary directory
37 Illegal backout directory specified
38 A prepatch, prePatch or a postpatch script could not be executed
39 A compressed patch was unable to be decompressed
40 Error downloading a patch
41 Error verifying signed patch
42 Error unable to retrieve patch information from SQL DB.
43 Error unable to update the SQL DB.
44 Lock file not available
45 Unable to copy patch data to partial spool directory.

How to show future timestamps in Solaris

And now comes the moment to reveal the little trick I was talking about. Even though the standard /bin/ls command won't show you the future timestamps, you can still check them using the /usr/ucb/ls version of the ls command. The syntax is very similar, but you can also see the future timestamps:

solaris$ /usr/ucb/ls -al *myserver1*
-rw-r--r-- 1 bbuser 48 Jan 9 10:59 np_greys@solaris-server.com_myserver1.conn
-rw-r--r-- 1 bbuser 50 Jan 9 10:41 np_greys@solaris-server.com_myserver1.cpu
-rw-r--r-- 1 bbuser 51 Jan 9 10:41 np_greys@solaris-server.com_myserver1.disk
-rw-r--r-- 1 bbuser 53 Jan 9 10:36 np_greys@solaris-server.com_myserver1.memory
-rw-r--r-- 1 bbuser 51 Jan 9 10:41 np_greys@solaris-server.com_myserver1.msgs
-rw-r--r-- 1 bbuser 52 Jan 9 11:16 np_greys@solaris-server.com_myserver1.procs

Looking at them, you can see that BigBrother simply set the modification time for these files to be 45min into the future.
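Incidentally, you can reproduce the same effect yourself with the touch command (the file name is made up; the -t argument format is [[CC]YY]MMDDhhmm):

solaris$ touch -t 200901091145 np_test.conn
solaris$ /usr/ucb/ls -al np_test.conn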

That's it for today – hope you liked this trick!

How to see future file timestamps in Solaris

I know I've spoken about timestamps already, but I'd like to further expand the topic.

While there's a great GNU stat command in Linux systems, there's no such thing in Solaris by default, and so you usually depend on ls command with various options to look at file's creation, modification or access time.

The standard /bin/ls command in Solaris doesn't always show you the full timestamp – usually when it's a time too far in the past or a bit into the future – so today I'm going to show you a trick to work around this and still confirm such timestamps for any file.
Standard ls command in Solaris doesn't always show full timestamps
Here's an example: BigBrother monitoring suite creates np_ files for internal tracking of times to send out email notifications. It deliberately alters the timestamps so that they're set for a future date – that's how it tracks the time elapsed between the event and the next notification about it.
However, not all of these np_ files are shown with their full timestamps, some just show the date, with no time:
solaris$ ls -l *myserver1*
-rw-r--r-- 1 bbuser bbgroup 48 Jan 9 2009 np_greys@solaris-server.com_myserver1.conn
-rw-r--r-- 1 bbuser bbgroup 50 Jan 9 10:41 np_greys@solaris-server.com_myserver1.cpu
-rw-r--r-- 1 bbuser bbgroup 51 Jan 9 10:41 np_greys@solaris-server.com_myserver1.disk
-rw-r--r-- 1 bbuser bbgroup 53 Jan 9 10:36 np_greys@solaris-server.com_myserver1.memory
-rw-r--r-- 1 bbuser bbgroup 51 Jan 9 10:41 np_greys@solaris-server.com_myserver1.msgs
-rw-r--r-- 1 bbuser bbgroup 52 Jan 9 2009 np_greys@solaris-server.com_myserver1.procs

How To Confirm if Your CPU is 32bit or 64bit

Obtaining CPU information from /proc/cpuinfo

Most Linux distros will have the special /proc/cpuinfo file which contains a textual description of all the features your processors have. This is a very useful file – depending on your task it may help you identify any features of your processors, as well as confirm the overall number of CPUs your system has installed.

Most commonly, the following information is obtained from /proc/cpuinfo:

* processor model name and type
* processor speed in MHz
* processor cache size
* instruction flags supported by CPU

Here's how the typical output will look:

processor : 0
vendor_id : GenuineIntel
cpu family : 15
model : 4
model name : Intel(R) Xeon(TM) CPU 3.20GHz
stepping : 3
cpu MHz : 3192.320
cache size : 2048 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 1
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts
acpi mmx fxsr sse sse2 ss ht tm syscall nx lm pni monitor ds_cpl cid cx16 xtpr
bogomips : 6388.78
clflush size : 64
cache_alignment : 128
address sizes : 36 bits physical, 48 bits virtual
power management:

The same block of information will be shown for each CPU visible to your system. There will be 2 processor instances for each physical CPU if hyper-threading is enabled, and there will be 2 or 4 processor entries for each physical CPU on dual- and quad-core system configurations.
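If all you want is the overall number of processor entries, here's a quick sketch:

# grep -c ^processor /proc/cpuinfo
4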
How to confirm the 64bit capability of your CPU in Linux

Based on /proc/cpuinfo file, it is quite easy to confirm whether your CPU is capable of 64bit or not. All you have to do is look at the flags which tell you what instruction sets your CPU is capable of.

All the CPUs on your system will have the same type and therefore support the same instruction sets; that's why in this example the grep command returns 4 similar lines – one for each of the 4 CPU instances found on my system.
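Here's a sketch based on the flags line from the sample above (your CPU's flags will differ, and the line repeats once per logical CPU):

# grep flags /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm pni monitor ds_cpl cid cx16 xtpr

The flag to look for is lm ("long mode"): if it shows up in the flags, your CPU is 64bit capable; if there's no lm, the processor can only run in 32bit mode.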

How to Confirm Disks Capacity in Linux

"Show disk size in Unix" is a very popular request visitors use to arrive at my Unix Tutorial pages. Since I never addressed the question of confirming the number of hard drives available on your system, or the task of finding out a disk's capacity, I'd like to document a quick and easy way of doing just that.

I hope that when someone looks for a way to show disk size, what's really expected is a command to help you confirm the capacity of a disk in gigabytes.
Using fdisk command in Linux

One of the easiest ways to learn a lot about hard drives installed on your Linux system is to use the fdisk command:

suse# fdisk -l

Disk /dev/sda: 145.4 GB, 145492017152 bytes
255 heads, 63 sectors/track, 17688 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 1 262 2104483+ 82 Linux swap / Solaris
/dev/sda2 * 263 17688 139974345 83 Linux

As you can see, there are two sections in the output provided: disk information (capacity and geometry) and disk layout (partitions). The same pattern is repeated if you have more than one disk installed.

What you should look for is the lines starting with the word "Disk": they usually specify the device names for each drive and also provide the capacity in gigabytes. Thus, a time saver is to grep the necessary information from the command above, this way:

suse# fdisk -l | grep Disk
Disk /dev/sda: 145.4 GB, 145492017152 bytes

On a system with multiple disks, the output will look more useful:

redhat# fdisk -l | grep Disk
Disk /dev/sda: 21.4 GB, 21474836480 bytes
Disk /dev/sdb: 4294 MB, 4294967296 bytes

That's it – a very simple way for you to determine the number of disks in your system while also confirming the capacity available for your needs. The fdisk command is actually a very powerful disk management tool which allows you to manage partitions – create and delete them or modify the type of each partition. I will be sure to revisit this command some other time, because the usage above doesn't do this wonderful Unix command any justice.

Shared IP configuration for non-global Solaris zones

By default, non-global zones will be configured with shared IP functionality. What this means is that the IP layer configuration and state are shared between the zone you’re creating and the global zone. This usually implies both zones being on the same IP subnet for each given NIC.

Shared IP mode is defined by the following statement in zone configuration:

set ip-type=shared

Here are all the commands needed to enable it for a zone called s10zone in my example:

solaris# zonecfg -z s10zone
zonecfg:s10zone> set ip-type=shared
zonecfg:s10zone> verify
zonecfg:s10zone> commit
zonecfg:s10zone> exit
solaris#

While I’ve deployed quite a few zones before, it was only recently that I learned what sharing the IP layer configuration means in practical terms: no IP routing within the non-global zone. So if for some reason you want your non-global zone to use a different IP route for connecting to one of the available networks, you really can’t do it in shared IP mode, because your non-global zone can only inherit the routing rules of the global zone.

You still have an option of assigning different IP addresses to different virtual interfaces of a non-global zone, but unless their routing is catered for by the global zone, it won’t be of much use.
Exclusive IP configuration for non-global Solaris zones

Configured using this statement in zone configuration:

set ip-type=exclusive

… this mode implies that a given non-global zone will have exclusive access to one of the NICs on your system.

While for me the most important aspect of such exclusivity was the possibility to configure zone-specific routing, there’s obviously much more offered by this mode:

* DHCPv4 and IPv6 stateless address autoconfiguration
* IP Filter, including network address translation (NAT) functionality
* IP Network Multipathing (IPMP)
* IP routing
* ndd for setting TCP/UDP/SCTP as well as IP/ARP-level knobs
* IP security (IPsec) and IKE, which automates the provisioning of authenticated keying material for IPsec security associations

So here it is – another design lesson for you – make sure you know what kind of networking your zones will need.
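Setting it up is just a couple of zonecfg statements, roughly like this (the zone name here is hypothetical):

solaris# zonecfg -z myzone
zonecfg:myzone> set ip-type=exclusive
zonecfg:myzone> commit
zonecfg:myzone> exit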