Wednesday, December 30, 2009

Fast way to unencapsulate rootdisk on Veritas

1. Boot into single-user mode from an alternate boot device, such as CD-ROM, a spare disk, or the network:
ok> boot cdrom -s

2. fsck the root filesystem
# fsck -y /dev/rdsk/c0t0d0s0 (assuming root disk is c0t0d0)

3. mount the root filesystem on /a
# mount /dev/dsk/c0t0d0s0 /a

4. cp /a/etc/system /a/etc/system.vxvm

5. vi /a/etc/system and remove the following two entries:

rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1

6. cp /a/etc/vfstab /a/etc/vfstab.vxvm

7. cp /a/etc/vfstab.prevm /a/etc/vfstab

8. rm /a/etc/vx/reconfig.d/state.d/root-done

9. touch /a/etc/vx/reconfig.d/state.d/install-db

10. reboot
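
If you prefer to do the edit in step 5 non-interactively, something like this gives the same result (a sketch; step 4 already saved the original copy as /a/etc/system.vxvm):

# egrep -v 'rootdev:/pseudo/vxio|vol_rootdev_is_volume' /a/etc/system.vxvm > /a/etc/system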

Monday, December 28, 2009

df: cannot canonicalize .: Permission denied

I often use Solaris Live Upgrade to run the quarterly patching process for one of my customers. It's a very safe way to patch your system, and you also keep an untouched disk available (just in case you need to back out).

 

During this year, we've run into a not-so-well-known issue with Live Upgrade. If you cd into a directory (it usually happens with /var) and try the command below, you get a permission denied message (it seems to happen only with older versions of LU):

 

$ sudo pkginfo -l SUNWluu
   PKGINST:  SUNWluu
      NAME:  Live Upgrade 2.0 10/01 (usr)
  CATEGORY:  application
      ARCH:  sparc
   VERSION:  11.8,REV=2001.10.30.17.21

$ cd /var
$ df -k .
df: cannot canonicalize .: Permission denied

 

This happens because Live Upgrade doesn't properly copy the mount-point directory's permissions (it usually does a chmod 750 on the mount-point directory instead of a 755). It also prevents a normal user from listing/creating files on the file system (which can keep some applications from working properly).

 

With the file system unmounted, the mount-point directory would look like this:

 

$ ls -ld /var
drwx------  41 root     sys         1024 Mar 16  2009 /var
$

 

We opened a case with Sun to help us troubleshoot (unfortunately, they didn't help us find a permanent solution for this issue).

 

So, based on the story above, I decided to develop a permanent fix myself (feel free to use it or suggest any new ideas you might have):

 

1 - This issue only happens on Solaris 8 and/or 9 (I haven't seen anything similar on Solaris 10).

 

2 - If you want to permanently avoid this (and other issues) when you start creating the NewBE, I suggest installing the Solaris 10 version of Live Upgrade on your Solaris 8/9 system (you can find the packages in the following directory tree):

 

Solaris_10/Product/SUNWlur

Solaris_10/Product/SUNWluu

 

Copy the whole directory tree (use tar) to your server and remove the old packages:
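
A minimal sketch of the copy step, assuming the Solaris 10 media is mounted at /cdrom/cdrom0 on the target server and you stage the packages under /var/tmp:

# cd /cdrom/cdrom0/Solaris_10/Product
# tar cf - SUNWlur SUNWluu | ( cd /var/tmp && tar xf - )
# cd /var/tmp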

 

pkgrm SUNWluu SUNWlur

 

Now, install the new packages:

 

pkgadd -d . SUNWluu SUNWlur

 

 

Common errors that happen when using older versions of Live Upgrade:

 

cpio: Error with fstatat() of "opt/ecc/exec/MSR520/diskqueue/SST/GMACBR120.00101806c8be_ENW.que", errno 2, No such file or directory

 

cpio: Cannot open "./usr/emc/API/symapi/ipc/storwatchd", skipped, errno 122, Operation not supported on transport endpoint

cpio: pathname is too long

 

 

3 - If you want to be 100% safe from this issue (even after upgrading Live Upgrade), you can use the following simple script, which my co-worker and I developed:

 

#!/sbin/sh
#
# Script to fix the issue regarding the mountpoint permissions.
#

case "$1" in
'start')
        echo "Fixing mountpoint permissions..."
        for i in `grep -v "^#" /etc/vfstab | awk '{print $3}' | egrep -v '^-$|^/tmp$|^/$|^/proc$|^/dev/fd$'`
        do
                echo $i
                chmod u+rwx,g+rx,o+rx $i
        done
        ;;
*)
        echo "Usage: $0 { start }"
        exit 1
        ;;
esac
exit 0

 

 

Copy this script and save it to a file in /etc/init.d (we named it mountpoint_fix_permissions).

 

Then, do the following to make this script always run when you reboot the server:

 

# chmod 755 /etc/init.d/mountpoint_fix_permissions

# ln /etc/init.d/mountpoint_fix_permissions /etc/rcS.d/S65mountpoint_fix_permissions

 

# ls -li S65mountpoint_fix_permissions /etc/init.d/mountpoint_fix_permissions

    122723 -rwxr--r--   2 root     other        383 Dec 19 19:44 /etc/init.d/mountpoint_fix_permissions

    122723 -rwxr--r--   2 root     other        383 Dec 19 19:44 S65mountpoint_fix_permissions

 

 

This script greps /etc/vfstab for file-system mount points and will chmod 755 all of them before they get mounted (to avoid the issue).
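
To see which mount points the script will touch, you can run its core pipeline by hand (a quick check, nothing more):

# grep -v "^#" /etc/vfstab | awk '{print $3}' | egrep -v '^-$|^/tmp$|^/$|^/proc$|^/dev/fd$'
# ls -ld /var

The first command lists the mount points from /etc/vfstab, minus the ones the script skips; the second shows the current permissions on one of them.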

 

This is what will happen when the script runs:

 

VxVM starting special volumes ( swapvol rootvol var )...

Fixing mountpoint permissions...

/var

/apps

/u001

/u003

/u010

/u011

/u012

/u013

/u014

/u099

/u100

/u110

/u510

 

Now you should be all set to avoid complaints from your customer :-)

 

PS: This issue can sometimes impact application behavior (so make sure you have it fixed).

 

Bug Reference:

 

http://forums.sun.com/thread.jspa?threadID=5065412

 

This is bug #4697677

Wednesday, October 7, 2009

Can't open boot_archive

During a disaster recovery exercise at one of my customers, I had to deal with this frustrating error and tried almost everything to solve it (since we didn't have much time for debugging, I opened a software case with Sun).

This is a sample of the message:

 

Boot device: /pci@0/pci@0/pci@2/scsi@0/disk@2,0:a File and args: -s -r

Can't open boot_archive

Evaluating:
The file just loaded does not appear to be executable.
{0} ok  

 

 

After a few hours debugging the issue with Sun, we came to the conclusion that the disk had a new bootblock (kernel 139555-08), which was looking for the boot archive area (it wasn't there, hence the error message). We had actually made some OS patch changes prior to the DR exercise, but unfortunately had to back them out.

 

The mystery of the whole story is that the image applied to the disk a few days earlier was using an old kernel version (127111-11), so the bootblock compatible with that kernel version should have been there (but it wasn't :\)

 

Anyway, after all that troubleshooting, this is what we did to fix the issue:

 

1) Boot from an old disk (that has an older kernel version - in our case we booted using 127111-11)

2) Install a new boot block:

 

************************************************
# To install the boot block on the root disk:
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/cXtXdXs0
************************************************

 

3) Reboot the server and be happy

 

If you still face issues, you might need to reboot in failsafe mode and sync your boot archive. To do that, go to the {ok} prompt and type:

 

boot -F failsafe

 

A message like this will appear:

 

An out of sync boot archive was detected on /dev/dsk/c1t2d0s0.
The boot archive is a cache of files used during boot and
should be kept in sync to ensure proper system operation.

 

Do you wish to automatically update this boot archive? [y,n,?] y


Updating boot archive on /dev/dsk/c1t2d0s0.
The boot archive on /dev/dsk/c1t2d0s0 was updated successfully.
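
If the failsafe boot doesn't offer that automatic update, you can rebuild the boot archive by hand from the failsafe shell (a sketch, using the same root slice as in the example above):

# mount /dev/dsk/c1t2d0s0 /a
# bootadm update-archive -R /a
# umount /a
# init 6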

 

Reboot and you should be all set :)

 

 

 

Finding the process behind a port number (useful when lsof doesn't help).

Hi Folks,

 

I've been away for many months due to work/vacation but now I'm back (I'll try to post new things on a regular basis :)

 

Some time ago I got a request from a WebLogic guy to kill a process that was using a specific port, but the real issue was: *he didn't know the PID and lsof was not working*

 

So, I started googling around and found a small script that does the job. The script uses one of the proc tools, pfiles, to grep for the port number given on the command line:

 

# ./portfind.sh 161

Greping for your port, please be patient (CTRL+C breaks)

        sockname: AF_INET 10.13.204.20  port: 8161

Is owned by pid 1438

..

        sockname: AF_INET 0.0.0.0  port: 161

Is owned by pid 2474

..

 

 

# ps -ef | egrep '1438|2474'

    root  1438     1  0   Sep 28 ?        0:03 ./snmpmagt /opt/patrol/PATROL/Solaris28-sun4/lib/snmpmagt.cfg NOV

    root  2474     1  0   Sep 28 ?        0:00 /usr/lib/snmp/snmpdx -f 0 -y -c /etc/snmp/conf

    root  2490  2474  0   Sep 28 ?       11:36 mibiisa -r -p 32855

 

 

Here is the script code:

 

 

#!/bin/bash
# $1 is the port number we are looking for

if [ $# -lt 1 ]
then
        echo "Please provide a port number parameter for this script"
        echo "e.g. $0 1521"
        exit 1
fi

echo "Greping for your port, please be patient (CTRL+C breaks)"

for i in `ls /proc`
do
        # note: this also matches ports that merely contain $1 (e.g. 8161 when searching for 161)
        pfiles $i 2>/dev/null | grep AF_INET | grep $1
        if [ $? -eq 0 ]
        then
                echo Is owned by pid $i
                echo ..
        fi
done
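
If the loose match bothers you (note how port 8161 showed up for a search on 161 in the example above), you can anchor the pattern to the end of the line inside the loop; a small tweak, assuming pfiles prints the port number as the last thing on the sockname line:

pfiles $i 2>/dev/null | grep AF_INET | grep "port: $1\$"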

 

Enjoy

Tuesday, March 10, 2009

Installing and configuring Sun Cluster 3.2 x86 on VMware Server

 

This tutorial explains, step by step, how to set up Sun Cluster 3.2 (a two-node cluster) using VMware Server (which is free).

 

This environment is very useful for lab purposes and also as a tool to study for SC 3.2 certification.

 

First of all, you need to download and install VMware Server (I'm currently using version 1.0.8). After installing VMware, we'll need to set up two Solaris 10 VM servers (use the custom option in the virtual machine configuration).

 

 

Now, this is a very important part to pay attention to:

 

1 - You will need to check whether your CPU has Intel Virtualization Technology (Intel® VT) or Pacifica (AMD):
 
 

2 - If your CPU doesn't have support for 64-bit virtual machines (in my case), VMware will show a message stating that 64-bit OSs won't be supported:

 
 
 

3 - If this message doesn't appear, then you probably have a CPU that supports 64-bit virtualization (so you can proceed with Solaris 10 - 64-bits).

 

4 - If you don't have support for 64-bit VMs, you will have to use the 32-bit "Solaris 10" guest type (I will give further instructions on how to download/install the 32-bit modules for Sun Cluster - without this module you won't be able to use SC 3.2 under 32-bit VMs).

 

5 - Memory for the VM - I strongly recommend setting at least 1 GB for each VM if you plan to run Oracle DBs under cluster management (otherwise 512-768 MB should suffice).

 

6 - Network connection - Leave the default option (Use bridged networking - later on we'll setup two additional network adapters that will be used as the transports for the cluster).

 

7 - SCSI Adapter - Choose "LSI Logic"

 

8 - Disk - Choose "Create a new virtual disk", SCSI as virtual disk type and 15 GB of disk space

 

9 - Now we need to set up two additional network adapters for the cluster transport (for that, click on Edit virtual machine settings and then click the Add button).

 

10 - Choose Ethernet Adapter as the hardware type.

 

11 - Choose Custom and VMnet1 (Repeat the steps 9-10-11, but now you will have to choose VMnet2)

 

12 - Go to Host >> Virtual network settings in the menu.

 

13 - Click on Host virtual adapters tab and add a new VMnet -> (VMnet2)
 

 

 
 

14 - Now we need to add another disk that will be the quorum disk (click on Edit virtual machine settings and then click the Add button).

 

15 - Add the disk as a SCSI disk, name it as quorum.vmdk and use around 100MB of disk space.

 

16 - Create a similar VM (another hostname) with the same HW settings (after that, add the quorum.vmdk using the same SCSI ID using the option "Use an existing virtual disk")


17 - Now, another important part - you will need to edit your VM config file to allow SCSI reservations (for that you need to disable disk locking and add the following lines into your VM config file - Hint: VM config file ends with a .VMX extension):

 

disk.locking = "false"
scsi0.sharedBus = "virtual"

 

-- These lines must be added to both VMs' config files --
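
For reference, after the quorum disk has been added, the relevant part of each .vmx might look roughly like this (the scsi0:1 slot number is an assumption - use whatever slot VMware assigned to quorum.vmdk; only the first two lines are added by hand):

disk.locking = "false"
scsi0.sharedBus = "virtual"
scsi0:1.present = "TRUE"
scsi0:1.fileName = "quorum.vmdk"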

 

 

Now we are all set to start installing Solaris 10 and configure Sun Cluster. Boot your VM from the CD-ROM using a recent Solaris 10 ISO image (I'm assuming you have enough knowledge to make a fresh Solaris 10 install, so I will only specify the filesystem sizes to be set during the install):

 

 

  

Wait for the installation/reboot to finish, and we should have two VMs running Solaris 10 (32- or 64-bit). Now we need to install VMware Tools to convert the PCN devices to VMXNET devices (I had some problems while trying to install SC 3.2 using PCN, so I strongly recommend installing the tools).

 

To install VMware Tools, right-click your VM name and choose "Install VMware tools" (keep in mind that you need to unmount any CD-ROM currently mounted - after clicking it, VMware will mount a virtual CD-ROM on the server(s) containing the drivers you will have to install):


/vol/dev/dsk/c0t0d0/vmwaretools 1568 1568 0 100% /cdrom/vmwaretools


Go to /cdrom/vmwaretools (or whatever the mount point is), copy the file to /var/tmp/vmtools, uncompress it and run the installation perl script:

 

# cd /cdrom/vmwaretools

# ls -la
total 2973
dr-xr-xr-x   2 root     sys        2048 Oct 30 22:39 .
drwxr-xr-x   3 root     nobody      512 Mar 12 23:38 ..
-r--r--r--   1 root     root    1519013 Oct 30 22:39 vmware-solaris-tools.tar.gz
 

# mkdir /var/tmp/vmtools

# cp vmware-solaris-tools.tar.gz /var/tmp/vmtools

# cd /var/tmp/vmtools

# gzcat vmware-solaris-tools.tar.gz | tar xvf -
# cd vmware-tools-distrib

# ./vmware-install.pl

 

Hit ENTER for all options (default) and when you finish don't forget to do the following:

 

* Rename your hostname.* files on /etc:

 

# ls -l /etc/hostname*
-rw-r--r-- 1 root root 6 Mar 12 16:48 /etc/hostname.pcn0
# mv /etc/hostname.pcn0 /etc/hostname.vmxnet0

 

 

* Replace entries on /etc/path_to_inst (from PCN to VMXNET):

 

# cp /etc/path_to_inst /etc/path_to_inst.backup ; sed -e 's/pcn/vmxnet/g' /etc/path_to_inst >> /etc/path_to_inst.2 ; mv /etc/path_to_inst.2 /etc/path_to_inst

 

 * Reboot the server (init 6)

 

After the reboot you should have the vmxnet module properly loaded and your vmxnet0 interface up (check with ifconfig -a).

 
*Optional* - I strongly recommend following this step, but you can configure SC without remote access control if you want:
 

To make things easier during the install, create a new SSH key as root (hit ENTER for everything) and copy the public key file to the secondary node (/.ssh).

 

# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (//.ssh/id_dsa):
Created directory '//.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in //.ssh/id_dsa.
Your public key has been saved in //.ssh/id_dsa.pub.
The key fingerprint is:
34:8e:32:54:b4:bb:d2:5e:71:ec:55:01:09:cd:34:a3 root@earth

 

Connect to the secondary node and rename the public key file to authorized_keys.
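
For example, from the primary node (assuming the secondary node is mars, as used later in this tutorial, and that root logins over SSH are allowed - you will be prompted for the root password this one time):

# ssh mars mkdir -p /.ssh
# scp /.ssh/id_dsa.pub mars:/.ssh/authorized_keys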

 

Try to connect via SSH/SFTP and check whether you are asked for a password (if you did everything right, you won't be).

 

 

We are ready to start installing/configuring Sun Cluster. Download Sun Cluster 3.2 x86, upload it to both VMs, unzip it, and run the installer (in the Solaris_x86 subdirectory).

 

 

* Choose option 4,5,6 to install (everything else is default - hit ENTER)

 

 Enter a comma separated list of products to install, or press R to refresh the list [] {"<" goes back, "!" exits}: 4,5,6

 

 Choose Software Components - Confirm Choices

 --------------------------------------------
Based on product dependencies for your selections, the installer will install:
[X] 4. Sun Cluster 3.2

* * Java DB 10.2
[X] 6. Sun Cluster Agents 3.2

 

 

Important Step:
 

 

If your CPU doesn't support IVT or Pacifica (as I explained before), you will have to install the 32-bit modules for Sun Cluster 3.2 (unfortunately I'm not authorized by the author to distribute the package, but you can obtain it by sending an email to Matthias Pfuetzner from Sun Microsystems - as I did).

* State in the email that you want to use it only for *LAB* purposes (he told me that Sun doesn't support 32-bit clusters, so you will only be able to obtain this module by emailing him)
 
When you download the 32-bit module, install it after unpacking by running pkgadd -d SUNWscka (if you do not install it and your VMs are 32-bit, the cluster WILL NOT work and the modules WILL NOT be loaded during the boot - so, remember to install it if your VMs are 32-bit).
 
Add the /usr/cluster/bin directory to your PATH variable in /etc/profile (so that you don't have to type the absolute path every time you need to run a command).
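
For example, append these two lines to /etc/profile on both nodes:

PATH=$PATH:/usr/cluster/bin
export PATH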
 

 

At last! We will configure a cluster now... Before running scinstall, make sure both nodes are listed in each node's /etc/hosts file!
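
Something like this on both nodes (the addresses below are placeholders - use the ones you assigned to your bridged interfaces):

192.168.1.10   earth
192.168.1.11   mars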
 
Run the command scinstall and follow the steps to install a two-node cluster:

 

 

# scinstall

 

  *** Main Menu ***

 

    Please select from one of the following (*) options:

 

      * 1) Create a new cluster or add a cluster node

        2) Configure a cluster to be JumpStarted from this install server

        3) Manage a dual-partition upgrade

        4) Upgrade this cluster node

        5) Print release information for this cluster node

 

      * ?) Help with menu options

      * q) Quit

 

    Option:  1

 

 

  *** New Cluster and Cluster Node Menu ***

 

    Please select from any one of the following options:

 

        1) Create a new cluster

        2) Create just the first node of a new cluster on this machine

        3) Add this machine as a node in an existing cluster

 

        ?) Help with menu options

        q) Return to the Main Menu

 

    Option:  1

 

 

  *** Create a New Cluster ***

 

 

    This option creates and configures a new cluster.

 

    You must use the Java Enterprise System (JES) installer to install

    the Sun Cluster framework software on each machine in the new cluster

    before you select this option.

 

    If the "remote configuration" option is unselected from the JES

    installer when you install the Sun Cluster framework on any of the

    new nodes, then you must configure either the remote shell (see

    rsh(1)) or the secure shell (see ssh(1)) before you select this

    option. If rsh or ssh is used, you must enable root access to all of

    the new member nodes from this node.

 

    Press Control-d at any time to return to the Main Menu.

 

 

    Do you want to continue (yes/no) [yes]? 

 

 

  >>> Typical or Custom Mode <<<

 

    This tool supports two modes of operation, Typical mode and Custom.

    For most clusters, you can use Typical mode. However, you might need

    to select the Custom mode option if not all of the Typical defaults

    can be applied to your cluster.

 

    For more information about the differences between Typical and Custom

    modes, select the Help option from the menu.

 

    Please select from one of the following options:

 

        1) Typical

        2) Custom

 

        ?) Help

        q) Return to the Main Menu

 

    Option [1]: 

 

 

  >>> Cluster Name <<<

 

    Each cluster has a name assigned to it. The name can be made up of

    any characters other than whitespace. Each cluster name should be

    unique within the namespace of your enterprise.

 

    What is the name of the cluster you want to establish?  cluster-lab

 

 

  >>> Cluster Nodes <<<

 

    This Sun Cluster release supports a total of up to 16 nodes.

 

    Please list the names of the other nodes planned for the initial

    cluster configuration. List one node name per line. When finished,

    type Control-D:

 

    Node name (Control-D to finish):  earth

    Node name (Control-D to finish):  mars

    Node name (Control-D to finish):  ^D

 

 

    This is the complete list of nodes:

 

        earth

        mars

 

    Is it correct (yes/no) [yes]? 

 

 

    Attempting to contact "mars" ... done

 

    Searching for a remote configuration method ... done

 

    The Sun Cluster framework is able to complete the configuration

    process with secure shell access (sshd).

 

   

Press Enter to continue: 

 

 

  >>> Cluster Transport Adapters and Cables <<<

 

    You must identify the two cluster transport adapters which attach

    this node to the private cluster interconnect.

 

 For node "earth",

    What is the name of the first cluster transport adapter?  vmxnet1

 

    Will this be a dedicated cluster transport adapter (yes/no) [yes]? 

 

    All transport adapters support the "dlpi" transport type. Ethernet

    and Infiniband adapters are supported only with the "dlpi" transport;

    however, other adapter types may support other types of transport.

 

 For node "earth",

    Is "vmxnet1" an Ethernet adapter (yes/no) [no]?  yes

 

    Is "vmxnet1" an Infiniband adapter (yes/no) [no]? 

 

 For node "earth",

    What is the name of the second cluster transport adapter?  vmxnet2

 

    Will this be a dedicated cluster transport adapter (yes/no) [yes]? 

 

 For node "earth",

    Name of the switch to which "vmxnet2" is connected [switch2]? 

 

 For node "earth",

    Use the default port name for the "vmxnet2" connection (yes/no) [yes]? 

 

 For node "mars",

    What is the name of the first cluster transport adapter?  vmxnet1

 

    Will this be a dedicated cluster transport adapter (yes/no) [yes]? 

 

 For node "mars",

    Name of the switch to which "vmxnet1" is connected [switch1]? 

 

 For node "mars",

    Use the default port name for the "vmxnet1" connection (yes/no) [yes]? 

 

 For node "mars",

    What is the name of the second cluster transport adapter?  vmxnet2

 

    Will this be a dedicated cluster transport adapter (yes/no) [yes]? 

 

 For node "mars",

    Name of the switch to which "vmxnet2" is connected [switch2]? 

 

 For node "mars",

    Use the default port name for the "vmxnet2" connection (yes/no) [yes]? 

 

 

  >>> Quorum Configuration <<<

 

    Every two-node cluster requires at least one quorum device. By

    default, scinstall will select and configure a shared SCSI quorum

    disk device for you.

 

    This screen allows you to disable the automatic selection and

    configuration of a quorum device.

 

    The only time that you must disable this feature is when ANY of the

    shared storage in your cluster is not qualified for use as a Sun

    Cluster quorum device. If your storage was purchased with your

    cluster, it is qualified. Otherwise, check with your storage vendor

    to determine whether your storage device is supported as Sun Cluster

    quorum device.

 

    If you disable automatic quorum device selection now, or if you

    intend to use a quorum device that is not a shared SCSI disk, you

    must instead use scsetup(1M) to manually configure quorum once both

    nodes have joined the cluster for the first time.

 

    Do you want to disable automatic quorum device selection (yes/no) [no]? 

 

 

 

    Is it okay to create the new cluster (yes/no) [yes]? 

 

    During the cluster creation process, sccheck is run on each of the

    new cluster nodes. If sccheck detects problems, you can either

    interrupt the process or check the log files after the cluster has

    been established.

 

    Interrupt cluster creation for sccheck errors (yes/no) [no]? 

 

 

  Cluster Creation

 

    Log file - /var/cluster/logs/install/scinstall.log.1312

 

    Testing for "/globaldevices" on "earth" ... done

    Testing for "/globaldevices" on "mars" ... done

 

    Started sccheck on "earth".

    Started sccheck on "mars".

 

    sccheck completed with no errors or warnings for "earth".

    sccheck completed with no errors or warnings for "mars".

 

 

    Configuring "mars" ... done

    Rebooting "mars" ... done

 

    Configuring "earth" ... done

    Rebooting "earth" ... done

 
 
OK, now you should have both machines working as a cluster (let's check with the scstat command):
 
 
 # scstat

------------------------------------------------------------------
 
-- Cluster Nodes --
                    Node name           Status
                    ---------           ------
  Cluster node:     mars                Online
  Cluster node:     earth               Online
 
------------------------------------------------------------------
 
-- Cluster Transport Paths --
 
                    Endpoint               Endpoint               Status
                    --------               --------               ------
  Transport path:   mars:vmxnet2           earth:vmxnet2          Path online
  Transport path:   mars:vmxnet1           earth:vmxnet1          Path online
 
------------------------------------------------------------------
 
-- Quorum Summary --
 
  Quorum votes possible:      3
  Quorum votes needed:        2
  Quorum votes present:       3

-- Quorum Votes by Node --
 
                    Node Name           Present Possible Status
                    ---------           ------- -------- ------

  Node votes:       mars                1        1       Online
  Node votes:       earth               1        1       Online
 

-- Quorum Votes by Device --
 
                    Device Name         Present Possible Status
                    -----------         ------- -------- ------

  Device votes:     /dev/did/rdsk/d3s2  1        1       Online
 
------------------------------------------------------------------
 
-- Device Group Servers --
 
                         Device Group        Primary             Secondary
                         ------------        -------             ---------
 

-- Device Group Status --
 
                              Device Group        Status
                              ------------        ------
 

-- Multi-owner Device Groups --
 
                              Device Group        Online Status
                              ------------        -------------
 
 
------------------------------------------------------------------
------------------------------------------------------------------
 
-- IPMP Groups --
 
              Node Name           Group   Status         Adapter   Status
              ---------           -----   ------         -------   ------

  IPMP Group: mars                sc_ipmp0 Online         vmxnet0   Online
  IPMP Group: earth               sc_ipmp0 Online         vmxnet0   Online
 
 
------------------------------------------------------------------
 
 
Your cluster is ready! =)
 
Below you can find some "must use" tips taken from Geoff Ongley's homepage, and also an example taken from the ES-345 course material (a simple daemon managed by the cluster - useful for failover/failback simulations):
 
 
 
 
Discovered Problems
 
Interconnect ("Cluster Transport") is marked faulted
 

For example, if you do an scstat, or an scstat -W you see:

  Transport path:   mail-store1:e1000g2    mail-store0:e1000g2    faulted

  Transport path:   mail-store1:e1000g1    mail-store0:e1000g1    Path online

 

(at boot it might be “waiting” for quite some time)

In some cases you can disconnect and reconnect the adapter in VMware. However, in others you may have to be more drastic.

Check that you can ping the other node via this path - if you can, you should be good to run the following commands:

 scconf -c -m endpoint=mail-store0:e1000g2,state=disabled

 

where mail-store0 is your current node, and e1000g2 is the failed adapter. After you’ve done this, you can re-enable it:

scconf -c -m endpoint=mail-store0:e1000g2,state=enabled

 

And you should now have an online path shortly afterwards:

bash-3.00# scstat -W

 

-- Cluster Transport Paths --
 

                    Endpoint               Endpoint               Status

                    --------               --------               ------

  Transport path:   mail-store1:e1000g2    mail-store0:e1000g2    Path online

  Transport path:   mail-store1:e1000g1    mail-store0:e1000g1    Path online

 

All good
 

Cluster Panics with pm_tick delay [number] exceeds [another number]

Try the following:

  1. Stop VMs from being paged to disk in VMware (only use physical memory for your VMs). This is a VMware Server host setting, from memory
  2. Ensure memory trimming is disabled for your VMware Server Sun Cluster guests
  3. On each cluster node, in order, configure the heartbeats to be farther apart, and have a longer timeout:

scconf -w heartbeat_timeout=60000

 

scconf -w heartbeat_quantum=10000

 

Hopefully this will leave you with a much more stable cluster on VMware.

 

Making a Customized Application Fail Over With a Generic Data Service Resource

 

In this task, you can see how easy it is to get any daemon to fail over in the cluster, by using the Generic Data Service (so that you do not have to invent your own resource type).

 

Perform the following steps on the nodes indicated:

 

1. On all nodes (or do it on one node and copy the file to other nodes in the same location), create a daemon that represents your customized application:

 

# vi /var/tmp/myappdaemon
#!/bin/ksh
while :
do
        sleep 10
done

 

2. Make sure the file is executable on all nodes.
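
For example, on each node:

# chmod 755 /var/tmp/myappdaemon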

 

3. From any one node, create a new failover resource group for your application:

 

# clrg create -n node1,node2,[node3] myapp-rg

 

4. From one node, register the Generic Data Service resource type:

 

# clrt register SUNW.gds

 

5. From one node, create the new resource and enable the group:

 

# clrs create -g myapp-rg -t SUNW.gds \
-p Start_Command=/var/tmp/myappdaemon \
-p Probe_Command=/bin/true -p Network_aware=false \
myapp-res

 

# clrg online -M myapp-rg

 

6. Verify the behavior of your customized application.

 

a. Verify that you can manually switch the group from node to node.

 

b. Kill the daemon. Wait a little while and note that it restarts on the same node. Wait until clrs status shows that the resource is fully online again.

 

c. Repeat step b a few times. Eventually, the group switches over to the other node.
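
A few hedged one-liners for steps a and b (the node name mars comes from the cluster built earlier; adjust to your own node names):

# clrg switch -n mars myapp-rg
# pkill -f myappdaemon
# clrs status myapp-res

The first command switches the group to the other node (step a); the second kills the daemon so you can watch clrs status report the resource restarting (step b).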

 
