Configuring NFS Server in an OpenSSI Cluster
============================================

This is a general document for configuring an NFS server on an OpenSSI 
cluster.  The commands and configuration described in this document 
are for Debian.  Most of the configuration and commands are the same 
across the different distributions, except in a few cases.  Wherever 
required, distribution-specific differences are called out as "NOTE".

NOTE: On Fedora/Red Hat, the equivalent commands for `invoke-rc.d` 
      and `update-rc.d` are `service` and `chkconfig`, respectively. 
      To start or stop a service, use the command-line arguments 
      "start" and "stop".

Requirements:
	HA-LVS must be enabled
	The clustername must resolve to the CVIP address

Limitations:
	NFS server only runs on the init node
	A limited failover capability exists


Server set-up steps:
====================

1. See /usr/share/doc/openssi/README.ipvs for instructions on setting up 
   HA-LVS and the Cluster Virtual IP (CVIP) address.

2. The clustername must resolve to the CVIP you configure, either in your 
   DNS server or in /etc/hosts (see the example below). If you need to 
   change the clustername, edit /etc/clustername and reboot your cluster.
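
   For example, assuming a hypothetical CVIP of 192.168.1.10 and a
   clustername of 'cluster1', the /etc/hosts entry would be:

      192.168.1.10    cluster1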

3. The services "nfs-common", "nfs-kernel-server", "portmap", and "ha-lvs"
   must be started for the NFS server to be functional in the CVIP
   environment.  Those services can be started using 'invoke-rc.d 
   nfs-common start' (with similar commands for the other services) and 
   stopped using 'invoke-rc.d nfs-common stop', as shown in the example 
   below.

   NOTE: On Fedora and Red Hat, the corresponding services to be started
         are "SSInfs" and "nfslock", instead of "nfs-common" and 
         "nfs-kernel-server".  Remember to use the `service` command 
         rather than `invoke-rc.d`.
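
   For example, on Debian the four services can be started in order as
   follows.  Starting portmap first is a safe assumption, since the NFS
   daemons register with it:

      # invoke-rc.d portmap start
      # invoke-rc.d nfs-common start
      # invoke-rc.d nfs-kernel-server start
      # invoke-rc.d ha-lvs start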

4. Review the standard NFS server configuration documentation. This step 
   involves creating a list of exported filesystems and provides more 
   detail about starting the NFS service.

When NFS-mounting from a client, use the clustername, NOT the nodename of any
particular node.  The clustername must resolve to the CVIP in the client's
/etc/hosts or DNS server.

For example, if the clustername of the cluster is 'cluster1':
# mount cluster1:/home /mnt


Failover
========

Limitations:
1. lockd/statd failover is not yet supported
2. Only chard mounts on the init node fail over

You can export any chard CFS filesystem, but failover is only supported for
filesystems that exist on the init node and fail over to the new init node.
Such a filesystem generally has a node option in /etc/fstab, such as
"node=1:2".  The /etc/exports entries MUST have a unique fsid=X option
specified, or they will not be exported.  The value of fsid can simply be
fsid=1 for the first entry, fsid=2 for the second, and so on.

Sample /etc/fstab entry
LABEL=/opt      /opt    ext3    rw,chard,node=1:2     1       2

Sample /etc/exports entry
/opt    *(rw,sync,no_root_squash,fsid=1)
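
After editing /etc/exports, the new entries can be applied without
restarting the NFS server by re-exporting everything:

# exportfs -ra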


Verification of proper functioning of NFS server
================================================

/etc/rc.nodeinfo must have entries that start nfs-common on all nodes and 
nfs-kernel-server on the init node.

A properly functioning NFS server shows the following list when you execute 
"rpcinfo -p <clustername>", either from an NFS client host or on the 
cluster itself:

   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100021    1   udp  32778  nlockmgr
    100021    3   udp  32778  nlockmgr
    100021    4   udp  32778  nlockmgr
    100021    1   tcp  32788  nlockmgr
    100021    3   tcp  32788  nlockmgr
    100021    4   tcp  32788  nlockmgr
    100005    1   udp    779  mountd
    100005    1   tcp    782  mountd
    100005    2   udp    786  mountd
    100005    2   tcp    789  mountd
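
As an additional check from a client, the list of exported filesystems can
be queried with showmount, again using the clustername:

# showmount -e cluster1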


Troubleshooting:
================

If the NFS server does not display the above list when you execute rpcinfo,
verify that:

1. The CVIP is set up properly; see the CVIP configuration documentation.
2. ha-lvs is enabled and its daemon has been started (verify using `ps`, 
   as shown below); see the HA-LVS documentation.
3. /etc/rc.nodeinfo has entries for portmap, nfs-common, and 
   nfs-kernel-server.
4. portmapper has been started (verify using `ps`).
5. mountd is running (verify using `ps`).
6. The clustername resolves, either through /etc/hosts or DNS.
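
A quick way to check for the daemons mentioned above (exact process names
may vary slightly across distributions):

# ps ax | grep -E 'portmap|mountd|nfsd|statd'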

If file paths are listed in /etc/exports but they cannot be mounted from a
client, execute `exportfs -a` on the cluster.
