Configuring Networking with OpenSSI

   TCP/IP networking can be a little tricky in an OpenSSI cluster for two
reasons: first, the network is used as the cluster interconnect, and second,
OpenSSI provides a single cluster name and IP address (the CVIP) in addition
to standard interface addresses and names.  The CVIP and the HA-LVS subsystem
around it are discussed in README.CVIP and, in more detail, in README.ipvs.
NFS management in the cluster, which is closely related to networking, is
discussed in README.nfs.  This note concentrates on the management of the
physical interfaces and the IP addresses associated with them.
   Using an ethernet interface as the cluster interconnect means the system
must set up networking on this interface very early in the boot sequence
(i.e. in the ramdisk, before the root is mounted) so the node can
participate in forming the cluster and deciding who should mount the root.
Because of this, configuring or modifying this interface is different from
the other interfaces in the cluster.
   For each node, one interface is chosen as the cluster interconnect.
The name corresponding to the IP address of that interface is put in
/etc/nodename (which is a CDSL to /cluster/node#/etc/nodename) by the
installation and openssi-config-node scripts.  That name will also be
the hostname for that node (via the /etc/sysconfig/network config file).
/etc/clustertab has the MAC and IP addresses of the cluster interconnect
interfaces for all nodes in the cluster.  /etc/clustertab is managed by
installation and openssi-config-node, and that information is used by
mkinitrd to create a file (etc/boottab) in the RAMDISK, which all nodes
consult during booting.  /etc/hosts may also have the names and IP addresses
of nodes in the cluster, as might an external DNS server.  /etc/dhcpd.conf,
which is generated by mkinitrd, may also have the MAC and IP addresses of
each node.  To determine programmatically which interface is being used as
the cluster interconnect, there is a routine in libcluster called
"clusternode_get_ip" which returns the IP address of the cluster interface
for any given node number.
   It is often important that all the nodes have a route to the world
outside the cluster.  For the initial installation node, such a route would
probably have been set up before the installation of OpenSSI.  As other
nodes are added to the cluster, it is important that each node has a gateway
it can reach.
   There are several utilities one can use to confirm the configuration, and
the output of some of these commands is useful if you are having difficulties.
a. "cat /etc/clustertab" will show you the IP address each node is using as a
   cluster interconnect.
b. On each node, run ifconfig and you should see the corresponding address
   associated with an eth device (say eth0).
c. Run "host <ipaddr>" for each node's IP address and you should get a name
   that matches /etc/nodename for that node (e.g. "onall cat /etc/nodename"
   should give you the unique name for each node in the cluster).
d. If nodes in the cluster are to PXE boot, each node's MAC address and IP
   address as in /etc/clustertab should also be in /etc/dhcpd.conf.
e. You can run "onnode -p # route" or "onnode -p # netstat -r" to check the
   routing table on a given node #.
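The checks above can be sketched as a couple of small shell helpers.  These
are not OpenSSI tools; the parsing ASSUMES a simple whitespace-separated
clustertab layout of node number, MAC address, and IP per line, so check it
against your own /etc/clustertab before relying on it.

```shell
# parse_clustertab FILE -- print "node# ip" pairs from a clustertab-style file.
# ASSUMPTION: each non-comment line is "<node#> <MAC> <IP> ..."; adjust the
# field numbers if your /etc/clustertab uses a different layout.
parse_clustertab() {
    awk '!/^#/ && NF >= 3 { print $1, $3 }' "$1"
}

# For each interconnect address, report the reverse DNS name so it can be
# compared against that node's /etc/nodename (checks a and c above).
check_names() {
    parse_clustertab "$1" | while read -r node ip; do
        echo "node $node: $ip -> $(host "$ip" 2>/dev/null || echo 'no reverse DNS')"
    done
}
```

Run "check_names /etc/clustertab" on any node and compare the reported names
against the output of "onall cat /etc/nodename".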

   Management of the cluster interconnect interface will eventually be
done by the change-node option of openssi-config-node, but for now there are a
few manual steps one can take to make certain changes.  Below we outline how to:
A. change the IP address of the cluster interconnect interface;
B. change the name of the cluster interconnect interface (nodename/hostname);
C. change the MAC address of the cluster interconnect interface (e.g. after
   replacing a card in a node);
D. change the netmask of all the network interfaces in the cluster;
E. change the gateway for the entire cluster or any given node in the cluster.
A discussion of adding and modifying other interfaces follows these topics.

A. Change the IP address of the cluster interconnect interface.
   To change the address, edit /etc/clustertab, run mkinitrd (with all the
proper options) and run ssi-ksync (to get the kernel/ramdisk to all the boot
partitions).
NOTE:
For Red Hat use "mkinitrd --tabonly /boot/initrd-xxxx", where xxxx is the
name of the SSI kernel.
For Debian use "mkinitrd -t /boot/initrd.img-xxxx".

If needed, edit /etc/hosts and update any DNS entries.  For completeness,
it is a good idea to edit
/cluster/node#/etc/sysconfig/network-scripts/ifcfg-eth# if it exists
(on Debian it is /cluster/node#/etc/network/interfaces).  A reboot should
make everything happen.  Note that it is always a good idea to have all
nodes that have local boot partitions up whenever running ssi-ksync.  If a
node is not up it will not see the RAMDISK changes; it might have to be
netbooted into the cluster so that ssi-ksync can be run again.
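The edit step can be sketched as below.  The helper name and the sample
addresses are illustrative, not part of OpenSSI, and the sketch assumes a
whitespace-separated /etc/clustertab.

```shell
# update_interconnect_ip FILE OLD NEW -- rewrite one interconnect IP in a
# clustertab-style file.  The word-boundary match keeps 10.0.0.1 from also
# rewriting 10.0.0.10.  (Hypothetical helper; OpenSSI has no such tool.)
update_interconnect_ip() {
    sed -i "s/\b$2\b/$3/" "$1"
}
```

After running it against /etc/clustertab, rebuild the ramdisk (mkinitrd
--tabonly on Red Hat, mkinitrd -t on Debian), run ssi-ksync, and reboot, as
described above.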

B. Change the name of the cluster interconnect interface.
    You just have to edit /cluster/node#/etc/nodename for the
node "#".  If needed, edit /etc/hosts and update any DNS entries.
A reboot of that node is then needed.
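As a sketch (the helper name is hypothetical; the per-node directory layout
is the one described above):

```shell
# rename_node NODEDIR NEWNAME -- write the new name into <nodedir>/etc/nodename.
# On a real cluster NODEDIR would be /cluster/node#; remember to also fix
# /etc/hosts and any DNS entries, then reboot that node.
rename_node() {
    echo "$2" > "$1/etc/nodename"
}
```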

C. Change of MAC address - very similar to A. above.
    Note that all NICs on the cluster interconnect must be on the same
subnet/physical network.

D. Change of netmask of all network interfaces in the cluster.
    (Note: this has not been tested.)  You would likely only want to change
this if you are moving the whole cluster to a network with a different
netmask.  It is a little tricky.  To do it you must edit the etc/boottab
file in the ramdisk in /boot and then run ssi-ksync to update all the other
copies.

In addition, the network-scripts/ifcfg-eth* NETMASK entries should
be fixed.  You must reboot without doing a mkinitrd, since that would
re-create the boottab file with the netmask of the running interface.

NOTE:
For Debian, edit /etc/network/interfaces instead.
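For the ifcfg-eth* part, a minimal helper might look like this (untested,
like the section itself warns; the function name is illustrative):

```shell
# fix_netmask FILE MASK -- rewrite the NETMASK= entry in an ifcfg-style file.
# Run it against network-scripts/ifcfg-eth* under each
# /cluster/node#/etc/sysconfig; the etc/boottab copy in the ramdisk still
# has to be edited by hand and propagated with ssi-ksync, as described above.
fix_netmask() {
    sed -i "s/^NETMASK=.*/NETMASK=$2/" "$1"
}
```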

E. Change the gateway for the cluster or a given node.
    You can specify a GATEWAY entry in /etc/sysconfig/network (on Debian
this is not applicable) and that will apply to all nodes.  If you need
a different GATEWAY for different nodes, do NOT put it in
/etc/sysconfig/network; instead put it in the
network-scripts/ifcfg-eth# in each /cluster/node#/etc/sysconfig.

Note For Debian:
    The information about network interfaces is available in the file
"/cluster/node#/etc/network/interfaces".  To change the gateway or broadcast
address, add a new interface, etc., edit the above-mentioned file.  The
change will take effect upon reboot.  To make it effective without a
reboot, use "onnode node# ifconfig".  Please see the man pages
interfaces(5) and ifconfig(8).
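A sketch of setting a per-node gateway for the Red Hat layout described
above (the helper name is illustrative):

```shell
# set_gateway FILE GW -- set (or append) the GATEWAY= line in an ifcfg-style
# file, e.g. /cluster/node2/etc/sysconfig/network-scripts/ifcfg-eth0.
set_gateway() {
    if grep -q '^GATEWAY=' "$1"; then
        sed -i "s/^GATEWAY=.*/GATEWAY=$2/" "$1"
    else
        echo "GATEWAY=$2" >> "$1"
    fi
}
```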

    Adding a new interface (non-cluster interconnect) to a node can be
done using the system-config-network command (redhat-config-network on RH9
systems) or the netconfig command on the node you want to add the interface
to.  Note that there appears to be no man page for netconfig; "netconfig
--help" gives some hints.  It is imperative to run it with --device=xxxx
(e.g. --device=eth1); otherwise it will default to eth0, which is likely
your cluster interconnect interface.

NOTE:
For Debian, edit /etc/network/interfaces or use /usr/bin/network-admin
(part of the gnome-system-tools package).
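A tiny guard wrapper can make the --device requirement hard to forget.  The
wrapper itself is just a suggestion; only the --device option is taken from
the document's own description of netconfig.

```shell
# safe_netconfig DEV -- refuse to run netconfig without an explicit device,
# so it can never default to eth0 (usually the cluster interconnect).
safe_netconfig() {
    if [ -z "$1" ]; then
        echo "usage: safe_netconfig <device>, e.g. safe_netconfig eth1" >&2
        return 1
    fi
    netconfig --device="$1"
}
```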


TBD: While most of the information above applies to any network configuration,
   some additional steps are needed in environments where not all the
   nodes in the cluster have an interface which connects directly outside
   the cluster.

This page last updated on Fri Feb 11 23:43:05 2005 GMT