B.3 ARE /ETC/HOSTS, NIS, AND DNS CONFIGURED PROPERLY?
Are multi-homed hosts (hosts that have more than one IP interface) resolving to different official hostnames? If they are, you have most likely seen the aforementioned problems. Configuring multi-homed hosts with separate official hostnames in any naming service causes nothing but problems, especially when used with an NMS. If the official hostname in /etc/hosts does not match that in the NIS hosts map and the DNS, problems follow. Separate official hostnames also add administration to every file used to authenticate a host for a particular service. For NFS, the r-commands, or OpenView products, every official hostname must be entered into each authentication or configuration file (/etc/exports, .rhosts, inetd.sec, netgroups, and so on) to ensure that authentication functions properly when an IP packet arrives from a different interface. The administration time required to configure a host for each service adds up, and makes administration more cumbersome.
B.3.1 Proper Configuration of Naming Services
Whether or not OpenView products are used, standard UNIX services also benefit from properly configured name services. The first requirement is that a forward lookup (hostname to IP address) return all the IP addresses for the given hostname. Only the DNS can do this; it cannot be done with an /etc/hosts file or the NIS hosts map, because both are searched sequentially and stop at the first match. The inability to return all the addresses for a host can prevent authentication between client and server in some products, either because the hostname sent within the IP datagram does not resolve to the source address of the IP packet, or because the source address does not resolve to a hostname listed in the product's authorization file.
The second requirement is that all reverse lookups (IP address to hostname) for the IP addresses on a particular system return the same official hostname, or FQDN. This prevents multiple node names from appearing for the same system in various databases, traps, and so on. It also ensures that authentication between client and server functions correctly no matter which interface a packet is sent from.
Configuring name services as described reduces the amount of administration of service authentication files and provides a more reliable resolution for all system processes, services, and OpenView products.
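The two requirements above can be sketched as a small consistency check. This is an illustration only: the lookup tables below are hypothetical stand-ins for the DNS, and the names and addresses are examples, not real hosts.

```python
# Forward map: DNS can return every address for a hostname.
forward = {"ovwsrv.nsr.hp.com": ["192.168.1.3", "172.31.16.3"]}

# Reverse map: every address must point at the same official FQDN.
reverse = {"192.168.1.3": "ovwsrv.nsr.hp.com",
           "172.31.16.3": "ovwsrv.nsr.hp.com"}

def forward_returns_all(host):
    """Check 1: a forward lookup yields all of the host's addresses."""
    return sorted(forward.get(host, []))

def reverse_is_consistent(host):
    """Check 2: each address reverse-resolves to the same FQDN."""
    return all(reverse.get(ip) == host for ip in forward.get(host, []))

assert forward_returns_all("ovwsrv.nsr.hp.com") == ["172.31.16.3", "192.168.1.3"]
assert reverse_is_consistent("ovwsrv.nsr.hp.com")
```

A multi-homed host that fails either check is a candidate for the problems described above.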
B.3.2 NNM, SNMP, and Hostname Resolution
Some companies purchase NNM to manage their enterprise but refuse to run an SNMP agent on their systems because they believe SNMP is a security issue. That debate will not be taken up here. If there is no running SNMP agent on a managed system, NNM has nothing to interrogate to determine the number of interfaces, IP addresses, and netmasks associated with that system. NNM only knows the IP addresses in its object database. To correlate these IP addresses to a particular system, NNM must do a reverse lookup on each IP address to retrieve a hostname. Hostnames that match are placed in the same node container object in the topology database; hostnames that do not match are deemed separate hosts and added to the topology database as such. Separate hosts are therefore exactly what will be seen in the NNM maps. With no SNMP agent to interrogate, NNM has no way to determine the
Network protocol
System model
Number of interfaces
Protocol of each interface
Subnet mask
IP address(es) of each interface
Loopback address(es)
And more
If no SNMP agent is running, NNM cannot determine if IP addresses are software loopback, non-migratable IP addresses, or migratable IP addresses. All the addresses are deemed non-migratable IP addresses. There will also be no SNMP traps sent from the node.
When NNM has an SNMP agent to interrogate, it retrieves the interface information from the interfaces branch of the MIB-II tree and uses this information to determine the
Network protocol
System model
Number of interfaces
Protocol of each interface
Subnet mask
IP address(es) of each interface
Loopback address(es)
And so on
NNM takes all the IP addresses retrieved from the ifTable of a specific node's SNMP agent and properly associates each interface with the appropriate container object. NNM then determines a hostname by using the following rules:
If a non-migratable[1] software loopback IP address (other than 127.0.0.1) exists and resolves to an IP hostname, that hostname will be used.
[1] A non-migratable IP address is an address permanently assigned to an interface. A migratable IP address, such as one managed by MC/ServiceGuard, is not permanently assigned to an interface on a system; it is available only while its MC/ServiceGuard package is running, on the node on which the package is currently running.
If not, NNM uses the lowest numbered[2] non-migratable IP address that resolves to an IP hostname.
[2] The "lowest numbered" IP address is determined by a byte-by-byte comparison of the IP addresses.
If no IP addresses resolve to an IP hostname, the lowest numbered, non-migratable IP address is formatted as a string and used as a hostname.
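The three rules above can be sketched as follows. This is a simplified illustration, not NNM's actual code; the interface list and resolver table are hypothetical.

```python
def lowest(addrs):
    # Byte-by-byte comparison: compare each dotted-quad octet in turn.
    return min(addrs, key=lambda a: tuple(int(o) for o in a.split(".")))

def pick_hostname(interfaces, resolves):
    """interfaces: list of (ip, is_loopback, is_migratable) tuples.
    resolves: dict mapping ip -> hostname (reverse resolution)."""
    fixed = [(ip, lo) for ip, lo, mig in interfaces if not mig]
    # Rule 1: a non-migratable software loopback other than 127.0.0.1
    # that resolves to a hostname.
    for ip, lo in fixed:
        if lo and ip != "127.0.0.1" and ip in resolves:
            return resolves[ip]
    # Rule 2: the lowest-numbered non-migratable address that resolves.
    resolvable = [ip for ip, lo in fixed if ip in resolves]
    if resolvable:
        return resolves[lowest(resolvable)]
    # Rule 3: the lowest-numbered non-migratable address as a string.
    return lowest([ip for ip, lo in fixed])

ifs = [("192.168.1.9", False, False), ("172.31.16.9", False, False)]
assert pick_hostname(ifs, {"172.31.16.9": "xyzzy.nsr.hp.com"}) == "xyzzy.nsr.hp.com"
assert pick_hostname(ifs, {}) == "172.31.16.9"
```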
When no SNMP agent is running on a system and the discovered IP addresses reverse-resolve to different hostnames, NNM will have an inaccurate topology database. This inaccuracy will also be reflected in the IP map. In addition, other network management software that determines hostnames from reverse lookups may not arrive at the same hostname as NNM. It gets very confusing for operators.
Sometimes SNMP agents can also give NNM an inaccurate topology database, even with properly configured hostname resolution. If a multi-homed host shows up as separate nodes in the NNM map, use the following test to determine if there may be a problem with the agent:
Check to see if all SNMP agents of the same type exhibit the same problem.
Select a node that shows up in the map as separate hosts.
Change both the read and write community strings on the node so that NNM can no longer query the SNMP Agent.
Ensure that all IPs for the node resolve to the same FQDN.
Run xnmsnmpconf -clearCache on the management server.
Run nnmdemandpoll against the node from the management server.
If the two node container objects become one container object, NNM is now relying solely on reverse lookups to determine which IP addresses belong to a node container object, and there may be a problem with the SNMP agent on the system. See if there is an updated agent and ensure that you have the latest NNM patches installed.
B.3.2.1 Properly Configured /etc/hosts
The Official Hostname
The second column in any hosts file is the official hostname to which that IP address belongs. There is only one official hostname for a given system. Always use the FQDN in the second column of the /etc/hosts file and the NIS hosts master map. Hosts outside the system's DNS domain will resolve it as an FQDN, and it is a best practice to always make it one.
Any host outside the DNS domain of another host will use the FQDN to communicate with it. If a process on a client resolves its official hostname to a short hostname through /etc/hosts and sends that name to a server in another domain, the client may not be authenticated. The server's authentication process may take the source address from the client's packet, resolve it to an FQDN through the DNS, compare that resolved hostname to the short hostname that came in the client's IP packet, and deny authentication because they do not match.
The official hostname in the second column of /etc/hosts is also used to determine the domain for Sendmail. The resolver will determine the domain from the official hostname if the domain is not set in /etc/resolv.conf, and Sendmail uses this entry to determine the domain name if the $j macro is not defined in sendmail.cf. Properly setting the official hostname in /etc/hosts can save configuring files elsewhere on every system.
The /etc/nsswitch.conf File
The configuration of the name-service switch file, /etc/nsswitch.conf, determines what service or "database" source will be used for retrieving data within a particular database. It determines the order of use and when to switch from one service to another. The operating system uses this file to retrieve information on a number of items, such as passwd, group, hosts, and so on. Here we will be dealing with the different databases from which to retrieve host information: a host's IP address and a host's official hostname.
The file has a simple syntax and the HP-UX manual page covers this completely. The following syntax will be used for demonstration purposes: <database> : <source> [<criteria>] <source>
The <database> entry can be one of many, such as passwd, group, hosts, and so on. The <source> that can be configured for data retrieval includes files, nis, nisplus, dns, or ldap, and the <criteria> by which the name service will "switch" or not switch from one <source> to another is based on the status of the particular <source>. The <criteria> can be based on the status of SUCCESS, UNAVAIL, NOTFOUND, and TRYAGAIN. Only the configuration of the hosts <database> using files, nis, and dns as a <source> with the <criteria> [NOTFOUND = continue] will be discussed here. It appears to be the best solution overall.
Using the <criteria> [NOTFOUND = continue] instructs the name-service switch to continue to the next <service> if the requested database entry was not found in the database. The following syntax instructs the name service switch to look in the local files (/etc/hosts) first, and if the requested host is not found to "continue" looking in the DNS:
hosts: files [NOTFOUND=continue] dns
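The switch behavior of that line can be sketched like this. The two tables are hypothetical stand-ins for /etc/hosts and the DNS; this is an illustration of the semantics, not the library's implementation.

```python
def lookup(name, sources):
    """sources: ordered list of (table, action_on_notfound)."""
    for table, on_notfound in sources:
        if name in table:
            return table[name]          # status SUCCESS: stop here
        if on_notfound != "continue":
            return None                 # default: stop on NOTFOUND
    return None

etc_hosts = {"ovwsrv": "192.168.1.3"}
dns = {"ovwsrv": "192.168.1.3", "xyzzy": "192.168.1.9"}

# hosts: files [NOTFOUND=continue] dns
order = [(etc_hosts, "continue"), (dns, "stop")]
assert lookup("ovwsrv", order) == "192.168.1.3"   # found in files
assert lookup("xyzzy", order) == "192.168.1.9"    # falls through to DNS
```

Without [NOTFOUND=continue] on the files source, the search would stop at the first source and xyzzy would never be resolved.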
Is this the best configuration for the /etc/nsswitch.conf file? It is if the majority of hosts to be resolved are listed in the /etc/hosts file and the system must perform a large number of resolutions, as an NNM management station does. The DNS is the preferred name service for OpenView products, but it must be properly configured. Proper DNS configuration is found in section B.XX
The primary <source> should be the most accurate and reliable service. Most clients can be configured as in Figure B-2 without impacting performance. Placing DNS or NIS first may require any configuration file used in startup scripts to be configured with an IP address rather than a hostname or alias. No network is configured at boot, so while an interface is being configured there is no way for the system to query the DNS or NIS to resolve a hostname to the address for that interface. Likewise, until the domainname is set and the ypbind process is started, there is no way to query an NIS server for hostname resolution.
Figure B-2. Figure B-2 shows an /etc/nsswitch.conf configuration file that will try to resolve a hostname first within /etc/hosts and then, if not found, use the DNS.
The /etc/hosts File
A properly configured hosts file should look like the hosts file in Figure B-1. The first column is the IP address, the second is the FQDN (the official hostname), and the third column and beyond can be any aliases. This example uses the short hostname in the third column, followed by an alias combining the short hostname and the associated network interface. Using all lowercase prevents other hostname resolution problems.
Figure B-1. A properly configured /etc/hosts file will have an FQDN in the second column with the hostname and aliases following.
Ensure that the first entry line for the local host is the one with the IP on the network to which the default route is set, which is generally the fastest link. Using the /etc/hosts file as configured in Figure B-1 and communicating with ovwsrv from this system (itself), the 192.168.1.3 address will always be used. To connect to the IP address on lan0's interface, use either ovwsrv or ovwsrv-lan0; to connect to the IP address on lan1's interface, use ovwsrv-lan1. Using the alias will always resolve to the IP address to which you want to connect. Configuring NIS and DNS using this methodology (discussed later) will prove effective. Any aliasing scheme can be used in columns 3 and beyond in the /etc/hosts file and the NIS master map.
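The first-match behavior described above can be sketched as a sequential search. The file contents below are reconstructed from the discussion of Figure B-1 and are illustrative only.

```python
# A hosts file laid out as described: IP, FQDN, then aliases.
HOSTS = """\
192.168.1.3  ovwsrv.nsr.hp.com  ovwsrv  ovwsrv-lan0
172.31.16.3  ovwsrv.nsr.hp.com  ovwsrv-lan1
"""

def resolve(name):
    # Sequential search: the first line whose official name or any
    # alias matches wins, just as with /etc/hosts and the NIS map.
    for line in HOSTS.splitlines():
        fields = line.split()
        if name in fields[1:]:
            return fields[0]
    return None

assert resolve("ovwsrv") == "192.168.1.3"       # first entry wins
assert resolve("ovwsrv-lan1") == "172.31.16.3"  # alias picks lan1's address
```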
How Hostname Resolution Affects Route Determination
Based on the configuration of the /etc/nsswitch.conf file in Figure B-2, the local host will resolve the hostname (ovwsrv) from the gethostbyname system call using the /etc/hosts file as ovwsrv.nsr.hp.com, with an IP of 192.168.1.3. Communication with ovwsrv will take place internally based on the routing table in Figure B-3. Packets bound for 192.168.1.3 are routed to 192.168.1.3, which is a host route. The Gateway column is known as the immediate gateway for reaching the destination in the Destination/Netmask column. Some systems may show this route or gateway as 127.0.0.1. If the network cable were removed, the system would still be able to communicate with itself. The same applies to the alias ovwsrv-lan0: the system will resolve ovwsrv-lan0 as ovwsrv.nsr.hp.com with an IP of 192.168.1.3 and communicate internally through lan0 based on the routing table in Figure B-3.
Figure B-3. The output from netstat -rn shows the routing table. This table shows the route a packet will take based on its destination address.
The same functionality exists when communicating with 172.31.16.3 (ovwsrv-lan1), but the only way to resolve a hostname to 172.31.16.3 using the /etc/hosts file in Figure B-1 is to use the alias ovwsrv-lan1. This system will resolve ovwsrv-lan1 to an official hostname of ovwsrv.nsr.hp.com with an IP of 172.31.16.3, and packets bound for ovwsrv-lan1 (172.31.16.3) will communicate internally via the lan1 interface. This communication is based on the routing table in Figure B-3, which shows the route to 172.31.16.3 as an internal route to 172.31.16.3.
Any packet not bound for a local host route in the routing table will be checked against the next immediate gateway. In this instance, the destination address will be checked against the netmasks of the local networks 192.168.1.0/255.255.255.0 and 172.31.16.0/255.255.255.0. When a hostname resolves to an IP address that falls within either of these two networks, but is not 192.168.1.3 or 172.31.16.3, the packet is sent to the gateway for that specific network: 192.168.1.3 (lan0) for addresses in 192.168.1.0 and 172.31.16.3 (lan1) for addresses in 172.31.16.0. The source address will be 192.168.1.3 for all packets destined for 192.168.1.0/255.255.255.0 and 172.31.16.3 for packets bound for 172.31.16.0/255.255.255.0.
If the destination address does not fall within any of the aforementioned routes, it is sent to the default route, 192.168.1.1, with a source address of 192.168.1.3. This illustrates how name resolution determines the route a packet takes to its destination.
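The route selection just walked through can be sketched as a longest-prefix match over the routes of Figure B-3. This is an illustration, not the kernel's implementation; the route entries follow the addresses used in the text.

```python
import ipaddress

routes = [                                   # (destination, gateway)
    ("192.168.1.3/32",  "192.168.1.3"),      # host route (itself)
    ("172.31.16.3/32",  "172.31.16.3"),      # host route (itself)
    ("192.168.1.0/24",  "192.168.1.3"),      # connected network, lan0
    ("172.31.16.0/24",  "172.31.16.3"),      # connected network, lan1
    ("0.0.0.0/0",       "192.168.1.1"),      # default route
]

def next_hop(dst):
    # The most specific (longest prefix) matching route wins.
    best = max((ipaddress.ip_network(n) for n, g in routes
                if ipaddress.ip_address(dst) in ipaddress.ip_network(n)),
               key=lambda n: n.prefixlen)
    return dict((n, g) for n, g in routes)[str(best)]

assert next_hop("192.168.1.9") == "192.168.1.3"  # out lan0
assert next_hop("172.31.16.9") == "172.31.16.3"  # out lan1
assert next_hop("10.1.1.1") == "192.168.1.1"     # default route
```

Which route applies, and therefore which source address the packet carries, follows entirely from the address the hostname resolved to.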
B.3.3 Properly Configured NIS
B.3.3.1 The Domain Name
One of the first things that must be done when configuring an NIS domain is to determine the NIS domain name. Few system administrators know that within Solaris there are system scripts and configuration files (sendmail.main, sendmail.subsidiary, X, and so on) that actually use the NIS domain name to determine the DNS domain and to create the FQDN of the system. If the NIS domain name (returned by the domainname(1M) command) does not begin with a period, the scripts truncate it to the first period (dot) to determine the DNS domain name, then attach that domain name to the hostname from the hostname(1M) command to create an FQDN for the local host. If the NIS domain name is set to a name that is not a sub-domain of the DNS domain, hostname resolution fails for that FQDN. If the NIS domain name begins with a period, the scripts assume the NIS domain name is the actual DNS domain and attach it to the hostname to create the FQDN for the host.
The NIS domain within Solaris (including SunOS) and HP-UX should be configured as a sub-domain of the DNS domain, but it is not defined within the DNS for host resolution. The NIS domain itself is only the domain to which NIS clients bind to retrieve NIS services, but it is associated with the DNS domain by making the NIS domain name a sub-domain of the host's respective DNS domain. This makes NIS configuration flexible, but many administrators configure it incorrectly, adding time to hostname resolution and even impairing system performance in some instances.
Setting up an NIS domain within the DNS domain nsr.hp.com for a group of administrative assistants could be done with an NIS domain of admin.nsr.hp.com. In the hosts file and the DNS, hosts would reside in the nsr.hp.com DNS domain, such as ovwsrv.nsr.hp.com. All systems would be configured to bind to the NIS domain admin.nsr.hp.com. The scripts and configuration files that build the FQDN for lookup will truncate "admin," the NIS sub-domain, from the domain name retrieved from the domainname(1M) command, leaving the DNS domain nsr.hp.com. In essence, hostname + (domain name - NIS sub-domain) = fully qualified domain name.
When using the Solaris operating system, the NIS domain name can be set equal to the DNS domain by making it begin with a period (dot); for example, .nsr.hp.com. It can be set whether or not NIS is actually used by the system. This prevents those Solaris scripts that truncate to the first period from doing so and accurately builds the FQDN of the system for lookup, though the system should also be able to look up the short hostname properly through the other naming services. Without modification of the startup scripts in /sbin/init.d, HP-UX will not set the domain name unless the system is also configured as an NIS client or NIS server.
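The truncation rule can be sketched as follows; this is a simplified reading of the Solaris script behavior described above, with hypothetical names.

```python
def dns_domain(nis_domain):
    # A leading dot means the NIS domain name IS the DNS domain.
    if nis_domain.startswith("."):
        return nis_domain[1:]
    # Otherwise strip everything up to the first dot (the NIS sub-domain).
    head, dot, rest = nis_domain.partition(".")
    return rest

def fqdn(hostname, nis_domain):
    return hostname + "." + dns_domain(nis_domain)

assert fqdn("ovwsrv", "admin.nsr.hp.com") == "ovwsrv.nsr.hp.com"
assert fqdn("ovwsrv", ".nsr.hp.com") == "ovwsrv.nsr.hp.com"
```

Either form of the NIS domain name yields the correct FQDN; an NIS domain that is not a sub-domain of the DNS domain would yield an unresolvable name.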
The CDE Calendar Manager Service Daemon
HP-UX does not have any startup scripts or configuration files that truncate anything from the domain name retrieved from the domainname(1M) command for use with the hostname(1M) command in the creation of an FQDN. The domain name will only be set if an NIS server or client is configured. On both HP-UX and Solaris, however, the CDE calendar manager service daemon (/usr/dt/bin/rpc.cmsd) does create an FQDN as described previously. It will truncate the NIS domain name if it is configured as part of the DNS domain name, combine the result with the hostname from the gethostname() system call to create an FQDN, and place that FQDN in every entry the user creates in the user's callog file in /var/spool/calendar. Both the HP-UX and Solaris CDE rpc.cmsd will create a properly configured FQDN under any of the following circumstances:
Removal of the NIS sub-domain from the domain name returned by domainname(1M), added to the hostname from hostname(1M), results in a resolvable FQDN.
Removal of the leading dot from the domain name returned by domainname(1M), added to the hostname from hostname(1M), results in a resolvable FQDN.
The domain name returned by domainname(1M), added to the hostname from hostname(1M), results in a resolvable FQDN.
If the NIS domain name is not set as a sub-domain of the DNS domain, or set equal to the DNS domain (beginning with a period) in which a host resides, the CDE calendar manager service daemon creates an FQDN for a host that does not exist and places the invalid FQDN in the user's callog file in the /var/spool/calendar directory. As the user adds calendar entries, the callog file grows and more entries contain the invalid FQDN. The calendar manager service daemon tries to resolve the invalid FQDN for every entry in the user's callog file in order to contact the host. Eventually the calendar program will appear to hang, because hostname resolution has to time out for every entry, through every naming service listed in /etc/nsswitch.conf, while trying to resolve the invalid FQDN. It is not noticeable at first, but as a user's calendar gains entries it becomes slower and slower, and eventually the user calls to complain that the calendar program "hangs." This problem is due to the incorrect configuration of the NIS domain name.
On a system with no NIS domain name configured, the CDE calendar manager service daemon uses the short hostname returned by the gethostname() system call in the user's callog file in the /var/spool/calendar directory. The daemon will resolve the short hostname and work correctly.
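The three circumstances listed above can be sketched as a series of attempts; this is a simplified reading of rpc.cmsd's behavior, with a hypothetical resolver table.

```python
# Hypothetical stand-in for the naming services: the only name that
# resolves in this example.
resolvable = {"ovwsrv.nsr.hp.com"}

def cmsd_fqdn(hostname, nis_domain):
    candidates = []
    if "." in nis_domain and not nis_domain.startswith("."):
        # Strip the NIS sub-domain (everything up to the first dot).
        candidates.append(hostname + "." + nis_domain.split(".", 1)[1])
    if nis_domain.startswith("."):
        # Strip the leading dot: the rest is the DNS domain itself.
        candidates.append(hostname + nis_domain)
    # The whole domain name, as-is.
    candidates.append(hostname + "." + nis_domain)
    for fqdn in candidates:
        if fqdn in resolvable:
            return fqdn
    return hostname                     # nothing resolves: short name

assert cmsd_fqdn("ovwsrv", "admin.nsr.hp.com") == "ovwsrv.nsr.hp.com"
assert cmsd_fqdn("ovwsrv", ".nsr.hp.com") == "ovwsrv.nsr.hp.com"
assert cmsd_fqdn("ovwsrv", "bogus.domain") == "ovwsrv"
```

The last case is the failure mode described above: an NIS domain name unrelated to the DNS domain leaves the daemon with a name that never resolves.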
Sendmail and NIS
When the sendmail daemon starts, it tries to fully qualify the host's hostname in order to use an FQDN in the message header for the return address (without site hiding configured) for mail passing through it. Sendmail attempts to determine the FQDN, or canonical name, through the naming services configured in /etc/nsswitch.conf, assigning it to the $j macro, the domain name to the $m macro (version 8), and the short hostname to the $w macro. If it cannot resolve an FQDN for the hostname, it uses the short hostname, and the sendmail daemon will continually generate syslog messages stating that it cannot fully qualify the hostname.
Sendmail is most peculiar when trying to fully qualify the hostname through /etc/hosts or NIS. If Sendmail cannot find an FQDN in the second column of the matching line in the /etc/hosts file or NIS hosts map, it checks for an FQDN in the third column, then the fourth, and so on, until it finds one on that line. It does not matter whether the FQDN is correct; Sendmail uses the first one it finds for $j and $m. If it does not find one, it uses the short hostname.
In SunOS, Sendmail's $m is set to the NIS domain by retrieving the domain name that was set at boot time. If the domain name begins with a period or a plus sign, the first character is truncated and the remainder is set to $m. If the domain name does not begin with a plus sign or a period, the whole domain is assigned to $m. In older Sun versions of sendmail.cf files, such as sendmail.main and sendmail.subsidiary, the rule sets would truncate the first sub-domain of the domain name if it did not begin with a period and assign the rest of the domain to $m.
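Sendmail's column scan can be sketched like this. This is a simplified reading, not sendmail's actual code; the hosts lines are illustrative.

```python
def qualify(short, hosts_line):
    """Scan a matching hosts-file line for the first dotted name."""
    fields = hosts_line.split()[1:]          # drop the IP address column
    if short in fields:
        for name in fields:
            if "." in name:                  # first dotted name wins,
                return name                  # correct or not
    return short                             # no FQDN found anywhere

assert qualify("xyzzy", "192.168.1.9 xyzzy xyzzy.nsr.hp.com") == "xyzzy.nsr.hp.com"
assert qualify("xyzzy", "192.168.1.9 xyzzy xyzzy-lan0") == "xyzzy"
```

With the FQDN in the second column, as recommended earlier, the scan terminates immediately on the correct name.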
B.3.3.1.1 The NIS Hosts Map
The NIS hosts map needs to be configured just like the /etc/hosts file. It provides hostname resolution just as an /etc/hosts file does, only it is a map used by all clients bound to the NIS domain. Just as in the /etc/hosts file, the order of multi-homed hosts within the hosts file used as the source for the NIS master map is extremely important when using the short hostname to connect to a system. Resolution is sequential, just as with the /etc/hosts file. The first entry for each host should be the primary interface to which all other systems will connect when using the host's actual hostname; the first line containing the hostname to be resolved determines the IP address (first column) used to connect to that system. System authentication remains as described earlier in the /etc/hosts section. Use an alias rather than the hostname to connect to a specific interface on a host.
All systems using NIS will resolve a given hostname or IP address to the same host or IP. Though multiple IP addresses may be configured for a single hostname in the NIS map, only one IP will be returned: just as with the /etc/hosts file, only the first one is given. There is no sortlist feature within NIS, as there is with the DNS, to allow sorting of IP addresses for multi-homed hosts in a given order.
Putting It All Together using NIS
Figure B-4 is the NIS hosts map from the NIS server. On the OpenView server ovwsrv.nsr.hp.com, the /etc/nsswitch.conf file is configured as in Figure B-5 and the hosts file as in Figure B-1. The OpenView NNM server will discover the node xyzzy, which has the same /etc/nsswitch.conf configuration as Figure B-5 and a hosts file like Figure B-6. NNM retrieves a list of IPs from the default router's ARP cache, pings 192.168.1.9, queries the SNMP agent, retrieves the interfaces from xyzzy's ifTable, and builds the node container object with two Ethernet LAN interfaces with the IP addresses 192.168.1.9 and 172.31.16.9. With 172.31.16.9 being the lowest numbered non-migratable IP address, NNM attempts to reverse-resolve this IP to find a hostname for the newly discovered node, as follows:
A reverse lookup is used to find a host with an IP of 172.31.16.9; the nsswitch.conf file states to first use the /etc/hosts file for resolution for the hosts name service.
The /etc/hosts file is searched for an IP address of 172.31.16.9.
No entry is found with the IP address of 172.31.16.9; control returns to /etc/nsswitch.conf.
The /etc/nsswitch.conf file states that for hosts name service, continue the search in NIS if the name is not found in the /etc/hosts file. The NIS host map is queried for resolution.
Resolution for the IP address of 172.31.16.9 is found in the NIS map. The official hostname is an FQDN of xyzzy.nsr.hp.com.
Figure B-4. Figure B-4 contains a portion of the NIS hosts map from the NIS server.
Figure B-5. On node ovwsrv.nsr.hp.com, the /etc/nsswitch.conf configuration file will try to resolve a hostname first within /etc/hosts and then, if not found, use the NIS.
Figure B-6. Figure B-6 shows the /etc/hosts file for host xyzzy.nsr.hp.com.
The host is entered into the NNM database with a FQDN, xyzzy.nsr.hp.com. The domain name is truncated from the FQDN and the short hostname, xyzzy, is used as the label of the node object on the map.
Any application or process on the management server ovwsrv that communicates with the host xyzzy.nsr.hp.com will follow the same flow to find the IP for the hostname xyzzy. The process will not resolve the hostname xyzzy.nsr.hp.com by FQDN or short hostname in its local hosts file, so resolution continues to the NIS hosts map. There it retrieves the IP address 192.168.1.9 for either the short hostname xyzzy or its FQDN, because that is the first entry in the list. Based on the routing table back in Figure B-3, the packet will be sent to the immediate gateway 192.168.1.3 and out lan0, where 192.168.1.9 will be listening to pick it up and continue communication. While the two are communicating, the source address of packets from the NNM server, ovwsrv.nsr.hp.com, will be 192.168.1.3 and the source address of packets from the host, xyzzy.nsr.hp.com, will be 192.168.1.9.
Tip
Configuring the NIS master /etc/hosts file as shown in Figure B-4 facilitates the use of h2n or HP's hosts_to_named script: IP address in the first column, FQDN of the host in the second column, and aliases for the IP in the third and following columns. Either script parses a hosts file to create domain name service db files, and this /etc/hosts format, used with hosts_to_named or h2n, creates perfect DNS db files. Using this scheme in an NIS master hosts map allows one hosts file to create both the NIS hosts map and the DNS db files. It will also give the same resolution results in NIS and DNS, the only difference being that NIS will resolve only one IP for a multi-homed host, whereas DNS will resolve all the IP addresses for the host.
B.3.3.1.2 Network File System (NFS) and NIS
Mounting remote file systems using NFS can be done in two ways: static mounts, which are configured in a mount list, /etc/fstab (HP-UX) or /etc/vfstab (Solaris), or automounts, which are configured through an NIS map. Either way, the NFS mount should use the server's closest interface to the client, and there are several ways to ensure that this occurs. Both poor hostname resolution and incorrect NFS configuration will give poor NFS performance and can even cause NFS to fail to work as it should.
Figure B-7 shows a network that contains a router, two NFS clients (xyzzy and ovwsrv), and an NFS server (nfssrv). Each has two interfaces: lan0 on the 192.168.1.0 network and lan1 on the 172.31.16.0 network. The lan0 interfaces are the primary interfaces and the lan1 interfaces are for NFS traffic. To mount the /home directory from nfssrv on the clients xyzzy and ovwsrv, an entry must be made in each client's mount list:
nfssrv:/home /home nfs defaults 0 0          # HP-UX fstab entry
nfssrv:/home - /home nfs - yes intr,bg       # Solaris vfstab entry
Figure B-7. The network diagram in Figure B-7 is used with the supplied text in demonstrating the importance of good hostname resolution and NFS configuration.
If either of the previous entries is used in the respective operating system's mount list, the behavior will not be as expected: NFS traffic takes place over the 192.168.1.0 network, not the 172.31.16.0 network as designed. Both clients resolve nfssrv through the NIS hosts map (as configured in Figure B-4) to 192.168.1.6, because that is the first entry in the NIS hosts map containing the alias nfssrv; all other entries for nfssrv are ignored. All NFS communication will therefore be through lan0, not lan1, based on the server's and clients' routing tables. To get the expected results, a different alias for nfssrv must be used in the mount list:
nfssrv-lan1:/home /home nfs defaults 0 0     # HP-UX fstab entry
nfssrv-lan1:/home - /home nfs - yes intr,bg  # Solaris vfstab entry
These entries resolve nfssrv-lan1 to 172.31.16.6 through the NIS hosts map, because it is the only entry with the alias nfssrv-lan1, and, based on the routing tables of the clients and server, NFS communication between client and server will be over the 172.31.16.0 network through lan1 on both systems. The official hostname remains an FQDN for all IP addresses, and the functionality of OpenView NNM and Operations remains undisturbed.
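Why the alias matters can be seen in a sketch of NIS's first-match resolution. The map contents are reconstructed from the discussion of Figure B-4 and are illustrative.

```python
# A fragment of an NIS hosts map laid out as recommended.
NIS_HOSTS = """\
192.168.1.6  nfssrv.nsr.hp.com  nfssrv  nfssrv-lan0
172.31.16.6  nfssrv.nsr.hp.com  nfssrv-lan1
"""

def nis_resolve(name):
    # Sequential, first match wins: there is no sortlist in NIS.
    for line in NIS_HOSTS.splitlines():
        fields = line.split()
        if name in fields[1:]:
            return fields[0]
    return None

assert nis_resolve("nfssrv") == "192.168.1.6"       # not the NFS network
assert nis_resolve("nfssrv-lan1") == "172.31.16.6"  # the intended interface
```

The bare hostname can only ever reach the first entry's address; the interface alias is unambiguous.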
The Netmasks File
The /etc/netmasks file is used to determine the subnet mask of a network on Solaris. The netmasks file contains the IP network and its respective subnet mask, one network per line:
172.31.0.0   255.255.255.0
192.168.1.0  255.255.255.0
The /etc/netmasks file supports both standard and variable-length subnetting in the network column. Using a single NIS netmasks map for HP-UX NIS clients is not supported; HP-UX NFS does not read the NIS netmasks map, only the /etc/netmasks file. The automounter (AutoFS) on both Solaris and HP-UX uses the networks and netmasks in the /etc/netmasks file to determine the client's local subnet. This ensures the automounter mounts from an NFS server on a local subnet rather than traversing a router for a specific mount point.
Automounted Directories
The automounter is used to automatically mount NFS filesystems as needed and unmount them when no longer in use. Using Figure B-7 as the network and NFS server and client layout, the mount for /home in the NIS automount map is configured so that NFS traffic is now over 192.168.1.0 network:
/home   nfssrv-lan0:/home
All clients in the NIS domain would mount their home directory from nfssrv-lan0 (192.168.1.6). The hosts ovwsrv and xyzzy have no problem with this configuration; all their NFS traffic runs over the 192.168.1.0 network. The node hp712, however, begins to see "238 NFS server not responding" errors in its syslog. These errors occur because the default route on hp712 is set to 172.32.16.254, so hp712 mounts /home on nfssrv-lan0 via routera (172.32.16.254). The source address of the NFS IP packets from hp712 is 172.32.16.18 and the destination address is 192.168.1.6. When the host nfssrv receives a packet, it sees the source address 172.32.16.18. According to nfssrv's routing table, the route to the 172.32.16.0 network is via its immediate gateway, 172.32.16.6, so nfssrv redirects the reply to hp712 via 172.32.16.6. The packet is received by hp712 and acted upon, but hp712 is still waiting for a response from 192.168.1.6, so it appears the NFS server is not responding.
Because the NFS server has multiple interfaces, list all the unique interface aliases in the automount map (replicated file systems). The hostname alone cannot be used when NIS is the name service, because NIS will return only the one IP address (192.168.1.6). Configure the automount map using replicated file systems as shown in this example:
/home    nfssrv-lan0,nfssrv-lan1:/home
In order to use an automount map with replicated file systems, the proper netmasks for the IP networks must be entered in the /etc/netmasks file. This ensures that a client, such as hp712 in Figure B-7, will mount the NFS file system from its local interface (172.32.16.18) to 172.32.16.6, rather than sending packets through routera and having nfssrv reply through a local interface. The OpenView products continue to work properly.
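The local-subnet comparison described above can be sketched in a few lines of Python. This is a simplified illustration of the idea, not the automounter's actual code; pick_local_server is a hypothetical helper, and the addresses follow the Figure B-7 discussion.

```python
import ipaddress

# Networks and masks as they would appear in /etc/netmasks.
NETMASKS = {
    "172.32.16.0": "255.255.255.0",
    "192.168.1.0": "255.255.255.0",
}

def pick_local_server(client_ips, server_ips, netmasks=NETMASKS):
    """Prefer a server address that shares a subnet with the client."""
    nets = [ipaddress.ip_network(net + "/" + mask) for net, mask in netmasks.items()]
    for net in nets:
        if any(ipaddress.ip_address(c) in net for c in client_ips):
            for s in server_ips:
                if ipaddress.ip_address(s) in net:
                    return s
    # No shared subnet: fall back to the first (possibly routed) address.
    return server_ips[0]

# hp712 (172.32.16.18) picks nfssrv's 172.32.16.6 interface, not 192.168.1.6.
print(pick_local_server(["172.32.16.18"], ["192.168.1.6", "172.32.16.6"]))
```

With the replicated-filesystem map listing both aliases, the client ends up mounting from the interface on its own subnet, which is the behavior the /etc/netmasks entries make possible.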
B.3.3.1.3 The r-commands and NIS
The r-commands are made up of clients and servers and act against the remote host specified with the command. The clients are remsh (HP remote shell), rsh (Solaris remote shell), rlogin (remote login), rexec (remote execute), and rcp (remote copy). The servers are remshd (remote shell daemon), rexecd (remote execute daemon), and rlogind (remote login daemon).
Two files can be configured to authorize execution of the r-commands on a remote host: /etc/hosts.equiv and a .rhosts file within a user's home directory. The /etc/hosts.equiv file can globally authorize a host, or specific users from a host, for use of rcp, remsh, or rlogin. A user's .rhosts file authorizes the local user or a specific remote user from a specific host. Each file can be used to deny access as well as to authorize it.
Using either of these authentication files in conjunction with NIS to allow remote access from a specific host requires an entry for the hostname alias of each interface from which a request may come. If access is to be authorized from either interface of the node hp712, then both aliases, hp712-lan0 and hp712-lan1, must be in the /etc/hosts.equiv file or the user's .rhosts file; otherwise, only requests from the alias listed will be authorized.
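As a sketch, the /etc/hosts.equiv file on the server would then carry one line per interface alias of hp712 (the alias names follow the Figure B-7 discussion):

```
# /etc/hosts.equiv: authorize requests arriving from either interface of hp712
hp712-lan0
hp712-lan1
```

Under NIS, omitting either line silently denies requests that arrive from the unlisted interface.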
When the request comes from the client (HP-UX), it appears the server daemon resolves the hosts (aliases) listed in each file (/etc/hosts.equiv or a user's .rhosts) to an IP address and matches it to the source address of the IP packet. In Figure B-7, if only the hostname is used, such as hp712, then only r-commands from 172.18.4.18 will be authorized, because hp712 resolves to 172.18.4.18 on every node. Resolving hp712-lan0 gives the IP 172.18.4.18, and hp712-lan1 resolves to 172.31.16.8. Both aliases are required in the .rhosts file to authorize either interface when using NIS as the naming service. Even when NIS is configured to use the DNS as the resolver for hosts, NIS will return only one IP address for a given host.
Solaris appears to provide authorization by resolving the source IP address to the official hostname and matching the resolved hostname to one in the authorization file (/etc/hosts.equiv or .rhosts). Thus, if IP-to-hostname resolution yields unique official hostnames, each hostname must be in the authorization file. Resolving all the IPs to a single official hostname requires only one entry in the authorization file.
B.3.3.1.4 OpenView Operations and NIS
OpenView hostname and IP resolution works no differently than what has been previously explained, but it relies heavily on the official hostname, especially in an enterprise environment where the OpenView products are configured in Manager of Managers and Distributed Internet Discovery and Monitoring configurations. The information given here is to help the administrator better understand how to set up NIS so that the entire system benefits when resolving hostnames and IP addresses, and how some products use the NIS domain to create an FQDN. Configuring the NIS domain name incorrectly can add administrative overhead and cause hostname resolution failures. In mission-critical applications where correct hostname resolution is imperative and communication between systems must take place within milliseconds, there is no time for invalid data or lookup failures caused by incorrect system configuration. Improper configuration degrades system and application performance by wasting CPU and memory resources.
B.3.3.1.5 OpenView Operations and Agent Communication
Using Figure B-4 as the hosts file or NIS map, an administrator adds a node to the node bank of OpenView Operations using only the IP address 172.31.16.9. OVO resolves the IP address to the hostname.
If an administrator uses the hostname when adding a node to the node bank, OVO resolves the hostname to all possible IP addresses. Using NIS as the naming service supplies only the first address found; using the DNS provides a pop-up menu of all the IP addresses resolved for the hostname.
The administrator should know which IP address is used as the primary interface; here the administrator chooses the address 172.31.16.9. Over which IP network does all communication between the manager and client take place? The answer is 192.168.1.0, because NIS resolves only the first hostname entry found to an IP address, and in the example given here it is the 192.168.1.9 address that is resolved for the node's hostname on the management server.
An agent on any managed node will bind to the first active network interface's IP address. An administrator does not have much control over this; it is usually determined by the order of the network interfaces, which in turn is determined by the motherboard or backplane of the system. It can be overridden by setting the IP address in the opcinfo file.
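As a sketch, the opcinfo override might look like the following. The variable name OPC_IP_ADDRESS and the file path are typical for OVO agents but are assumptions here; verify both against your agent version and platform.

```
# opcinfo (for example, /opt/OV/bin/OpC/install/opcinfo on HP-UX)
# Force the agent to use this address instead of the first active interface.
OPC_IP_ADDRESS 192.168.1.9
```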
If an agent sends an opcmsg to the management server, it includes in the RPC "payload" both its IP address and its hostname as resolved on the client. The management server uses this IP and hostname information to authenticate the agent sending the information: it queries the configured naming services to attempt to match them to a node entry in the node bank. If the match succeeds, the message is accepted. If no match can be established, for example when the agent sent a short hostname and the naming service on the manager returns an FQDN, or when the IP is unresolvable, the message is dropped. It is assumed that the agent is an impostor trying to gain unauthorized access to the server.
The opc.hosts file can be used to establish an authorization match, but it is not meant to replace incorrectly configured naming services. Naming services should be properly configured.
Because NIS returns only one IP for a given hostname, the one matching the first instance of the hostname in the NIS hosts map, the second IP address of the managed node is never resolved using the short hostname. The message is discarded and never placed in the operator browser. A properly configured DNS server will return all IP addresses for a multi-homed host, and this scenario is avoided.
Using short hostnames within the official hostname column of the NIS map can cause havoc when the nsswitch.conf file is configured to use NIS and then the DNS, and the NIS server becomes unavailable. Both NNM and OVO will have set all the hostnames within their respective databases to the short hostname. The DNS returns only FQDNs, so during a configuration check of the nodes, NNM changes all the hostnames within its database from short hostnames to FQDNs. As long as the NIS server is not responding, all events in the NNM alarm browser will resolve the hostname to the FQDN. Events captured by the OpenView agent's trap interceptor will also carry an FQDN, which may or may not match a node in the node bank for a message entry in the message browser. When the NIS server returns to operation, NNM's configuration check of the nodes will change the hostnames in its database back to the short hostnames. There is an OpenView-specific trap for the occurrence of a hostname change; if it is enabled, the number of traps can be enormous.
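The lookup order that produces this behavior would be configured in /etc/nsswitch.conf with a hosts line like the following (a sketch of the problematic ordering):

```
# NIS is consulted first; if the NIS server stops responding, lookups
# fall through to the DNS, which returns FQDNs instead of the short
# official hostnames in the NIS map.
hosts: nis dns
```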
B.3.3.2 OpenView Products and the DNS
OpenView products must have definitive hostname and IP address resolution, especially when configured in a Distributed Internet Discovery and Monitoring or Manager of Managers environment. Every system must resolve each host to the same IP address(es), and each IP address must resolve to the proper and same FQDN. The DNS is the only naming service that is used by all the platforms and operating systems on which the OpenView products run and which OpenView products and agents manage. The DNS is the best naming service for resolving hostnames and IP addresses, whether or not any OpenView products are used in an enterprise. The DNS can provide all the IP addresses for a single hostname, in addition to distinct host aliases across disparate domains. NIS can only look up information within a single NIS domain; NIS does not traverse NIS domains, and there is no NIS hierarchy. Configuring the DNS so that everything works properly is a simple task once it is thoroughly understood.
B.3.3.3 Properly Configuring the DNS
B.3.3.3.1 The /etc/resolv.conf File
The specifics of DNS resolution for the local host are configured within the /etc/resolv.conf file. Its configuration specifies which DNS domain(s) to search and which nameservers to query for hostname and IP address resolution. Later versions of the resolver code, such as that used in HP-UX 11.11, allow for additional behavioral options, such as a sortlist that gives priority to specified IP address ranges on multi-homed hosts.
A typical /etc/resolv.conf file:
domain nsr.hp.com
sortlist 192.168.1.0/255.255.255.0
nameserver 0.0.0.0
nameserver 192.168.1.3
nameserver 10.4.3.2
The domain entry is the local domain name. If this line is omitted, the domain is determined from the hostname returned by gethostname(), taking everything after the first dot. If the official hostname in the second column of the /etc/hosts file is not an FQDN, the resolver assumes the root domain.
The nameserver entry contains the IP address of a nameserver to query. A maximum of three nameservers can be listed in the file; all others are ignored. The entry "nameserver 0.0.0.0" tells the resolver that this host is itself a nameserver and to query itself for resolution. Any other IP address assigned to the system will also work on this line, but seeing the 0.0.0.0 entry immediately signals that the system is a nameserver, without having to mentally resolve an IP address to a hostname. If the /etc/resolv.conf file has only a single domain or search line, this too tells the resolver to use the local system as a DNS server.
The search option sets a list of up to six domains for hostname lookup. The first domain in the search list must be the local domain to enable the use of short hostnames within authentication files, such as .rhosts and inetd.sec. If a host is not found within the first domain in the search list, the resolver continues through every domain in the list until it finds a match. If no match is found in any of the domains, the search either stops or continues with the next naming service as defined by the configuration of /etc/nsswitch.conf.
The sortlist option (HP-UX 11.11, Solaris 8/9) tells the resolver in what order to sort the IP addresses of a multi-homed host, thus allowing connectivity to nodes through specific IP addresses in preference to other IP addresses configured on a system. Up to 10 IP address/subnet mask pairs (the subnet mask is optional), separated by white space, can be specified.
It is highly recommended that all management and collection stations be caching-only name servers. NNM and OVO are forward and reverse address lookup intensive, as well they should be; it is how they begin communication with what they manage.
B.3.3.3.2 Forward Lookups
Forward lookups are "name-to-address" mappings within a DNS domain db file. When a host is resolved by OVO, OVO needs to know every IP address associated with that host. To ensure that specific IP addresses are returned before others, the sortlist option can be configured in the client's /etc/resolv.conf file (if the resolver supports it). A sortlist can also be applied within the named.conf file of the DNS server itself to indicate preferred networks in answers to queries. For the server-side sortlist to take effect, the DNS server must reside on the same network as the client making the request.
The DNS allows for the inclusion of unique names for the same IP address within its forward lookup dbs. This capability allows for the retrieval of all the IP addresses associated with a particular hostname, and of a single IP address for a different hostname. The example in Figure B-8 shows just that. The resolution of ovwsrv using the DNS as configured in Figure B-8 returns two addresses for the host ovwsrv.nsr.hp.com, 192.168.1.3 and 172.31.16.3. The resolution of the alias ovwsrv-lan0 returns only the 192.168.1.3 address, and the resolution of ovwsrv-lan1 returns only the 172.31.16.3 address. This behavior is what most administrators wish to see.
Figure B-8. A correctly configured DNS address file.
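A sketch of the kind of forward db file Figure B-8 describes follows; the record layout is assumed, and the names and addresses come from the surrounding text:

```
; Fragment of the nsr.hp.com forward db file (SOA and NS records omitted)
ovwsrv        IN  A   192.168.1.3
ovwsrv        IN  A   172.31.16.3
ovwsrv-lan0   IN  A   192.168.1.3
ovwsrv-lan1   IN  A   172.31.16.3
```

Because the aliases are separate A records rather than CNAMEs, resolving ovwsrv returns both addresses while resolving an alias returns exactly one.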
If a CNAME record is used in place of an additional A record for ovwsrv-lan0 (or any alias), the DNS will resolve both addresses for ovwsrv. This is not the behavior most administrators wish to see when resolving an alias for a host. A CNAME for a uni-homed host, however, gives the same resolution "look and feel" as /etc/hosts and the NIS hosts map.
Using the DNS for hostname resolution is preferred over NIS because the DNS allows for both forward and reverse address lookups within any DNS domain. NIS, on the other hand, only allows for hostname resolution within the local NIS domain.
B.3.3.3.3 Reverse Lookups
Reverse-address lookups are "address-to-name" mappings within a DNS address db file, and they allow only one unique name per IP address. All IP addresses for a multi-homed host must resolve to the same FQDN in order for all products and services to function properly.
Figure B-9 shows a simulated db.192.168.1 or db.172.31.16 DNS file (without the SOA record), since the example places the same hosts on each network. A resolution of 192.168.1.3 or 172.31.16.3 will return ovwsrv.nsr.hp.com. Any process resolving either IP will receive the same FQDN.
Figure B-9. An in-addr file that could be used for network 192.168.1 or 172.31.16.
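A sketch of the in-addr entries Figure B-9 describes follows; the host numbering is assumed from the text. The same PTR data serves either network, so both 192.168.1.3 and 172.31.16.3 resolve to the same FQDN:

```
; Fragment usable in db.192.168.1 or db.172.31.16 (SOA record omitted)
3   IN  PTR  ovwsrv.nsr.hp.com.
6   IN  PTR  nfssrv.nsr.hp.com.
```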
B.3.3.3.4 Testing Resolution: nsquery versus nslookup
On HP-UX, either the nslookup or the nsquery command will traverse the /etc/nsswitch.conf file and report the source (/etc/hosts, NIS, or DNS) from which the resolution came. The nslookup command is not accurate when it comes to forward address lookups of multi-homed hosts within the DNS: it "round robins" the IP addresses of multi-homed hosts whether or not the sortlist option is configured in the /etc/resolv.conf file. The exception is when one of the resolved IP addresses is on the same network as both the host requesting resolution and the DNS server itself (HP-UX only; Solaris round robins). HP-UX 11.x provides the nsquery command in /usr/contrib/bin, which gives more accurate resolution. The nsquery command traverses the /etc/nsswitch.conf file for the services to query and sorts the returned IP addresses according to the sortlist option configured in /etc/resolv.conf.
The nslookup command provided with Solaris does not traverse the /etc/nsswitch.conf file; it queries only the DNS. If a hostname is not resolved using nslookup on Solaris, either the resolver is not configured, the configured DNS servers are not operational, or the host does not exist in the DNS as queried. It does not mean that the host does not exist within /etc/hosts or NIS. A "ping -s <hostname>" will resolve <hostname> using the /etc/nsswitch.conf file, and the ICMP echo will show the IP address to which it resolved. The nsquery command is not provided with the Solaris operating system. The ping command does not state where the resolution came from, but it does state the IP address it is pinging for the host given on the command line.
The dig command can also be used, but it currently must be downloaded for either platform.
B.3.3.4 Sendmail and the DNS
A mail server using sendmail relies on the accuracy of the DNS to deliver electronic mail as much as an OpenView management server does to manage the nodes and networks within the enterprise. Sendmail uses hostname naming services, such as the DNS, to retrieve the IP address of the destination host and to determine the host from which mail came, or the host(s) through which it has been relayed. Only through the DNS can sendmail determine whether there is a mail exchanger (MX) record for the destination host. An MX record tells sendmail to deliver the mail to a specific host or hosts. This can be the same host, or a host specifically configured to receive mail, such as a mail host or mail server.
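An MX record in the forward db file might look like the following sketch; the mailhost name and its address are hypothetical:

```
; Mail for the domain is delivered to mailhost.nsr.hp.com
nsr.hp.com.   IN  MX  10  mailhost.nsr.hp.com.
mailhost      IN  A   192.168.1.10
```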
If name resolution is configured to use the DNS and sendmail cannot resolve an FQDN through NIS or the /etc/hosts file, sendmail will try to resolve the $j and $w macros through the DNS. If the host is a mail server or mail host, it needs the DNS to deliver and receive mail.
B.3.3.5 NFS and the DNS
Mounting NFS file systems using hostname resolution from the DNS is no different from using the /etc/hosts file and NIS hosts map as described. Configuring alias names with an additional IP address within the forward address DNS db resolves the alias to a single IP address. Using the hostname alias in a static mount within /etc/fstab or within an automount map resolves the alias to the appropriate IP address for mounting through the desired network interface on the NFS server.
The benefit of using the DNS in conjunction with the replicated filesystems option is that the automounter configuration only requires the actual hostname. DNS will return all the IP addresses for the multi-homed host.
/home    nfssrv:/home
This also requires the /etc/netmasks file on the client to be configured with the correct networks and netmasks. The previous line in the automount map is configured to use the actual hostname of the NFS server, nfssrv. The /etc/nsswitch.conf file on the client hp712 is configured to use the hosts file first and then the DNS server for hostname resolution. The /etc/netmasks file on hp712 contains the following for this scenario:
172.31.0.0    255.255.255.0
172.18.0.0    255.255.255.0
Refer to Figure B-7 for the network topology.
When the automounter on hp712 needs to mount a specific directory, such as /home from the NFS server nfssrv, the automounter resolves the hostname nfssrv and receives two IP addresses. The automounter then, through an algorithm using the /etc/netmasks file, determines that the 172.31.16.6 address of nfssrv is local and that the interface responds (is up). The client hp712 requests the mount from nfssrv at 172.31.16.6 over the 172.31.16.0 network through lan1, based on its own routing table, which shows a local network route. All NFS communication for this mount point takes place between the NFS server and client over the 172.31.16.0 network. The source and destination addresses of packets between the two systems will be 172.31.16.6 and 172.31.16.18, and each system's routing table states that 172.31.16.0 is a local route out of a local interface.
For hosts such as xyzzy and ovwsrv, whose interfaces are both on the same networks as the NFS server, the same automount map configuration for /home results in the mount occurring over either interface.
B.3.3.6 The r-commands and the DNS
The r-commands function the same way with the DNS as they do with an /etc/hosts file or the NIS hosts map. The value added by the DNS is again the ability to retrieve multiple IP addresses for a single hostname. When using NIS for hostname resolution, all host aliases must be in the /etc/hosts.equiv file or a user's .rhosts file to allow remote access to a node from any interface of a specific system; the more interfaces, the more entries required. Using the DNS as the hostname naming service requires only one entry for the host within /etc/hosts.equiv or a .rhosts file, because all the addresses can be returned for the host.
Having additional address records for host aliases, such as hp712-lan0, still allows for "r-commanding," telneting, or pinging to a specific interface. The resolution of the hostname alias through the DNS (just as with NIS) returns the specific IP address. Resolution of the hostname alias through the DNS also allows for the denial of r-commands from specific interfaces, whether the host alias is configured in the authorization file itself or added to a netgroup designed explicitly for use within the authorization file to deny access from those interfaces. The reverse address lookup always returns the same official hostname.
B.3.3.7 Traceroute
Many will believe that the traceroute command is now "broken," because configuring each IP of a multi-homed host to reverse-resolve to the same hostname means it no longer resolves to the hostname alias configured as host-interface. This is true; it resolves to the actual hostname of the multi-homed host rather than the alias. But the command still works as designed.
The standard traceroute command resolves only the IP of the interface from which the packet exits a host; it does not resolve the IP of the interface through which a packet enters a host. When the traceroute command begins printing an asterisk (*), it is always either the "goesinta" or "goesouta" interface on the next hop after the last good hop. The command does not report what the next hop would be, so a login to the last good hop is required to determine the route the packet would take. That telnet will most likely be to the loopback interface, which would also be aliased in the preferred host-naming service.
HP provides a traceroute-like command named findroute with NNM. The program traces the route via SNMP and gives the hostname and both the entering and exiting interfaces the packet traversed through each multi-homed host to the destination. It also allows for the tracing of packets between two disparate systems other than the host from which it is run. If the SNMP agent on a host in the route is not accessible to findroute, the trace fails at the first host whose SNMP agent is not accessible.
Using findroute to find the route from ovwsrv to hp712 on the sample network in Figure B-7 would occur like this: ovwsrv is configured to use the DNS, and its resolver has a sortlist option naming 172.18.0.0 with a netmask of 255.255.255.0 as the preferred network. Running findroute on ovwsrv to find the route to hp712 results in the resolution of the 172.18.4.18 IP address for hp712.nsr.hp.com. Based on the routing table in ovwsrv, the packet will leave ovwsrv out lan0 to the default router routera.nsr.hp.com (192.168.1.254) and out routera.nsr.hp.com's 172.18.4.254 interface to hp712.nsr.hp.com (172.18.4.18). The output of findroute on the command line would look like this:
findroute hp712.nsr.hp.com
Source            Source Address   Next Hop          Next Hop Address
ovwsrv.nsr.hp     192.168.1.3      routera.nsr.hp    192.168.1.254
routera.nsr.hp    172.18.4.254     hp712.nsr.hp      172.18.4.18
Note that both interfaces of routera are included in the "found route." If the 172.18.4.254 interface of routera is not up, the 192.168.1.254 interface is still displayed as the next hop. Standard traceroute provides only the "goesouta" interface, and if that interface is not functional, traceroute will not resolve the IP to the name of the router; a splat, or asterisk, is displayed instead.
Executing findroute from the NNM GUI provides the output in a window and highlights the route on the NNM map, including the interfaces it traversed. Opening a multi-homed container object shows the highlighted interfaces. If the container object is opened as a separate submap, the interface can be selected, and the interface in the multi-homed host and the connector line on the IP Map will be highlighted.
B.3.4 Windows NT and Hostname Resolution
Hostname resolution within Microsoft Windows products resolves the names of TCP/IP resources when the resource does not connect using NetBIOS. The hostname naming service can be a local hosts file or the DNS. The location of the local hosts file is Windows product dependent; for Windows NT or Windows 2000, it is found in the %Systemroot%\System32\Drivers\etc directory. The IP address and hostname entries within the hosts file are configured just as described in previous paragraphs for the UNIX hosts file.
The resolution of a host within both Windows NT and Windows 2000 uses the following order:
Local host file
DNS
NetBIOS
The order of using the DNS or NetBIOS can be changed by modifying the following registry entry (Windows NT):
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
\DnsNbtLookupOrder
Value Type: REG_SZ - Character string
Valid Range: 0 (DNS first) or 1 (NetBIOS first)
Default: 0 (DNS)
A value of 0 specifies that DNS name resolution takes priority over NBT name resolution; a value of 1 places NBT name resolution ahead of DNS name resolution. Windows 2000 resolves through the DNS by default. This can be changed via the same registry entry listed previously.