HP OpenView System Administration Handbook [Electronic resources]: Network Node Manager, Customer Views, Service Information Portal, HP OpenView Operations


Tammy Zitello

3.2 DEFINE THE MANAGEMENT DOMAIN


There are many management "domains" when designing network and systems management systems. Each domain is uniquely defined, but they can also overlap each other. A domain can be defined by a geographical location, such as a building, city, state, country, or any combination of these. It can be administrative, such as who is responsible for maintaining the network and systems management system for a specific area. Networks, protocols, operating systems, or firewall boundaries can also define domains. These are also called demarcation points.

It's important to begin managing a small area, such as the local network, and expand the management domain slowly. Don't venture out into Distributed Internet Discovery and Monitoring (DIDM) out of the box. Define exactly what is to be managed.

3.2.1 What Will Be Managed?


Nodes and Networks

Define which nodes are going to be managed with NNM and OVO. If the node will be managed with OVO, it must be managed by NNM in order to receive interface and node up/down traps through the trap template into the OVO message browser. Knowing the total number of nodes will determine the licensing requirements for NNM and OVO as well as other OpenView products.

Determine which nodes are most critical. Use this information to define the polling interval for the nodes.
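One way to make the criticality decision concrete is to map each tier of nodes to a status-polling interval. The tier names and intervals below are purely illustrative, not NNM defaults; use them as a planning worksheet, not as product configuration.

```python
# Hypothetical planning sketch: map node criticality to a polling interval.
# These tiers and values are assumptions, not NNM defaults.
POLL_INTERVALS = {
    "critical": 60,    # e.g. core routers and firewalls: poll every minute
    "important": 300,  # e.g. application servers: every 5 minutes
    "standard": 900,   # e.g. desktops and printers: every 15 minutes
}

def polling_interval(criticality: str) -> int:
    """Return the polling interval in seconds for a criticality tier;
    anything unrecognized falls back to the standard interval."""
    return POLL_INTERVALS.get(criticality, POLL_INTERVALS["standard"])

print(polling_interval("critical"))   # 60
```

Shorter intervals mean more ICMP/SNMP traffic, so the critical tier should stay small.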

Using the managed nodes' IP addresses, the managed networks can now be determined. When NNM discovers an interface with an IP address, it creates a segment and network object for the IP/netmask combination in which that interface resides. Are any networks more critical than others? Use this information to modify the polling interval for those networks.
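Deriving the network object from an IP/netmask pair is simple arithmetic, and sketching it helps when enumerating the networks your node list will pull into the topology. This is an illustration of the calculation, not NNM's internal code:

```python
import ipaddress

def network_for(ip: str, netmask: str) -> str:
    """Compute the network an interface belongs to, mirroring the
    IP/netmask combination NNM uses when it creates a network object."""
    iface = ipaddress.ip_interface(f"{ip}/{netmask}")
    return str(iface.network)

print(network_for("192.168.10.37", "255.255.255.0"))  # 192.168.10.0/24
```

Running this over the planned managed-node list yields the set of networks NNM will create, which feeds directly into the polling-interval decisions above.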

IPX Nodes

Will there be a need to manage Novell Networks using the IPX protocol? Discovering IPX protocol systems can only be done with NNM on Windows NT. These collected objects can be sent to NNM on UNIX running an enterprise license.

DHCP Hosts

How many network addresses are handled by DHCP, and will they need to be discovered and managed? NNM will discover employees' laptops when they are plugged into the network in the morning and obtain an IP address; when the employees take the laptops home in the evening and shut them down, NNM will show those systems as down. If such addresses need to be managed, they must be configured within NNM as DHCP addresses to be handled properly.
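As part of the inventory, it helps to be able to test whether a given address falls inside a DHCP scope, so it can be flagged for DHCP handling rather than treated as a fixed node. The scope below is a hypothetical example; the real ranges come from your DHCP server configuration:

```python
import ipaddress

# Illustrative DHCP scope -- substitute the ranges from your DHCP server.
DHCP_RANGE = (ipaddress.ip_address("10.1.20.100"),
              ipaddress.ip_address("10.1.20.200"))

def is_dhcp_address(ip: str) -> bool:
    """True if the address falls inside the DHCP scope and should be
    configured in NNM as a DHCP address rather than a fixed node."""
    addr = ipaddress.ip_address(ip)
    return DHCP_RANGE[0] <= addr <= DHCP_RANGE[1]

print(is_dhcp_address("10.1.20.150"))  # True
print(is_dhcp_address("10.1.20.50"))   # False
```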

Level-2 Discovery

Is level-2 discovery a requirement for the network management system? Depending on your type of network, level-2 discovery will add hundreds if not thousands of objects into the NNM object database. NNM may show unused switch ports on some types of network switches in a down state, which will propagate that status upward through the submaps.

OVO Managed Nodes

How many nodes will be managed by OVO? The status of a system within the OVO Node Bank is based on the status of the messages for that system in the operator's active message browser.

In order to ensure that only meaningful alarms are displayed within an operator's active message browser, decide which two or three things (outside of Interface Up/Down and Node Up/Down) are "must-know" and must be managed through OVO's actions, commands, and monitors. Start by configuring a small set of alarms; otherwise you'll be deluged with unimportant messages that will begin to increase the size of the history messages table in the OVO database.

All acknowledged active messages are moved to the history messages table, the largest table within the OVO database. The number of messages received at the management server is based on the number of managed nodes and the number of templates distributed to those nodes. Messages from the nodes do not only report a problem; they can also report that a problem has been rectified, or confirm normal behavior. The distributed templates have actions, commands, and monitors assigned that send messages from the managed node to the management server based on what the template checks. When, how often, and how many messages are sent from a node is entirely up to the person who configured the template.

How fast the history messages table grows depends on the number of received and acknowledged messages and how often the history messages are downloaded out of the table. The download frequency in turn depends on how long the information must be kept for examining messages over time, looking for troublesome nodes, or spotting repeated events.

If the plan is to keep a lot of data for trending analysis, make sure the disk on which the database resides is fast and there is plenty of space.
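A back-of-the-envelope estimate of history-table growth can be made from the factors above. All of the input figures here are assumptions you must replace with measurements from your own environment:

```python
def history_growth_mb(nodes, msgs_per_node_per_day, avg_msg_bytes, retention_days):
    """Rough size of the history messages table between downloads.
    Every parameter is an estimate to be measured in your environment."""
    total_bytes = nodes * msgs_per_node_per_day * avg_msg_bytes * retention_days
    return total_bytes / (1024 * 1024)

# e.g. 200 nodes, 50 acknowledged messages/node/day, ~2 KB each, kept 90 days
print(round(history_growth_mb(200, 50, 2048, 90)))  # ~1758 MB
```

Even these modest assumptions produce well over a gigabyte of history data, which is why trimming unimportant messages at the template level pays off.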

3.2.2 Collection Station Requirements


Defining the number of required collection stations and management stations is based upon the overall design of the NMS itself. Does the design require overlapping management domains? The number of nodes within each domain, license requirements, slow network links, and so on are all factors in designing the system. Collection stations are used by NNM to remotely discover and monitor the network and report node status and traps back to an NNM management station.

3.2.2.1 Collection Domains

Use the defined management domains to determine the number of collection stations required to manage the enterprise. These management systems will be the "collection" domains for use with Distributed Internet Discovery and Monitoring. Any collection station is a management station. The difference between a management station and a collection station is that an operator actively uses the NNM GUI on a management station to access NNM or integrated management products.

3.2.2.1.1 Will there be any Overlapping Domains?

Overlapping collection stations provide for monitoring redundancy, especially when the two collection stations use separate routes to the same destination. It also provides for continued monitoring in the event that one collection station requires maintenance or has experienced a failure. Don't confuse overlapping collection stations with automatic fail-over in a Manager of Managers situation. Overlapping collection domains for monitoring do not require an enterprise license. There are two management stations monitoring the same networks and/or nodes, but they do not replicate object data from one to another.

3.2.2.1.2 Will there be any Collection Station Fail-over?

Which collection stations need another station to fail over for them? This scenario requires an enterprise license to operate. Here a management station "manages" another management station and uses it as a collection station. The acting collection station sends its object database through a topology filter (if defined) to the managing station. If no topology filter is defined, it sends everything it is responsible for managing. The acting collection station does all status monitoring for those nodes. After the topology is replicated, only changes in topology or status are sent to the station(s) that manage it. If the collection station is no longer accessible by the management station, the management station can assume status polling of the collection station's nodes.
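The topology-filter behavior described above can be sketched abstractly: with no filter, the collection station forwards its entire object database; with a filter, only the matching subset is replicated. This is a conceptual illustration, not NNM's replication code:

```python
def replicate_topology(objects, topology_filter=None):
    """Model of what a collection station sends to its managing station:
    everything it manages when no topology filter is defined, otherwise
    only the objects that pass the filter."""
    if topology_filter is None:
        return list(objects)          # no filter: send everything
    return [o for o in objects if topology_filter(o)]

objs = [{"name": "gw1", "type": "router"}, {"name": "pc9", "type": "host"}]
routers_only = lambda o: o["type"] == "router"
print(replicate_topology(objs, routers_only))  # only the router is replicated
```

A tight topology filter keeps the managing station's database small, at the cost of the managing station having less to poll if it must take over.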

There are several traps that are pre-defined to be sent to the REMOTE_MANAGERS list. When a management station manages a collection station, its name is automatically placed in the REMOTE_MANAGERS list. Those traps are automatically forwarded to that list.

Knowing all the information documented thus far will assist with the purchase of the appropriate number and type of licenses and hardware requirements. Don't forget to account for future monitoring growth.

3.2.2.2 Distributed Internet Discovery and Monitoring (DIDM)

Distributed Internet Discovery and Monitoring means exactly that: the discovery and monitoring of the infrastructure is distributed across the enterprise. It is done by standing up separate collection stations to discover and monitor specific portions of the network, defined by discovery filters and noDiscover file entries. Whether or not DIDM is used, review the information in this chapter and decide how to handle map, discovery, and topology filters and noDiscover entries. Agree that only one filters file will be maintained and distributed to every management/collection station. Having one filters file makes the overall system more flexible and saves time in troubleshooting. If the systems are all HP-UX, think seriously about using Software Distributor for maintaining and distributing configuration files to all systems.
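Keeping a single filters file consistent across stations can be as simple as generating the push commands from one list of hosts. The host names below are hypothetical, and the path assumes the usual NNM filters location under $OV_CONF on UNIX; verify both for your installation (Software Distributor is the more robust alternative on all-HP-UX shops):

```python
# Hypothetical station list -- substitute your own hosts.
STATIONS = ["nnm-mgmt1", "nnm-coll1", "nnm-coll2"]
# Assumed NNM filters location on UNIX ($OV_CONF/C/filters); verify locally.
FILTERS_FILE = "/etc/opt/OV/share/conf/C/filters"

def push_commands(stations, path):
    """Build the scp commands a cron job might run to distribute the
    single maintained filters file to every station."""
    return [f"scp {path} {host}:{path}" for host in stations]

for cmd in push_commands(STATIONS, FILTERS_FILE):
    print(cmd)
```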

3.2.3 Distributed Consoles, Web Presenter, or the Java GUI?


NNM and OVO each provide a Motif graphical user interface for use by operators and administrators of the respective product. NNM can be configured to use distributed consoles and the Web Presenter to remotely distribute a display presenting network and node status, alarms, and reports. OVO has only a Java GUI for remotely distributing the display, and at this writing only the operator console is supported. Using any or all of these requires additional system resources. Decide during the planning stages what the best method is for distributing the presentation and removing the operator load from the management console, or plan to purchase enough hardware resources for all users to display the Motif GUI from the management console to a local X server. No additional license is required to use any of these features.

3.2.3.1 Distributed Consoles

Configuring NNM to use distributed consoles allows the Motif GUI to be run from a console other than the management server. Third-party applications can be run from the client if they support a distributed console or are not dependent on access to NNM's database. To configure NNM for distributed consoles, the management station must become an NFS server. For the NFS server to perform well, some kernel parameters need to be adjusted, one of which is the amount of memory assigned to buffer cache. The default HP-UX setting for the maximum dynamic buffer cache (dbc_max_pct) is fifty percent. The maximum amount of buffer cache should not be more than 1GB on the NFS server and not more than 400MB on the NFS client. These are not minimum memory requirements; they are the maximum that should be used for maintaining NFS performance. Take this into account when calculating memory requirements. For more information on NFS tuning and performance, see Optimizing NFS Performance by Dave Olker, published by Prentice Hall.
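Translating those buffer-cache caps into a dbc_max_pct value is a small calculation worth doing per machine, since the percentage depends on installed RAM. This sketch only illustrates the arithmetic; set the tunable itself through the normal HP-UX kernel configuration tools:

```python
def dbc_max_pct(ram_mb, cap_mb):
    """Percentage of RAM to allow for dynamic buffer cache so it never
    exceeds the recommended cap (1 GB = 1024 MB on the NFS server,
    400 MB on an NFS client). 50 is the HP-UX default ceiling."""
    return min(50, int(cap_mb * 100 / ram_mb))

print(dbc_max_pct(8192, 1024))  # NFS server with 8 GB RAM -> 12 percent
print(dbc_max_pct(2048, 400))   # NFS client with 2 GB RAM -> 19 percent
```

On a small-memory machine the formula simply returns the 50 percent default, since the cap is never reached.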

3.2.3.2 OpenView Web Interface

The OpenView Web Interface provides read-only access to maps, and limited write access to events, for multiple people simultaneously. Presented maps are updated dynamically. The OpenView Web Interface requires an open ovw session on the management server for each map that will be displayed through the web interface. Logins to the interface are separate from UNIX or NT logins.

Only NNM information such as maps, reports, inventory, and alarms is available through this interface. No OVO information can be seen through this interface. Knowledge of web server configuration, authorization, authentication, access control, and performance tuning will be extremely helpful in its deployment.

3.2.3.3 Java GUI

The OVO Java GUI is installed on and runs from an NT or UNIX client and provides an operator interface to the management server. Using the Java interface offloads the memory that would otherwise be needed to run the same number of Motif GUIs displayed back to an X server. The operator interface is fully functional for OVO. It can launch NNM applications from the management server and display them back to an X client. The administrator must still use the Motif interface for configuring the various OVO banks (node, application, and so on).

3.2.4 Data Warehouse and Data Collection


NNM provides everything necessary to export data collected within its various databases (topology database, event database, and SNMP trend data) to the data warehouse. The warehouse can be one of three databases: Oracle, Microsoft SQL Server, or NNM's internal database. These databases use Open Database Connectivity (ODBC), making it easier to create customized reports using spreadsheets or web pages. There are "canned" reports available through the Web Presenter, as well as contributed spreadsheet reports; these are delivered with NNM.

For more information on the data warehouse, see the online manual Reporting and Data Analysis with HP OpenView NNM. It provides configuration details and formulas to determine the approximate amount of disk space required.
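For rough planning before consulting that manual's formulas, a first-order estimate of trend-data disk space multiplies instances, sampling rate, sample size, and retention. The figures below are illustrative assumptions, not measured values:

```python
def trend_data_mb(instances, samples_per_day, bytes_per_sample, days_kept):
    """First-order estimate of disk needed for exported SNMP trend data.
    Use the formulas in 'Reporting and Data Analysis with HP OpenView
    NNM' for the authoritative calculation."""
    return instances * samples_per_day * bytes_per_sample * days_kept / 2**20

# e.g. 500 interfaces sampled every 5 min (288/day), ~50 bytes/sample, 1 year
print(round(trend_data_mb(500, 288, 50, 365)))  # ~2506 MB
```

Estimates like this answer the third planning question below before any collection is configured.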

3.2.4.1 Questions to Consider when Planning for the Data Warehouse

What types of reports are to be created?

What MIBs need to be collected to create the desired reports?

How much space is required based on the collected and exported trend data?

How long must the data be kept before archiving?

Is there a requirement to access archived data?

How many users will access the reports?


    / 276