HP OpenView System Administration Handbook [Electronic resources]: Network Node Manager, Customer Views, Service Information Portal, HP OpenView Operations
Tammy Zitello


3.10 HARDWARE CONFIGURATION REQUIREMENTS


The functional requirements for the NMS drive both software selection and hardware selection. The software selection for the NMS has bearing on the hardware selection based on the operating systems on which it can run and the amount of memory required to run the applications. How much hardware to purchase is based on the overall design and how NMS is going to be instrumented for problem detection, high availability requirements, data collection, distributed consoles, and so on.

3.10.1 Memory Requirements


Gather all the available information about the server software, operating system, client software, databases, data collection, and so on, and calculate the memory requirements. Err on the high side. A system can have all the virtual memory in the world configured, but software cannot execute in virtual memory; it must be loaded into random access memory (RAM) to run. It is better to have too much RAM than too little and be forced to swap out processes.
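This budgeting exercise amounts to simple addition with a safety margin. The sketch below totals per-product RAM figures and compares them against installed memory; the product names and numbers are hypothetical placeholders, not vendor-published requirements:

```python
# Sketch: total the documented RAM requirements for all products that
# will run concurrently, add OS overhead and headroom, and compare
# against installed memory. All figures below are hypothetical.

products_mb = {
    "NNM": 512,               # hypothetical per-product minimums
    "OVO server": 768,
    "Oracle instance": 1024,
    "Java GUI sessions": 256,
}

os_overhead_mb = 512          # assumed OS and kernel overhead
headroom = 1.25               # 25% safety margin: err on the high side

required_mb = (sum(products_mb.values()) + os_overhead_mb) * headroom
installed_mb = 4096           # example installed RAM

print(f"Required (with headroom): {required_mb:.0f} MB")
print(f"Installed:                {installed_mb} MB")
print("OK" if installed_mb >= required_mb else "Add RAM to avoid swapping")
```

The real figures belong in the spreadsheet described in Section 3.10.4; the point is to size RAM for all products running simultaneously, not one at a time.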

3.10.2 Disk Space and Disk I/O Requirements


Purchase fast disks, preferably arrays. Disk arrays allow for the creation of multiple LUNs, and these physical volumes can usually be created smaller than the size of a physical disk within the array. How much space you need depends on many different variables. When using disks within a high availability solution, most likely all the products will be configured in an active/standby configuration. But will the products all be configured on a single cluster node, so that a fail-over shuts them down on the first cluster node and restarts them on another? Or will all the cluster nodes run the associated products, with any product able to run on any cluster node?

The latter requires more volume groups when configuring a high availability product such as MC/ServiceGuard. Volume groups within MC/ServiceGuard are activated in "exclusive" mode, and the logical volumes within a particular volume group can only be mounted on the system that has activated it "exclusively."

The chosen design may also require additional software licenses. For instance, suppose a three-node cluster is created and a database server (using the same database product) is running on each system, with each database server serving a different database. Here, there need to be three separate volume groups, and therefore three separate disks, for each database's data to be exclusive to its primary node. It also requires three database server licenses, versus one license for a single system running one database server that serves three databases. Check the license agreements.

From the standpoint of HP-UX, every volume group must contain at least one physical volume. Using Just a Bunch of Disks (JBOD) as storage can waste much-needed disk space and limits the flexibility of MC/ServiceGuard configurations. If 36GB drives are purchased in a JBOD configuration and minimum redundancy is required, two 36GB drives must be purchased in order to mirror the logical volumes to the second drive. These two disks form one volume group, and the volume group can only be activated exclusively on one cluster node. For the disk space to be used effectively, all products must be configured to run on the same cluster node to have access to the volume group. There is no way to divide JBOD into physical volumes smaller than the size of one physical disk.
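The space trade-off comes down to granularity: with JBOD, the smallest unit of exclusive activation is a whole mirrored disk pair, while an array can carve a mirrored LUN to roughly the size actually needed. A quick arithmetic sketch (all sizes illustrative, not product limits):

```python
# Sketch: compare the granularity of JBOD versus array LUNs for
# MC/ServiceGuard volume groups. Figures are illustrative only.

disk_gb = 36

# JBOD: mirroring requires a second whole disk, and the resulting
# volume group (one mirrored pair) can only be active on one node.
jbod_purchased_gb = 2 * disk_gb     # two physical disks
jbod_usable_gb = disk_gb            # mirroring halves the usable space

# Array: a mirrored LUN can be created at, say, 8 GB, leaving the
# rest of the array free for LUNs assigned to other cluster nodes.
needed_gb = 8
array_lun_usable_gb = needed_gb

jbod_stranded_gb = jbod_usable_gb - needed_gb
print(f"JBOD usable: {jbod_usable_gb} GB, stranded: {jbod_stranded_gb} GB")
print(f"Array LUN:   {array_lun_usable_gb} GB, stranded: 0 GB")
```

The 28GB "stranded" in the JBOD case is not lost, but it can only ever be used by products running on whichever node holds that volume group.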

An HP AutoRaid allows for the creation of only 8 LUNs (physical volumes), whereas an HP Virtual Array allows for the creation of many more LUNs. The disk array is best suited for those who wish to have a multi-node cluster. Unlike JBOD, arrays allow for the creation of physical volumes smaller than the size of a single physical disk within the array. The ability to create more physical volumes within an array facilitates multi-node clustering, making it easier to distribute products over several systems that can provide fail-over capabilities for one another.

How highly available must the products (binaries) and data be? The answer directly affects the number of controllers in the system, the choice of array or disk storage unit, the amount of cache in the array, and so on.

The following are additional questions that will help plan for enough of the proper disk space and disk I/O:

Is the database in archive mode (Oracle)? (It should be for online backup.)

How large do the redo logs need to be?

How many users will be using the NMS system at one time?

Will they log into the NMS and display clients back to a workstation?

Will they use the Java GUI and Web Interface?

How many objects are in NNM's object database?

How many days of history must be kept in the OVO history table?

Where will the OVO history table be archived?

How much data will be accumulated with SNMP data collection?

Is there room for periodically defragmenting the OVO database?

Where will the SNMP data collection be archived?

Will NNM's data warehouse be used?

How much data will be online?

When using an HA product such as MC/ServiceGuard, how will the products be configured?

Will all the products be configured in a single Active/Standby cluster configuration?

The high availability and fault tolerance requirements of products and systems, and their resulting configurations, will literally dictate the amount of storage to purchase, its type, and the required number of interface cards. The same can be said for databases and product configuration; these too can dictate the amount and type of storage required. Various products can be configured on a single system to easily access and update their associated data on a single large spindle (disk), and there may be no complaint from a user about data access or system performance. But multiple users, backups, and incoming events all consume disk I/O, and when they occur at the same time they stack up the disk's I/O queue. One network "hiccup" that prevents OVO messages from reaching the management server, or causes additional messages to be generated from NNM traps, can flood the OVO database with hundreds of thousands of messages. Correlating these messages for automatic acknowledgement, or logging into OVO and bringing up the message browser, takes disk I/O, memory, and CPU. Incidents such as these are rare, and one does not have to design the system around them, but they do prevent use of the NMS until the database is purged of the messages.

3.10.3 One Database Vendor


This is not always possible, but it is a good goal to shoot for. Many companies want to use a single database vendor, and rightly so: it reduces both purchase and support costs. HP OpenView standardizes on Oracle on UNIX and NT, as well as Microsoft SQL Server on NT. It would be ideal if all the products used the same database vendor. Creating a single database server is a good idea, but if it is not configured for high availability and fault tolerance, a failure of any kind makes nothing within the NMS accessible, and all productivity halts.

3.10.4 Memory Amount and Kernel Parameter Requirements


To make things easier, obtain all the recommended memory, swap, and kernel parameter requirements from the installation instructions for each product and place them in a spreadsheet. Determine what each kernel parameter must be set to so that all the products will run properly at the same time on a system. This is especially important when systems are clustered and meant to fail over for one another: improperly configured kernel parameters on a system can prevent processes from starting upon fail-over. Be careful: not all kernel parameters are "additive," and some only require the largest setting across all products. A product's documentation generally states the "minimum" required setting, and not every kernel parameter logs to the syslog when it has been exhausted. Certain kernel parameters will increase memory requirements, but that should be accounted for in the product's stated memory requirements.
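The additive-versus-largest distinction is the heart of that spreadsheet. The sketch below merges per-product kernel parameter minimums accordingly; which parameters are actually additive must come from each product's documentation, and the classification and figures here are hypothetical examples only:

```python
# Sketch: merge per-product kernel parameter minimums into one
# worksheet. The classification of which parameters are additive
# versus max-only, and all figures, are hypothetical examples.

requirements = {
    "NNM":    {"nproc": 100, "maxfiles": 256,  "shmmax": 0x4000000},
    "OVO":    {"nproc": 150, "maxfiles": 512,  "shmmax": 0x8000000},
    "Oracle": {"nproc": 200, "maxfiles": 1024, "shmmax": 0x10000000},
}

additive = {"nproc"}  # assumed additive; all others take the largest value

merged = {}
for product, params in requirements.items():
    for name, value in params.items():
        if name in additive:
            merged[name] = merged.get(name, 0) + value      # sum across products
        else:
            merged[name] = max(merged.get(name, 0), value)  # largest wins

for name, value in sorted(merged.items()):
    print(f"{name} = {value}")
```

Every system in the cluster should then be configured with the merged values, so that any node can host any product after a fail-over.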

