Hardware Management Console (HMC) Case Configuration Study for LPAR Management
This IBM® Redpaper provides Hardware Management Console (HMC)
configuration considerations and describes case studies about how to use the
HMC in a production environment. This document does not describe how to
install the HMC or how to set up LPARs; we assume you are familiar with the
HMC. Rather, the case studies presented in this Redpaper provide a framework
for implementing some of the more useful HMC concepts, with examples to
give you ideas on how to exploit the capabilities of the HMC.
The topics discussed in this Redpaper are:
Basic HMC considerations
Partitioning considerations
Takeover case study:
– Description of the scenario
– Setting up remote ssh connection to the HMC
– Using the HMC to perform CoD operations
– Examples of dynamic LPAR operations
– Using micropartitioning features
– Security considerations
© Copyright IBM Corp. 2005. All rights reserved.
ibm.com/redbooks
Dino Quintero
Sven Meissner
Andrei Socoliuc


Summary of Contents for Hardware Management Console (HMC) Case Configuration Study for LPAR Management

  • Page 1 Hardware Management Console (HMC) Case Configuration Study for LPAR Management This IBM® Redpaper provides Hardware Management Console (HMC) configuration considerations and describes case studies about how to use the HMC in a production environment. This document does not describe how to install the HMC or how to set up LPARs.
  • Page 2: Introduction And Overview

    HMC configuration for POWER5 systems. The case studies are illustrated with POWER5 systems only. Basic HMC considerations The Hardware Management Console (HMC) is based on the IBM eServer™ xSeries® hardware architecture running dedicated applications to provide partition management for single or multiple servers called managed systems.
  • Page 3 Table 1 Types of HMCs
    Type                   Supported managed systems
    7315-CR3 (rack mount)  POWER4 or POWER5
    7315-C04 (desktop)     POWER4 or POWER5
    7310-CR3 (rack mount)  POWER5
    7310-C04 (desktop)     POWER5
    Licensed Internal Code (FC0961) is needed to upgrade these HMCs to manage POWER5 systems. A single HMC cannot be used to manage a mixed environment of POWER4 and POWER5 systems.
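Before planning an upgrade or a mixed environment, it helps to know exactly which HMC model and code level you are working with. A minimal sketch, assuming the lshmc command available on POWER5 HMC code levels (output fields vary by release):

    # Show the HMC vital product data, including machine type and model (for example, 7310-C04)
    lshmc -v
    # Show the installed HMC code version and release
    lshmc -V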
  • Page 4 The maximum number of HMCs supported by a single POWER5 managed system is two. The number of LPARs managed by a single HMC has been increased from earlier versions of the HMC to the current supported release as shown in Table 3. Table 3 HMC history HMC code No.
  • Page 5 LPAR and Service Focal Point. Service Agent (SA) connections: SA is the application running on the HMC for reporting hardware failures to the IBM support center. It uses a modem for dial-out connection or an available Internet connection. It can also be used to transmit service and performance information to IBM, as well as CoD enablement and billing information.
  • Page 6 multi-threading. SMT is a feature supported only in AIX 5.3 and Linux at an appropriate level. Multiple operating system support: Logical partitioning allows a single server to run multiple operating system images concurrently. On a POWER5 system the following operating systems can be installed: AIX 5L™ Version 5.2 ML4 or later, SUSE Linux Enterprise Server 9 Service Pack 2, Red Hat Enterprise Linux ES 4 QU1, and i5/OS.
  • Page 7 To calculate your desired and maximum memory values accurately, we recommend that you use the LVT tool. This tool is available at: http://www.ibm.com/servers/eserver/iseries/lpar/systemdesign.htm Figure 1 shows an example of how you can use the LPAR validation tool to verify a memory configuration. In Figure 1, there are 4 partitions (P1..P4) defined on a p595 system with a total amount of 32 GB of memory.
  • Page 8 The memory allocated to the hypervisor is 1792 MB. When we change the maximum memory parameter of partition P3 from 4096 MB to 32768 MB, the memory allocated to the hypervisor increases to 2004 MB as shown in Figure 2. Figure 2 Memory used by hypervisor Figure 3 is another example of using LVT when verifying a wrong memory configuration.
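The memory actually reserved by the hypervisor on a running system can also be checked from the HMC command line, not only estimated with the LVT. A minimal sketch, reusing the hmctot184 HMC and p550_itso1 managed system names from the later examples; the system-level attribute names passed to -F are assumptions and may differ by HMC release:

    # Show configurable, currently available, and firmware (hypervisor) memory for the managed system
    ssh hscroot@hmctot184 "lshwres -m p550_itso1 -r mem --level sys -F configurable_sys_mem,curr_avail_sys_mem,sys_firmware_mem --header"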
  • Page 9 Micro-partitioning With POWER5 systems, increased flexibility is provided for allocating CPU resources by using micropartitioning features. The following parameters can be set up on the HMC: Dedicated/shared mode, which allows a partition to allocate either a full CPU or partial units. The minimum CPU allocation unit for a partition is 0.1. Minimum, desired, and maximum limits for the number of CPUs allocated to a dedicated partition.
  • Page 10: Capacity On Demand

    – On a registered system, the customer selects the capacity and activates the resource. – Capacity can be turned ON and OFF by the customer; usage information is reported to IBM. – This option is post-pay. You are charged at activation. Reserve Capacity on Demand: –...
  • Page 11 CoD is active only. You have to inform IBM monthly about the number of days you have used CoD. This can be done automatically by the service agent. For more information, refer to “APPENDIX”...
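The usage that has to be reported can also be checked directly on the HMC before it is sent. A minimal sketch, assuming the lscod query command with -t (information type), -c (CoD type), and -r (resource) flags; the flag values are assumptions to verify against the lscod man page for your HMC level:

    # Show the current On/Off CoD capacity settings for processors and memory
    ssh hscroot@hmctot184 "lscod -m p550_itso1 -t cap -c onoff -r proc"
    ssh hscroot@hmctot184 "lscod -m p550_itso1 -t cap -c onoff -r mem"
    # Show the On/Off CoD billing information that is reported to IBM
    ssh hscroot@hmctot184 "lscod -m p550_itso1 -t bill -c onoff -r proc"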
  • Page 12 Figure 4 Initial configuration (p550 with 2 CPUs and 8 GB of memory; node nils in production with 2 dedicated CPUs and 7 GB). Table 5 shows our configuration in detail. Our test system has only one 4-pack DASD available. Therefore we installed a VIO server to have sufficient disks available for our partitions.
  • Page 13 Table 6 Memory allocation (partition name, memory in MB): nicole_vio 1024; julia … Enabling ssh access to HMC: By default, the ssh server on the HMC is not enabled. The following steps configure ssh access for node julia on the HMC. The procedure will allow node julia to run HMC commands without providing a password.
  • Page 14 OpenSSL is required for installing the OpenSSH package. You can install it from the AIX 5L Toolbox for Linux CD, or access the Web site: http://www.ibm.com/servers/aix/products/aixos/linux/download.html After the installation, verify that the openssh filesets are installed by using the lslpp command on the AIX node, as shown in Example 1.
  • Page 15 Log in with the user account used for remote access to the HMC. Generate the ssh keys using the ssh-keygen command. In Example 2, we used the root user account and specified the RSA algorithm for encryption. The security keys are saved in the /.ssh directory.
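The whole key exchange can be scripted from the AIX node. A minimal sketch, not taken verbatim from the paper's examples: it assumes the HMC mkauthkeys command is available at this code level and reuses the hscroot user and hmctot184 HMC name from the examples:

    # Root's home directory is / on AIX, so the keys end up in /.ssh as described above
    mkdir -p /.ssh && chmod 700 /.ssh
    ssh-keygen -t rsa -N "" -f /.ssh/id_rsa
    # Register the public key with the hscroot user on the HMC so that node julia
    # can run HMC commands without a password prompt
    KEY=$(cat /.ssh/id_rsa.pub)
    ssh hscroot@hmctot184 "mkauthkeys --add '$KEY'"
    # Verify password-less access
    ssh hscroot@hmctot184 lshmc -V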
  • Page 16 Enabling On/Off CoD for processor and memory Before activating the CPU and memory resources, you have to prepare the CoD environment by getting an enablement code from IBM. For more information about how to get an activation code, refer to the CoD Web site: http://www.ibm.com/servers/eserver/pseries/ondemand/cod/...
  • Page 17 Figure 8 Activating the On/Off CoD – Activating On/Off CoD using the command line interface. Example 4 shows how node julia activates 2 CPUs and 8 GB of RAM for 3 days by running the chcod command on the HMC via ssh.
    Example 4 Activating CoD using command line interface
    CPU:
    root@julia/.ssh>ssh hscroot@hmctot184 "chcod -m p550_itso1 -o a -c onoff...
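The chcod command in Example 4 is truncated in this excerpt. A hedged reconstruction of the general On/Off CoD activation form, not the paper's exact command: the -r (resource type), -q (quantity), and -d (number of days) flags, and the assumption that the memory quantity is given in GB, should be verified against the chcod man page on your HMC:

    # Activate 2 processors for 3 days with On/Off CoD
    ssh hscroot@hmctot184 "chcod -m p550_itso1 -o a -c onoff -r proc -q 2 -d 3"
    # Activate 8 GB of memory for 3 days (quantity unit assumed to be GB)
    ssh hscroot@hmctot184 "chcod -m p550_itso1 -o a -c onoff -r mem -q 8 -d 3"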
  • Page 18 Note: If you use reserve CoD instead of ON/OFF CoD to temporarily activate processors, you can assign the CPUs to shared partitions only. In order for node julia to operate with the same resources as node nils had, we have to add 1.8 processing units and 6.5 GB memory to this node. Allocation of processor units.
  • Page 19 Example 5 Perform the CPU addition from the command line
    root@julia/>lsdev -Cc processor
    proc0 Available 00-00 Processor
    root@julia/>ssh hscroot@hmctot184 lshwres -r proc -m p550_itso1 --level \
    > lpar --filter "lpar_names=julia" -F lpar_name:curr_proc_units:curr_procs \
    > --header
    lpar_name:curr_proc_units:curr_procs
    julia:0.2:1
    root@julia/>ssh hscroot@hmctot184 chhwres -m p550_itso1 -o a -p julia \
    > -r proc --procunits 1.8 --procs 1
    root@julia/>lsdev -Cc processor
    proc0 Available 00-00 Processor...
  • Page 20 Figure 10 Add memory to partition – Using the command line. Example 6 shows how to allocate 6 GB of memory to partition julia.
    Example 6 Memory allocation using command line interface
    root@julia/>lsattr -El mem0
    goodsize 1024 Amount of usable physical memory in Mbytes False
    size 1024 Total amount of physical memory in Mbytes False
    root@julia/>ssh hscroot@hmctot184 lshwres -r mem -m p550_itso1 --level \...
  • Page 21 At the time node nils is back and ready to reacquire the applications running on node julia, we reduce the memory and CPU to the initial values and turn off CoD. In order for node julia to operate with the initial resources, we have to remove 1.8 processing units and 6 GB memory from this partition.
  • Page 22 – Using the command line interface. Note: When allocating memory to a partition or moving it between partitions, you can increase the time-out limit of the operation so that it does not report a failure before it completes. Use the Advanced tab of the dynamic LPAR memory menu (see Figure 10 on page 20) to increase the time-out limit.
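The same time-out can be passed on the command line. A minimal sketch, assuming chhwres accepts -w with a time-out in minutes and -q with the memory quantity in MB (both assumptions to verify against the chhwres man page on your HMC level):

    # Remove 6 GB (6144 MB) of memory from partition julia with a 15-minute time-out
    ssh hscroot@hmctot184 "chhwres -m p550_itso1 -o r -p julia -r mem -q 6144 -w 15"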
  • Page 23 Figure 12 Perform the deallocation for the CPU units – Using the command line interface to remove 1.8 processing units from node julia is shown in Example 8.
    Example 8 Deallocating the CPU
    root@julia/>lsdev -Cc processor
    proc0 Available 00-00 Processor
    proc2 Available 00-02 Processor
    root@julia/>ssh hscroot@hmctot184 lshwres -r proc -m p550_itso1 --level \
    > lpar --filter "lpar_names=julia"...
  • Page 24 2. Deactivating the On/Off CoD for CPU and memory. For an example of the graphical interface, refer to the menu presented in Figure 8 on page 17, and the section “Activating On/Off CoD using the command line interface.” on page 17. Example 9 shows how to use the command line interface to deactivate the processor and memory CoD resources.
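Example 9 itself is not reproduced in this excerpt. One common way to release On/Off CoD resources, shown here only as a hedged sketch and an assumption to verify against the chcod man page for your HMC level, is to request zero resources for zero days:

    # Turn off the temporarily activated processors and memory
    ssh hscroot@hmctot184 "chcod -m p550_itso1 -o a -c onoff -r proc -q 0 -d 0"
    ssh hscroot@hmctot184 "chcod -m p550_itso1 -o a -c onoff -r mem -q 0 -d 0"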
  • Page 25 Figure 13 Toggle the Capped/Uncapped option You have to consider the number of virtual processors to be able to use all the CPUs from the shared processor pool. In our example, after the CoD operation, we have 3.0 available processing units in the shared processor pool and 1 dedicated processor allocated to node oli.
  • Page 26 Example of using two uncapped partitions and the weight For the example of two uncapped partitions using the same shared processor pool, we use the configuration described in Table 7. Table 7 CPU allocation table Partition name (Min/Des/Max) nicole_vio 1/1/1 0.1/1.0/2.0 julia 0.1/1.0/2.0...
  • Page 27 We changed the weight for the partition oli to the maximum value 255 while partition julia is set to 128. The operation can be performed dynamically. For accessing the GUI menus, from the Server Management menu of the HMC, right-click on the partition name and select Dynamic Logical Partitioning →...
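The weight change can also be made from the command line instead of the GUI. A hedged sketch, not the paper's command: the -o s (set) operation and the uncap_weight attribute name are assumptions to verify against the chhwres man page for your HMC level:

    # Raise the uncapped weight of partition oli to the maximum value of 255
    ssh hscroot@hmctot184 "chhwres -m p550_itso1 -r proc -o s -p oli -a uncap_weight=255"
    # Leave partition julia at a weight of 128
    ssh hscroot@hmctot184 "chhwres -m p550_itso1 -r proc -o s -p julia -a uncap_weight=128"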
  • Page 28 In Example 13 and Example 14 you can compare the two nodes. Example 14 Output of topas -L on node julia (Psize, Ent: 1.00, and per-logical-CPU utilization: %usr, %sys, %wait, %idle, physc, %entc, %lbusy).
  • Page 29 Node oli has increased processing loads during the workday, 7 AM to 7 PM, and is idle most of the time outside this interval. Partition julia has an increased processing load during 10 PM to 5 AM and is idle the rest of the time. Since both partitions are uncapped, we only need to reallocate part of the memory to partition julia during the idle period of partition oli.
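The dynamic LPAR operation that will later be scheduled (see the GUI steps on the following pages) can first be tried manually. A minimal sketch, assuming the chhwres move operation (-o m) with -p for the source partition, -t for the target partition, and -q for the quantity in MB; the 2 GB amount is only illustrative:

    # Move 2 GB (2048 MB) of memory from partition oli to partition julia for the night period
    ssh hscroot@hmctot184 "chhwres -m p550_itso1 -r mem -o m -p oli -t julia -q 2048"
    # Move it back before the workday of partition oli starts
    ssh hscroot@hmctot184 "chhwres -m p550_itso1 -r mem -o m -p julia -t oli -q 2048"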
  • Page 30 Figure 16 Selecting the scheduled operation 3. Next, in the Date and Time tab, select the time for the beginning of the operation and a time window where the operation can be started as shown in Figure 17. Figure 17 Selecting the starting window of the scheduled operation 4.
  • Page 31 Figure 18 Selecting the days of the week for the schedule 5. Click on the Options tab and specify the details of the dynamic LPAR operation as shown in Figure 19. Figure 19 Specifying the details of the dynamic LPAR operation Click on the Save button to activate the scheduler.
  • Page 32 6. Repeat steps 1 through 5 for creating the reverse operation, specifying julia the target partition for the scheduled operation, and 06:00:00 AM for the start window of the scheduler. 7. After setting up both operations, their status can be checked in the Customize Scheduled Operations window for each of the nodes as shown in Figure 20.
  • Page 33 Comparing profile values with current settings If you perform a dynamic LPAR operation and you want to make this change permanent, you have to do maintenance on the appropriate profile. Otherwise, after the next shutdown and power on of the LPAR, the partition will have the old properties and this might not be desired.
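The compare_profile_current script referenced as Example 15 is not reproduced in this excerpt. The sketch below only illustrates the general idea; it is not the authors' script, the lssyscfg and lshwres field names are assumptions that may differ by HMC release, and hmc1/cec-blue are placeholders taken from the sample output on the next page:

    #!/bin/ksh
    # Compare desired profile values with the currently assigned values for each LPAR
    HMC=hmc1            # HMC to query (placeholder)
    SYS=cec-blue        # managed system (placeholder)
    # Desired memory and processors from the partition profiles
    ssh hscroot@$HMC "lssyscfg -r prof -m $SYS -F lpar_name,desired_mem,desired_procs"
    # Currently assigned memory and processors
    ssh hscroot@$HMC "lshwres -r mem -m $SYS --level lpar -F lpar_name,curr_mem"
    ssh hscroot@$HMC "lshwres -r proc -m $SYS --level lpar -F lpar_name,curr_procs"
    # Differences between the two listings indicate dynamic LPAR changes
    # that have not been written back to the profile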
  • Page 34 Here is a sample output from the script shown in Example 15 on page 33.
    Example 16 Monitoring sample script output
    julia:/home/romeo # ./compare_profile_current
    hmc1 cec-blue ...
  • Page 35 hmc2 cec-green ... In Example 16 on page 34, you can see that LPAR blue6 has 2 GB of memory configured instead of the desired 4 GB, or that LPAR blue4 currently runs with one processor instead of the desired 2 processors.
  • Page 36 Working with two HMCs eases the planning of HMC downtimes for software maintenance, as there is no downtime needed. While doing the HMC code update on one HMC, the other one continues to manage the environment. This situation allows one HMC to run at the new fix level, while the other HMC can continue to run the previous one.
  • Page 37 Note: Either eth0 or eth1 can be a DHCP server on the HMC. The managed system will be automatically visible on the HMCs. This is our recommended way to do high availability with HMCs. It is supported by all POWER5 systems. Two HMCs on the same network, using static IP addresses is shown in Figure 23.
  • Page 38 A new system is shipped with default IP addresses. You can change these IP addresses by connecting your laptop to either T1 or T2 of the CEC. Assign an IP address to your laptop’s interface that is in the same network as the respective network adapter of your CEC.
  • Page 39 For more detailed information, refer to “Access to the ASMI menu” on page 40. On HMC1, the managed system becomes automatically visible. On HMC2, the managed system must be added manually. To add a managed system, select the Server Management bar and choose Add Managed System(s) as shown in Figure 25.
  • Page 40 APPENDIX The following sections contain additional information to be considered when dealing with HMCs. Access to the ASMI menu Depending on your network connection to the FSP interfaces, you have several possibilities to access the ASMI menu using an IP connection: Using a Web browser: Connect a system to the FSP network, launch a browser, and access the following URL:...
  • Page 41 Figure 26 Accessing the ASMI menu using WebSM For further information related to the access to the ASMI menus, refer to the “ASMI Setup Guide” at: http://publib.boulder.ibm.com/infocenter/eserver/v1r2s/en_US/info/iphby/iphby.pdf Configuring a secure connection for WebSM The following example describes how to set up a secure WebSM connection for a Windows client and a cluster of two HMCs.
  • Page 42 Access the secure WebSM download page and run the InstallShield program for your platform: http://<hmchost>/remote_client_security.html Verify the WebSM installation by starting the WebSM client program and connecting to the HMC. The next steps describe how to configure the secure connection to the WebSM server.
  • Page 43 For our example, we perform the following actions: – Enter an organization name: ITSO. – Verify the certificate expiration date is set to a future date. – Click the OK button, and a password is requested at the end of the process.
  • Page 44 At this menu: – Add both HMCs in the list of servers (the current HMC should already be listed): hmctot184.itso.ibm.com, hmctot182.itso.ibm.com – Enter the organization name: ITSO. – Verify that the certificate expiration date is set to a future date.
  • Page 45 Figure 30 Copying the private key ring file to removable media Tip: To transfer the security keys from the HMC, you can use the floppy drive or a flash memory device. Plug the device into the USB port before running the copy procedure, and it will show up in the menu as shown in Figure 30.
  • Page 46 Figure 31 Installing the private key ring file for the second HMC Copy the public key ring file to removable media for installing the key file on the client PC. Select System Manager Security → Certificate Authority, and in the right panel, select Copy this Certificate Authority Public Key Ring File to removable media.
  • Page 47 Figure 32 Save the public key ring file to removable media You will be provided with a second window to specify the format of the file to be saved. Depending on the platform of the WebSM client, you can select either: –...
  • Page 48 Figure 33 Select the security option for the authentication Select one of the two options: – Always use a secure connection: Only an SSL connection is allowed. – Allow the user to choose secure or unsecure connections: A checkbox is displayed at the time of connecting the WebSM client to the HMC, allowing you to choose a secure (SSL) or an unsecure connection.
  • Page 49 HMC is not available. The policy for the microcode update can be changed from the ASMI. For further details, refer to the ASMI Setup Guide at: http://publib.boulder.ibm.com/infocenter/eserver/v1r2s/en_US/info/iphby/iphby.pdf
  • Page 50 For further information, refer to the microcode download for eServer pSeries systems page at: http://techsupport.services.ibm.com/server/mdownload The following procedure is an example of running a microcode update procedure for a p550 system using the HMC.
  • Page 51 Figure 36 License Internal Code Updates menus on the HMC Note: In our example, we choose to upgrade to a new release. When updating the firmware level at the same release, choose Change Licensed Internal Code for the same release. 2.
  • Page 52 3. We downloaded the microcode image to an FTP server, so we specify as LIC Repository FTP Site (Figure 38). Figure 38 Specify the microcode location 4. In the details window, enter the IP address of the FTP server, username and password for the access and the location of the microcode image (see Figure 39).
  • Page 53: Referenced Web Sites

    Figure 41 Update microcode completed
    Referenced Web sites:
    Latest HMC code updates: http://techsupport.services.ibm.com/server/hmc
    Manual pages for the command line interface on HMC for POWER5 systems: http://techsupport.services.ibm.com/server/hmc/power5/tips/hmc_man_GA5.pdf
    A reference page for the command line interface on HMC for POWER4 systems: http://techsupport.services.ibm.com/server/hmc/power4/tips/mcode/tip001_cli...
  • Page 54 Dual HMC cabling on the IBM 9119-595 and 9119-590 Servers: http://www.redbooks.ibm.com/abstracts/tips0537.html?Open ASMI setup guide: http://publib.boulder.ibm.com/infocenter/eserver/v1r2s/en_US/info/iphby/iphby.pdf
  • Page 55: The Team That Wrote This Redpaper

    Dino Quintero is a Consulting IT Specialist at ITSO in Poughkeepsie, New York. Before joining ITSO, he worked as a Performance Analyst for the Enterprise Systems Group and as a Disaster Recovery Architect for IBM Global Services. His areas of expertise include disaster recovery and pSeries clustering solutions.
  • Page 56 Yvonne Lyon, International Technical Support Organization, Austin Center
  • Page 57 IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead.
  • Page 58 Send your comments in an e-mail to: redbook@us.ibm.com Mail your comments to: IBM Corporation, International Technical Support Organization Dept. JN9B Building 905, 11501 Burnet Road Austin, Texas 78758-3493 U.S.A. Trademarks The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: Eserver®...