
Red Hat Enterprise MRG 2

Management Console Installation Guide

Installing the MRG Management Console for use with MRG Messaging

Edition 1

Lana Brindley

Red Hat Engineering Content Services

Alison Young

Red Hat Engineering Content Services

Legal Notice

Copyright © 2011 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
All other trademarks are the property of their respective owners.


1801 Varsity Drive
 Raleigh, NC 27606-2072 USA
 Phone: +1 919 754 3700
 Phone: 888 733 4281
 Fax: +1 919 754 3701

Abstract
This book contains basic overview and installation procedures for the MRG Management Console component of the Red Hat Enterprise MRG distributed computing platform. The MRG Management Console provides a web-based tool for management of MRG Messaging.

Preface
1. Document Conventions
1.1. Typographic Conventions
1.2. Pull-quote Conventions
1.3. Notes and Warnings
2. Getting Help and Giving Feedback
2.1. Do You Need Help?
2.2. We Need Feedback!
1. Scale Requirements
2. Installing the MRG Management Console
2.1. Configuring the MRG Messaging Broker for use with the MRG Management Console and MRG Grid
2.2. Installing and Configuring the MRG Management Console
3. Start Console
3.1. First Run
3.2. Logging
4. Using the MRG Management Console
4.1. Using the MRG Management Console with MRG Grid
4.1.1. Job Server Configuration
5. Configuring the MRG Management Console for Medium Scale Deployment
5.1. Running Multiple MRG Management Console Web Servers
5.2. Limiting Objects Processed by the MRG Management Console
5.3. Configuring the Remote Configuration Feature for a Separate MRG Messaging Broker
5.4. Increasing the Default QMF Update Interval for MRG Grid Components
6. Frequently Asked Questions
7. More Information
A. Revision History

Preface

Red Hat Enterprise MRG
This book contains basic overview and installation information for the MRG Management Console component of Red Hat Enterprise MRG. Red Hat Enterprise MRG is a high performance distributed computing platform consisting of three components:
  1. Messaging — Cross platform, high performance, reliable messaging using the Advanced Message Queuing Protocol (AMQP) standard.
  2. Realtime — Consistent low-latency and predictable response times for applications that require microsecond latency.
  3. Grid — Distributed High Throughput Computing (HTC) and High Performance Computing (HPC).
All three components of Red Hat Enterprise MRG are designed to be used as part of the platform, but can also be used separately.
MRG Management Console
This book explains how to install and configure the MRG Management Console. The MRG Management Console, also known as Cumin, provides a web-based graphical interface to manage your Red Hat Enterprise MRG deployment.
MRG Messaging is built on the Qpid Management Framework (QMF). The MRG Management Console uses QMF to access data and functionality provided by the MRG Messaging broker (qpidd), inventory daemon (sesame) and MRG Grid components.
This book describes how to set up and configure Cumin, a MRG Messaging broker, and a distributed inventory. The broker is necessary for communication between the distributed components and Cumin. The inventory and MRG Grid component installations must be performed on all nodes in the deployment.
For more information about MRG Messaging architecture, including advanced installation and configuration of the MRG Messaging broker, see the MRG Messaging User Guide.
For more information about MRG Grid, including advanced features and configuration, see the MRG Grid User guide.

1. Document Conventions

This manual uses several conventions to highlight certain words and phrases and draw attention to specific pieces of information.
In PDF and paper editions, this manual uses typefaces drawn from the Liberation Fonts set. The Liberation Fonts set is also used in HTML editions if the set is installed on your system. If not, alternative but equivalent typefaces are displayed. Note: Red Hat Enterprise Linux 5 and later includes the Liberation Fonts set by default.

1.1. Typographic Conventions

Four typographic conventions are used to call attention to specific words and phrases. These conventions, and the circumstances they apply to, are as follows.
Mono-spaced Bold
Used to highlight system input, including shell commands, file names and paths. Also used to highlight keycaps and key combinations. For example:
To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.
The above includes a file name, a shell command and a keycap, all presented in mono-spaced bold and all distinguishable thanks to context.
Key combinations can be distinguished from keycaps by the plus sign connecting each part of a key combination. For example:
Press Enter to execute the command.
Press Ctrl+Alt+F2 to switch to the first virtual terminal. Press Ctrl+Alt+F1 to return to your X-Windows session.
The first paragraph highlights the particular keycap to press. The second highlights two key combinations (each a set of three keycaps with each set pressed simultaneously).
If source code is discussed, class names, methods, functions, variable names and returned values mentioned within a paragraph will be presented as above, in mono-spaced bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Proportional Bold
This denotes words or phrases encountered on a system, including application names; dialog box text; labeled buttons; check-box and radio button labels; menu titles and sub-menu titles. For example:
Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, click the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand).
To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find… from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.
The above text includes application names; system-wide menu names and items; application-specific menu names; and buttons and text found within a GUI interface, all presented in proportional bold and all distinguishable by context.
Mono-spaced Bold Italic or Proportional Bold Italic
Whether mono-spaced bold or proportional bold, the addition of italics indicates replaceable or variable text. Italics denotes text you do not input literally or displayed text that changes depending on circumstance. For example:
To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com.
The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home.
To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.
Note the words in bold italics above — username, domain.name, file-system, package, version and release. Each word is a placeholder, either for text you enter when issuing a command or for text displayed by the system.
Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and important term. For example:
Publican is a DocBook publishing system.

1.2. Pull-quote Conventions

Terminal output and source code listings are set off visually from the surrounding text.
Output sent to a terminal is set in mono-spaced roman and presented thus:
books        Desktop   documentation  drafts  mss    photos   stuff  svn
books_tests  Desktop1  downloads      images  notes  scripts  svgs
Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:
package org.jboss.book.jca.ex1;

import javax.naming.InitialContext;

public class ExClient
{
   public static void main(String args[]) 
       throws Exception
   {
      InitialContext iniCtx = new InitialContext();
      Object         ref    = iniCtx.lookup("EchoBean");
      EchoHome       home   = (EchoHome) ref;
      Echo           echo   = home.create();

      System.out.println("Created Echo");

      System.out.println("Echo.echo('Hello') = " + echo.echo("Hello"));
   }
}

1.3. Notes and Warnings

Finally, we use three visual styles to draw attention to information that might otherwise be overlooked.

Note

Notes are tips, shortcuts or alternative approaches to the task at hand. Ignoring a note should have no negative consequences, but you might miss out on a trick that makes your life easier.

Important

Important boxes detail things that are easily missed: configuration changes that only apply to the current session, or services that need restarting before an update will apply. Ignoring a box labeled 'Important' will not cause data loss but may cause irritation and frustration.

Warning

Warnings should not be ignored. Ignoring warnings will most likely cause data loss.

2. Getting Help and Giving Feedback

2.1. Do You Need Help?

If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal at http://access.redhat.com. Through the customer portal, you can:
  • search or browse through a knowledgebase of technical support articles about Red Hat products.
  • submit a support case to Red Hat Global Support Services (GSS).
  • access other product documentation.
Red Hat also hosts a large number of electronic mailing lists for discussion of Red Hat software and technology. You can find a list of publicly available mailing lists at https://www.redhat.com/mailman/listinfo. Click on the name of any mailing list to subscribe to that list or to access the list archives.

2.2. We Need Feedback!

If you find a typographical error in this manual, or if you have thought of a way to make this manual better, we would love to hear from you! Please submit a report in Bugzilla: http://bugzilla.redhat.com/ against the product Red Hat Enterprise MRG.
When submitting a bug report, be sure to mention the manual's identifier: Management_Console_Installation_Guide
If you have a suggestion for improving the documentation, try to be as specific as possible when describing it. If you have found an error, please include the section number and some of the surrounding text so we can find it easily.

Chapter 1. Scale Requirements

The MRG Management Console is designed to scale for use with different sized deployments of MRG Messaging and MRG Grid. The following configurations indicate typical size and load characteristics for small, medium and large deployments.
Small
The default software configuration of the MRG Management Console is appropriate for small scale deployments. An example small scale deployment is:
  • 64 nodes (each with four dual-core CPUs)
  • 5 concurrent console users, accessing the console at 1 page view per second (peak)
  • 10 job submitters, submitting 1 job per second concurrently (peak)
  • 10 job completions per minute (sustained), 3 years of job history (1 million jobs)
  • Ability to sustain peak rates for at least 5 minutes
Medium
To configure the MRG Management Console for use with medium scale deployments, see Chapter 5, Configuring the MRG Management Console for Medium Scale Deployment . An example medium scale deployment is:
  • 500 nodes (each with four dual-core CPUs)
  • 20 concurrent console users, accessing the console at 1 page view per second (peak)
  • 20 job submitters, submitting 2 jobs per second concurrently (peak)
  • 100 job completions per minute (sustained), 3 years of job history (10 million jobs)
  • Ability to sustain peak rates for at least 5 minutes
Large
A large scale console is defined as a console supporting 5000 Execute Nodes and 100 concurrent users accessing the console at 1 page view per second during peak periods. There are several considerations when implementing a large scale console. Red Hat, Inc. recommends that customers configure large scale MRG Management Console installations in cooperation with a Solutions Architect through Red Hat consulting.

Chapter 2. Installing the MRG Management Console

To install the MRG Management Console, your system must be registered with Red Hat Network. Table 2.1 lists the Red Hat Enterprise MRG channels available on Red Hat Network for the MRG Management Console.
Table 2.1. Red Hat Enterprise MRG Channels Available on Red Hat Network
Channel Name              Operating System    Architecture
Red Hat MRG Management    RHEL-5 Server       32-bit, 64-bit
Red Hat MRG Management    RHEL-6 Server       32-bit, 64-bit

Hardware Requirements
It is recommended that your system meets the following minimum hardware requirements before you attempt to install the MRG Management Console:
  • Intel Pentium IV or AMD Athlon class machine
  • 512MB RAM
  • 10 GB disk space
  • A network interface card

Important

Before you install Red Hat Enterprise MRG, check that your hardware and platform are supported. A complete list is available on the Red Hat Enterprise MRG Supported Hardware Page.

2.1. Configuring the MRG Messaging Broker for use with the MRG Management Console and MRG Grid

To use the MRG Messaging broker with the MRG Management Console and MRG Grid, the broker must first be installed and configured. The MRG Messaging components and the MRG Management Console can be installed on the same machine or on different machines that share a network.
Checking for Installation of the MRG Messaging Broker
  1. A full installation of the MRG Messaging components is recommended, but at a minimum the MRG Management Console requires only the broker. Check whether the broker RPM package is installed using the following command:
    $ rpm -qa qpid-cpp-server
    
  2. If the broker package is installed, it will be displayed, and you can continue with configuration. If it is not displayed, ensure that the package is installed before continuing. For more information on installing the MRG Messaging broker, see the MRG Messaging Installation Guide.
Configuring the MRG Messaging Broker for Authentication of the MRG Management Console and MRG Grid

Important

The MRG Management Console must connect to the MRG Messaging broker using password authentication for full operability. The MRG Management Console Installation Guide assumes that MRG Messaging has already been configured to support password authentication using the Cyrus SASL library. For information on configuring security in MRG Messaging see the MRG Messaging Installation Guide and MRG Messaging User Guide.
  1. Authentication credentials for the MRG Management Console must be created on the host running the MRG Messaging broker.
    On the host, run the saslpasswd2 command as the qpidd user:
    $ sudo -u qpidd /usr/sbin/saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID cumin
    
    When prompted, create a password.
    This command will create a cumin user in the SASL database. These credentials will be used by the Management Console to authenticate to the broker. The username and password will be needed later during installation and configuration of the MRG Management Console.
    For more information on the saslpasswd2 command, see the MRG Messaging Installation Guide. A sketch for verifying the contents of the SASL database follows this procedure.

    Note

    The qpidd user should be able to read /var/lib/qpidd/qpidd.sasldb. If the ownership is wrong, /var/log/messages will display a permission denied error.
  2. MRG Grid authenticates to the MRG Messaging broker using the anonymous mechanism by default. If anonymous authentication is permitted by the broker, this step can be skipped. If the broker has been configured to disallow anonymous authentication, credentials for MRG Grid nodes must also be created.
    A user named grid is created below. This username is used by every MRG Grid node. On the host, run the saslpasswd2 command as the qpidd user:
    $ sudo -u qpidd /usr/sbin/saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID grid
    
    When prompted, create a password.
    This command creates a grid user in the SASL database. These credentials will be used by MRG Grid nodes to authenticate to the broker. Any valid username may be used, and multiple users may be created for different MRG Grid nodes. The username and password will be needed later when configuring MRG Grid for use with the MRG Management Console.

    Note

    The qpidd user should be able to read /var/lib/qpidd/qpidd.sasldb. If the ownership is wrong, /var/log/messages will display a permission denied error.
  3. The Sesame package provides content for the MRG Management Console's Inventory page; installation and configuration are described below in Installing Sesame. Like MRG Grid, Sesame uses anonymous authentication by default.
    If the broker has been configured to disallow anonymous authentication, credentials must be created for use by all nodes running Sesame. For example:
    # /usr/sbin/saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID sesame
    
    When prompted, create a password.
    These credentials will be used during the configuration of Sesame below.
  4. By default, the MRG Messaging broker runs with authentication checks turned on.

    Warning

    It is possible to run the broker without authentication but this is discouraged for security reasons.
    Passwords will be sent to the MRG Messaging broker from the MRG Management Console in plain text. For greater security, SSL encryption can be used for communication between the MRG Management Console and the broker. For more information on setting up SSL encryption, see the MRG Messaging User Guide.
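To verify which users exist in the SASL database, its contents can be listed with the sasldblistusers2 tool from the cyrus-sasl packages. This is an optional check; the output below is indicative of a database containing the cumin and grid users:
$ sudo -u qpidd /usr/sbin/sasldblistusers2 -f /var/lib/qpidd/qpidd.sasldb
cumin@QPID: userPassword
grid@QPID: userPassword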
Adding MRG Management Console and MRG Grid credentials to optional broker ACLs
The MRG Messaging broker can be configured to use an access control list (ACL). If an ACL is present for the MRG Messaging broker, ensure that the cumin user and any MRG Grid users are added. Note that if MRG Grid is using anonymous authentication, the anonymous@QPID user must be added to the ACL. Information on setting up ACLs can be found in the MRG Messaging User Guide.
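For illustration only, ACL rules along the following lines would admit these users; the file location and the final deny rule are assumptions of a restrictive site policy, and the exact rules should follow the MRG Messaging User Guide:
# Illustrative entries for the broker ACL file
acl allow cumin@QPID all all
acl allow grid@QPID all all
# Needed only if MRG Grid nodes authenticate anonymously:
acl allow anonymous@QPID all all
acl deny all all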
Changing the Update Interval
  1. By default, the MRG Messaging broker sends updated information to the MRG Management Console every ten seconds. Increase the interval to receive fewer updates and reduce load on the broker and the network; decrease it to receive more frequent updates.
    To change the update interval, open the /etc/qpidd.conf file in your preferred text editor and add the mgmt-pub-interval configuration option on the broker:
    mgmt-pub-interval=30
    
    Enter the required update interval in seconds.
Installing Sesame
Sesame is a management package that allows a system on which it is installed to appear on the MRG Management Console's Inventory page. It should be installed on every system that is part of a MRG Grid deployment.
  1. Sesame is part of the MRG Messaging package group and should be automatically included in any full MRG Messaging installation.
    Check to see if the Sesame package is installed using the following command:
    $ rpm -q sesame
    
  2. If the Sesame package is not installed, use yum to install it before continuing.
    # yum install sesame
    
  3. Open the /etc/sesame/sesame.conf file in your preferred text editor and locate the host parameter. This parameter must be set to the hostname of the machine running the MRG Messaging broker:
    host=example.com
    
    The port parameter can also be adjusted, although the default settings should be adequate for most configurations.
  4. If authentication credentials were created for Sesame in Configuring the MRG Messaging Broker for Authentication of the MRG Management Console and MRG Grid, set the uid and pwd fields in the /etc/sesame/sesame.conf file to those credentials (a combined example follows this procedure).
  5. If Sesame is not enabled for the default run levels, enable it with the chkconfig command:
    # chkconfig sesame on
    

2.2. Installing and Configuring the MRG Management Console

The MRG Management Console can be installed on any machine that has a network connection to the MRG Messaging broker.
Install the Console
  1. Install the MRG Management Console group by switching to the root user and running the yum command:
    # yum groupinstall "MRG Management"
    
  2. You can check the installation location and that the components have been installed successfully by using the rpm -ql command with the cumin package name. For example:
    # rpm -ql cumin
    /etc/cumin/cumin.conf
    /etc/cumin/cumin.crt
    /etc/cumin/cumin.key
    /etc/rc.d/init.d/cumin
    /usr/bin/cumin
    /usr/bin/cumin-admin
    ...[output truncated]...
    

Note

If you find that yum is not installing all the dependencies you require, make sure that you have registered your system with Red Hat Network.
Ownership and Permissions of Configuration Files
If you are upgrading the MRG Management Console from an earlier installation and the /etc/cumin/cumin.conf file was modified after the original installation, the update will not replace this file. It is important that the owner and permission settings on this file are correct.
  1. Check the owner and permissions using the following command:
    $ ls -l /etc/cumin/
    
  2. The output should look similar to this example, which shows the owner of the file as cumin with the following permissions:
    -rw------- 1 cumin cumin  454 Oct  1 15:51 cumin.conf
    -rw------- 1 cumin cumin 2372 Feb 26  2008 cumin.crt
    -rw------- 1 cumin cumin 2372 Feb 26  2008 cumin.key
    
  3. If the owner or the permissions do not match, modify them using the chown and chmod commands:
    # chown cumin /etc/cumin/*
    # chmod 600 /etc/cumin/*
    
Setting the Network Interface
The MRG Management Console is a web-based tool. You can use any internet browser to access the tool, whether it is running on the local host or on a remote machine.
The web console is bound to the localhost network interface by default. This setting allows only local connections. To make the MRG Management Console accessible to other machines on the network, specify the IP address of another network interface on the host in the configuration file.
  1. Specify the IP address by opening the /etc/cumin/cumin.conf file and locating the [web] section.
    On installation, the [web] section in the configuration file will have the following lines commented out. Remove the # symbol and edit each line to bind the web console to a different network interface:
    [web] 
    host: 192.168.0.20 
    port: 1234
    
  2. Using 0.0.0.0 as the IP address for this configuration parameter will make the web console bind to all local network interfaces that have been defined.
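    For example, the following [web] section accepts connections on all interfaces using the default port:
    [web]
    host: 0.0.0.0
    port: 45672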
Setting the Broker Address and Authentication
The default configuration settings connect the MRG Management Console, without authentication, to a MRG Messaging broker running on the same machine. You will need to add authentication information and, optionally, modify the broker host and port.
  1. The authentication information will be stored in plain text in the /etc/cumin/cumin.conf file. However, as long as the owner and permissions on the file are set as described in Ownership and Permissions of Configuration Files, this information will be secure, provided users do not have root access. It is important to make sure the owner and permissions are correctly set.
  2. Open the /etc/cumin/cumin.conf file in your preferred text editor and locate the broker address.
    The format of a broker address is:
    <username>/<password>@<target-host>[:<tcp-port>]
    
    In this syntax, username, password, and target-host are required. The tcp-port parameter is optional and will default to 5672 if not specified.
  3. The username value in this case is cumin, the user that was added to the SASL configuration for the MRG Messaging broker in Configuring the MRG Messaging Broker for Authentication of the MRG Management Console and MRG Grid.
    The password will be the password that you supplied when prompted by the saslpasswd2 command. For example, if you set the password for cumin to oregano and you want to connect to a broker on the local host, you would set the brokers field as follows:
    [common] 
    brokers: cumin/oregano@localhost:5672
    
  4. The sasl-mech-list parameter specifies which mechanisms the MRG Management Console may use to authenticate to a broker. The default setting allows any mechanism available in the local configuration; for a default configuration these are anonymous and plain, as defined in the Cyrus SASL documentation. It is recommended, but not required, that the anonymous mechanism be disallowed so that the MRG Management Console always authenticates with user and password information; doing so guarantees that all features of the console are available. To disallow anonymous authentication, set sasl-mech-list to a space-separated list containing any other supported mechanisms. For example, the following setting allows only plain:
    [common]
    sasl-mech-list: PLAIN
    
  5. The brokers setting can be changed to connect to a broker at a different address. For example, using the user cumin and the password oregano as above to connect to a broker at alpha.example.com:
    [common] 
    brokers: cumin/oregano@alpha.example.com
    
    Note, this setting will implicitly use the default port of 5672.
  6. To connect to more than one broker, specify multiple addresses using a comma-separated list on a single line:
    [common] 
    brokers: cumin/oregano@alpha.example.com, cumin/thyme@beta.example.com:5671
    
Setting the MRG Management Console Persona
The default installation prepares the MRG Management Console interface for use with both MRG Grid and MRG Messaging. It is possible to streamline the interface for use with one or the other by selecting an alternate persona. To do this, edit the /etc/cumin/cumin.conf file and change the persona value in the [web] section from default to either messaging or grid. For example:
[web]
persona: grid

Chapter 3. Start Console

3.1. First Run

Before you run the MRG Management Console for the first time, you will need to install the Cumin database.
  1. Install the Cumin database:
    $ cumin-database install
    
    This command will produce a warning that it is about to modify the existing configuration. Enter yes to continue with the installation.
  2. Add a new user:
    $ cumin-admin add-user <username>
    
    This will add a new user with the given username and then prompt you for a password; prompting ensures the password is not retained in the shell history. This is the user that is used to log in to the web interface.
  3. Switch to the root user and use the /sbin/service command to start the MRG Messaging broker, Sesame, and the MRG Management Console.
    Start the MRG Messaging broker:
    # /sbin/service qpidd start 
    Starting Qpid AMQP daemon:                    [  OK   ]
    
    Start Sesame:
    # /sbin/service sesame start 
    Starting Sesame daemon:                    [  OK   ]
    
    Start the MRG Management Console
    # /sbin/service cumin start 
    Starting Cumin daemon:                    [  OK   ]
    
    The /sbin/service command can be used to stop, start, and restart these applications, as well as check on their status. After a configuration option has been changed, use the /sbin/service command to restart the running application:
    # /sbin/service cumin status 
    cumin (pid PID) is running... 
    
    # /sbin/service cumin restart 
    Stopping Cumin daemon:                    [  OK   ] 
    Starting Cumin daemon:                    [  OK   ] 
    
    # /sbin/service cumin stop 
    Stopping Cumin daemon:                    [  OK   ]
    
  4. Open your internet browser and navigate to the MRG Management Console page. In the default configuration, the web address for Cumin is http://localhost:45672/. To use the console from the network, TCP port 45672 must be allowed for incoming traffic on the MRG Management Console host firewall (one way to open the port with iptables is sketched below).
    Figure: The MRG Management Console main window
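    On hosts using the default iptables firewall, the port can be opened with commands along these lines; this is a sketch only, and persistent firewall rules should be managed according to your site policy:
    # iptables -I INPUT -p tcp --dport 45672 -j ACCEPT
    # /sbin/service iptables save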

3.2. Logging

The MRG Management Console keeps log files in the /var/log/cumin directory. This directory will contain log files for the master script and each cumin-web or cumin-data process that is started as part of the cumin service.
Three log files are kept for each process, with the extensions .log, .stderr and .stdout. The .log file contains log entries from the running application. The .stderr and .stdout files contain redirected terminal output; normally they are empty, but they may contain error information. The master script makes an entry in the master.log file each time it starts or restarts another cumin process. If /sbin/service reports [FAILED] when cumin is started, or if cumin does not seem to be running as expected, check these files for information.
A maximum log file size is enforced, and logs will be rolled over when they reach the maximum size. The maximum log file size and the number of rolled-over log files to archive can be set in the /etc/cumin/cumin.conf file with the log-max-mb and log-max-archives parameters.
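For example, the following illustrative values cap each log file at 10 MB and keep 5 rolled-over archives; which section of /etc/cumin/cumin.conf they belong in depends on your configuration:
log-max-mb: 10
log-max-archives: 5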

Chapter 4. Using the MRG Management Console

This chapter contains information on getting started with the MRG Management Console and Red Hat Enterprise MRG components.

4.1. Using the MRG Management Console with MRG Grid

To use the MRG Management Console to manage a MRG Grid installation, some configuration must be performed. The Condor QMF plugins allow the condor daemons to connect to a MRG Messaging broker using QMF. Each node in the MRG Grid pool then needs its configuration modified.

Note

MRG Grid can also be configured remotely using the remote configuration feature. For more information about the remote configuration feature and how to use it, see the MRG Grid User Guide.
Connecting the MRG Management Console with MRG Grid
  1. Install MRG Grid using the procedures described in the MRG Grid Installation Guide.
  2. Install the QMF plugins on each node in the condor pool, so that the MRG Management Console can communicate with them. The condor-qmf package can be installed using the yum command:
    # yum install condor-qmf
    
  3. Create a new file in the /etc/condor/config.d/ directory called 40QMF.config:
    # cd /etc/condor/config.d/
    # touch 40QMF.config
    
  4. To set the broker address on all nodes that are not running the MRG Messaging broker locally, add the following line to the 40QMF.config file, specifying the hostname of the machine running the broker:
    QMF_BROKER_HOST = '<hostname>'
    
  5. All MRG Grid nodes will attempt to use the anonymous authentication mechanism unless specific parameters are set. Authentication credentials for MRG Grid nodes were optionally created in Chapter 2. To use password authentication (the plain mechanism), set the following parameters in the 40QMF.config file on all nodes according to those credentials:
    QMF_BROKER_AUTH_MECH = PLAIN
    QMF_BROKER_USERNAME = grid
    QMF_BROKER_PASSWORD_FILE = '<path>'
    
    The last parameter contains the path of a file containing the password for the grid user in plain text; this is the password supplied when the grid credentials were created. The security of the password file is the responsibility of system administrators (a sketch for creating such a file follows this procedure).
  6. To be able to edit fair-share in the MRG Management Console, edit the 40QMF.config file on all nodes running the condor_negotiator to add the following line:
    ENABLE_RUNTIME_CONFIG = TRUE
    
    This line must be present to enable Cumin runtime configuration of Limit values.
  7. The sampling frequency of some graphs in the MRG Grid overview screens depends on how frequently the condor collector sends updates. The default rate is every fifteen minutes (900 seconds). This can be changed by adjusting the COLLECTOR_UPDATE_INTERVAL parameter.
    Do this by editing the new 40QMF.config file on the node running the condor_collector to add the following line, with the desired value in seconds:
    COLLECTOR_UPDATE_INTERVAL = 60
    
  8. Restart the condor service to pick up the changes (this command will also start the condor service if it is not already running):
    # /sbin/service condor restart
    
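As referenced in step 5, a password file for the grid user might be created along these lines; the path /etc/condor/grid_password is a placeholder chosen for illustration, and ownership and permissions must allow the condor daemons to read the file. Note that typing a password on the command line records it in the shell history:
# echo -n 'grid-user-password' > /etc/condor/grid_password
# chmod 600 /etc/condor/grid_password
The QMF_BROKER_PASSWORD_FILE parameter is then set to that path.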

4.1.1. Job Server Configuration

A Job Server must be configured in the MRG Grid pool for Cumin to show job submissions and details. The Job Server can be configured in one of two ways:
  1. As a feature of the Schedd plugin that provides job data for jobs in the Schedd job queue log.
    Add the following parameter to the Schedd plug-in configuration file:
    QMF_PUBLISH_SUBMISSIONS = True
    
  2. As a dedicated daemon process with the same abilities as option 1, plus the ability to provide data for jobs that have been moved to history files. Add the following settings to the Schedd plug-in configuration file:
    QMF_PUBLISH_SUBMISSIONS = False
    HISTORY = $(SPOOL)/history
    JOB_SERVER = $(SBIN)/condor_job_server
    JOB_SERVER_ARGS = -f
    JOB_SERVER.JOB_SERVER_LOG = $(LOG)/JobServerLog
    JOB_SERVER.JOB_SERVER_ADDRESS_FILE = $(LOG)/.job_server_address
     JOB_SERVER.SCHEDD_NAME = '<name assigned to the scheduler>'
    DAEMON_LIST = $(DAEMON_LIST) JOB_SERVER
    DC_DAEMON_LIST = + JOB_SERVER
    
    Optionally you can add or modify the following settings:
    HISTORY_INTERVAL = 60 
    JOB_SERVER.JOB_SERVER_DEBUG = D_FULLDEBUG
    
    The default value for HISTORY_INTERVAL is 120 seconds. Setting JOB_SERVER.JOB_SERVER_DEBUG to D_FULLDEBUG enables detailed logging.
  3. A feature named JobServer is predefined in the configuration store for use with remote configuration tools. This feature implements option 2 above.

Chapter 5. Configuring the MRG Management Console for Medium Scale Deployment

Configuration considerations for deployments change as scale increases. This chapter describes how to configure the MRG Management Console installation for medium scale deployments. A medium scale deployment is described in Chapter 1, Scale Requirements.

5.1. Running Multiple MRG Management Console Web Servers

In medium scale environments, it may be necessary to run multiple MRG Management Console web servers as the total number of page views per second increases. To ensure optimal performance, it is recommended that a single web server is used by no more than 20 to 30 simultaneous users. This section describes how to configure the MRG Management Console installation to run multiple web servers.
  1. Creating Additional Sections in /etc/cumin/cumin.conf.
    To add web servers, a new configuration section must be added to /etc/cumin/cumin.conf for each additional server. These sections have the same structure and default values as the standard [web] section, with the exception of the log-file parameter: by default, each new server logs to the file /var/log/cumin/section_name.log.
    Each new section must specify a unique value for port, as each server binds to its own port. Adding the following lines to /etc/cumin/cumin.conf adds three new web servers, web1, web2 and web3, to the configuration, using default values for each server except port. The default port for the web section is 45672.
    [web1]
    port: 45674
    
    [web2]
    port: 45675
    
    [web3]
    port: 45676
    
    The port values used above are chosen arbitrarily.
    The names of the sections created above must be added to the webs parameter in the [master] section in order for the new web servers to run.
    [master]
    webs: web, web1, web2, web3
    
  2. Checking the Configuration.
    After making the changes above, restart Cumin. The /var/log/cumin/master.log file should contain entries for the new web servers.
    # /sbin/service cumin restart
    Stopping cumin:                                            [  OK  ] 
    Starting cumin:                                            [  OK  ] 
    
    # tail /var/log/cumin/master.log
    ... 
    20861 2011-04-01 12:09:45,560 INFO Starting: cumin-web --section=web --daemon 
    20861 2011-04-01 12:09:45,588 INFO Starting: cumin-web --section=web1 --daemon 
    20861 2011-04-01 12:09:45,602 INFO Starting: cumin-web --section=web2 --daemon 
     20861 2011-04-01 12:09:45,609 INFO Starting: cumin-web --section=web3 --daemon
    ...
    
  3. Accessing different servers.
    To visit a particular server, navigate using the appropriate port value. For example, on the machine where the MRG Management Console is installed, open an internet browser and navigate to http://localhost:45675/. This visits the [web2] server as configured above.
  4. Troubleshooting.
    Make sure that the section names listed in the webs parameter of the [master] section are spelled correctly. Section naming errors can be identified by searching for NoSectionError in /var/log/cumin/*.stderr.
    If Cumin is running but cannot be accessed on a particular port as expected, make sure the port values specified in /etc/cumin/cumin.conf for each section are correct and that the ports are not used by any other application on the system.
    Whenever changes are made to /etc/cumin/cumin.conf the service must be restarted for the changes to take effect.
  5. A note about load balancing and proxies.
    The above instructions do not cover setting up a web server proxy; users must select a port manually. However, it may be desirable in a particular installation to set up a proxy which handles load balancing automatically and allows users to visit a single URL rather than specific ports.
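    As a sketch only, and assuming Apache httpd with mod_proxy_balancer (proxy configuration is not covered by this guide), a balancer across the servers configured above might look like the following; users would then visit the proxy's address instead of selecting a port manually:
    <Proxy balancer://cumin>
        BalancerMember http://localhost:45672
        BalancerMember http://localhost:45674
        BalancerMember http://localhost:45675
        BalancerMember http://localhost:45676
    </Proxy>
    ProxyPass / balancer://cumin/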

5.2. Limiting Objects Processed by the MRG Management Console

In the default configuration, the MRG Management Console will process all objects available from the MRG Messaging broker. If the persona value for all cumin-web instances at a site has been specialized for MRG Messaging or MRG Grid, the types of objects processed by cumin may be limited (refer to Setting the MRG Management Console Persona in Section 2.2, “Installing and Configuring the MRG Management Console” for specialization of web servers). This will reduce the load on the MRG Messaging broker and on the host running the Cumin service.
For convenience, the standard /etc/cumin/cumin.conf file already contains several alternative settings for the datas parameter in the [master] section, with explanatory comments. Select one of these settings based on the persona value being used.

5.3. Configuring the Remote Configuration Feature for a Separate MRG Messaging Broker

If the remote configuration feature is being used to configure a MRG Grid deployment, the configuration tools can be set up to use a different MRG Messaging broker. Doing this will decrease message traffic when the configuration tools are in use.
Additional brokers can be run on the same or different hosts. For instructions on running multiple brokers on a single host, see the MRG Messaging Installation Guide.

Note

This section contains only essential information on remote configuration. For further information about the remote configuration feature and how to use it, see the Remote Configuration chapter in the MRG Grid User Guide.
Remotely Configuring each Condor Node
  1. Ensure the condor-wallaby-client package is installed and up to date. Use the yum command as the root user:
    # yum install condor-wallaby-client
    
    The condor-wallaby-client package needs to be installed on all MRG Grid nodes that are to be managed.
  2. Add QMF broker information for the remote configuration feature. Note that condor_configure_store is part of the condor-wallaby-tools package and should be run from a remote configuration administration machine:
    condor_configure_store -a -p \
    CONFIGD.QMF_BROKER_HOST,CONFIGD.QMF_BROKER_PORT,CONFIGD.QMF_BROKER_AUTH_MECHANISM
    
    Change the parameters as follows:
    name: CONFIGD.QMF_BROKER_HOST
    type: string
    default: ''
    description: 'The hostname where a QMF broker is running that communicates with the configuration store'
    conflicts: []
    depends: []
    level: 0
    must_change: true
    restart: false
    
    name: CONFIGD.QMF_BROKER_PORT
    type: string
    default: ''
    description: 'The port on CONFIGD.QMF_BROKER_HOST that the QMF broker is listening on'
    conflicts: []
    depends: []
    level: 0
    must_change: false
    restart: false
    
    name: CONFIGD.QMF_BROKER_AUTH_MECHANISM
    type: string
    default: ''
    description: 'The authentication mechanisms to use when communicating with the QMF broker CONFIGD.QMF_BROKER_HOST'
    conflicts: []
    depends: []
    level: 0
    must_change: false
    restart: false
    
  3. Add the parameters to the Master feature by editing it with the condor_configure_store command:
    condor_configure_store -e -f Master
    
    The condor_configure_store command will invoke the default text editor so that the configuration file can be edited. For more information about editing metadata, see the Remote Configuration chapter in the MRG Grid User Guide.
    Add to the map of parameters associated with the Master feature:
    CONFIGD.QMF_BROKER_HOST: '<broker ip/host for use with remote configuration>'
    
    Optionally, add the following parameters:
    CONFIGD.QMF_BROKER_PORT: '<port>'
    CONFIGD.QMF_BROKER_AUTH_MECHANISM: '<methods>'
    
    When the changes are saved, the tool will prompt you as follows:
    Use the default value for parameter "COLLECTOR_NAME" in feature "Master"? [Y/n] Y
    Use the default value for parameter "CONDOR_HOST" in feature "Master"? [Y/n] Y
    
    Answer Y to both questions to use the default values.
  4. Activate the changes using the condor_configure_pool command. Note that condor_configure_pool is part of the condor-wallaby-tools package and should be run from a remote configuration administration machine:
    condor_configure_pool --activate
    

5.4. Increasing the Default QMF Update Interval for MRG Grid Components

The default QMF update interval for MRG Grid components is 10 seconds. This interval affects how frequently MRG Grid notifies the MRG Management Console of changes in status. Increasing this interval for certain components can noticeably decrease load on the MRG Management Console. Edit the /etc/condor/config.d/40QMF.config file created in Chapter 4, Using the MRG Management Console, to add the following recommended setting for a medium scale deployment:
STARTD.QMF_UPDATE_INTERVAL = 30

Important

The NEGOTIATOR.QMF_UPDATE_INTERVAL should be less than or equal to the NEGOTIATOR_INTERVAL (which defaults to 60 seconds). If either of these intervals is modified, check that this relationship still holds.
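For example, the following illustrative pairing in 40QMF.config keeps the two intervals consistent:
NEGOTIATOR_INTERVAL = 60
NEGOTIATOR.QMF_UPDATE_INTERVAL = 60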

Chapter 6. Frequently Asked Questions

Q: If I uninstall, reinstall or update the Cumin software will my database be lost?
Q: So what if I want to create a fresh database?
Q: Help! My database is corrupted! What do I do now?
Q: Will I ever be required to recreate my database as part of a software upgrade?
Q: If I have to recreate my database, what will I actually lose?
If I uninstall, reinstall or update the Cumin software will my database be lost?
No, the data in the database will persist. Even an uninstall, reinstall, or update of PostgreSQL should not affect your data. However, you are advised to back up the database before any such operation; more information on backups can be found in the PostgreSQL documentation, and a sketch follows.
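As an illustrative sketch, a backup could be taken with pg_dump; the database and user names below are assumptions, and the PostgreSQL documentation remains the authoritative reference:
$ pg_dump -U cumin cumin > cumin-backup.sql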
So what if I want to create a fresh database?
To discard your data, the database must be destroyed and recreated. Optionally, you may preserve the user account data during this procedure.
To back up your user account data:
$ cumin-admin export-users my_users
Then destroy the old database and create a new one:

Warning

This command will cause you to lose all data previously stored in the database. Use only with extreme caution.
$ cumin-database drop
$ cumin-database create
To restore your user account data:
$ cumin-admin import-users my_users
Help! My database is corrupted! What do I do now?
If the database is completely corrupted, the easiest way to fix the problem is to destroy the old database, and create a new one as described above.
Will I ever be required to recreate my database as part of a software upgrade?
Occasionally, new features in Cumin may require changes to the database schema. If this is the case, the Release Notes will inform you that the database must be recreated for use with the new version of software. If practical, additional instructions or facilities may be included to help with the transition. For example, instructions on preserving the user account data.
If I have to recreate my database, what will I actually lose?
Presently, Cumin stores 24 hours of sample data for calculating statistics, along with user account data and information about agents and objects it discovers through QMF. Cumin dynamically rediscovers agents and objects while it runs, so this type of data is not really lost.
User account data will be lost but may be restored as described above, assuming it has previously been exported with cumin-admin. Sample data from the last 24 hours will be lost, affecting some statistics and charts displayed by Cumin.

Chapter 7. More Information

Reporting a Bug
If you have found a bug in the MRG Management Console, follow these instructions to enter a bug report:
  1. You will need a Bugzilla account. You can create one at Create Bugzilla Account.
  2. Once you have a Bugzilla account, log in and click on Enter A New Bug Report.
  3. When submitting a bug report, identify the product (Red Hat Enterprise MRG), the version (2.0), and whether the bug occurs in the software (component = management) or in the documentation (component = Management_Console_Installation_Guide).
Further Reading
Red Hat Enterprise MRG and MRG Messaging Product Information
Red Hat Enterprise MRG manuals
Red Hat Knowledgebase

Appendix A. Revision History

Revision 1-3    Wed Sep 07 2011    Alison Young
Prepared for publishing
Revision 1-1    Wed Sep 07 2011    Alison Young
BZ#735358 - Update for adding cumin and grid to sasldb
Revision 1-0    Thu Jun 23 2011    Alison Young
Prepared for publishing
Revision 0.1-5    Tue May 31 2011    Alison Young
Rebuilt as some changes missing from previous build.
Revision 0.1-4    Mon May 30 2011    Alison Young
Technical review fixes
BZ#674834 - treatment of data on uninstall/upgrade/reinstall
BZ#705828 - Sesame installation updates
BZ#706182 - configuration parameter settings for Job Server
BZ#706446 - RHEL-6 Server channel missing from table 2.1
Revision 0.1-3    Thu Apr 07 2011    Alison Young
BZ#692227 - setting sasl_mech_list parameter in cumin.conf
BZ#696223 - Changed section 2.1 as the default MRG Messaging set up has changed
Revision 0.1-2    Thu Apr 07 2011    Alison Young
BZ#681283 - Scale Documentation (2.x)
BZ#689785 - Change default QMF update interval, special config for submissions
BZ#690453 - setting the 'persona' value for console specialization
BZ#692983 - subsection on logging to Chapter 3
Revision 0.1-1    Tue Apr 05 2011    Alison Young
BZ#687872 - Need instructions for anonymous@QPID plugin authentication
Added update from v1.3 for BZ#634932 - Runtime Grid config setting
Revision 0.1-0    Tue Feb 22 2011    Alison Young
Fork from 1.3