Edition 1
1801 Varsity Drive
Raleigh, NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701
the MRG Messaging broker (qpidd), the inventory daemon (sesame) and MRG Grid components.
Mono-spaced Bold
To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.

Press Enter to execute the command.

Press Ctrl+Alt+F2 to switch to the first virtual terminal. Press Ctrl+Alt+F1 to return to your X-Windows session.
mono-spaced bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, click the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand).

To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.
Mono-spaced Bold Italic or Proportional Bold Italic
To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com.

The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home.

To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.
Publican is a DocBook publishing system.
mono-spaced roman and presented thus:
books        Desktop   documentation  drafts  mss    photos   stuff  svn
books_tests  Desktop1  downloads      images  notes  scripts  svgs
mono-spaced roman but add syntax highlighting as follows:
package org.jboss.book.jca.ex1;

import javax.naming.InitialContext;

public class ExClient
{
   public static void main(String args[]) throws Exception
   {
      InitialContext iniCtx = new InitialContext();
      Object ref = iniCtx.lookup("EchoBean");
      EchoHome home = (EchoHome) ref;
      Echo echo = home.create();

      System.out.println("Created Echo");
      System.out.println("Echo.echo('Hello') = " + echo.echo("Hello"));
   }
}
Execute Nodes
and 100 concurrent users accessing the console at 1 page view per second during peak periods. There are several considerations when implementing a large scale console. Red Hat, Inc. recommends that customers configure large scale MRG Management Console installations in cooperation with a Solutions Architect through Red Hat consulting.
Channel Name | Operating System | Architecture
---|---|---
Red Hat MRG Management | RHEL-5 Server | 32-bit, 64-bit
Red Hat MRG Management | RHEL-6 Server | 32-bit, 64-bit
$ rpm -qa qpid-cpp-server
Run the saslpasswd2 command as the qpidd user:
$ sudo -u qpidd /usr/sbin/saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID cumin
This command creates credentials for the cumin user in the SASL database. These credentials will be used by the Management Console to authenticate to the broker. The username and password will be needed later during installation and configuration of the MRG Management Console.
For more information on the saslpasswd2 command, see the MRG Messaging Installation Guide.
The SASL database must be owned by the qpidd user: /var/lib/qpidd/qpidd.sasldb. If the ownership is wrong, /var/log/messages will display a permission denied error.
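As a quick way to verify this, the file's owner can be inspected directly; a minimal sketch using the path from the text above (the chown repair is commented out and must be run as root):

```shell
# Check the owner and mode of the SASL database. The owner should be qpidd;
# "permission denied" in /var/log/messages usually means it is not.
stat -c '%U %a' /var/lib/qpidd/qpidd.sasldb 2>/dev/null \
  || echo 'qpidd.sasldb not found on this system'
# To repair ownership, as root:
# chown qpidd:qpidd /var/lib/qpidd/qpidd.sasldb
```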
MRG Grid nodes use the anonymous mechanism by default. If anonymous authentication is permitted by the broker, this step can be skipped. If the broker has been configured to disallow anonymous authentication, credentials must also be created for MRG Grid nodes.
A user named grid is created below. This username is used by every MRG Grid node. On the host, run the saslpasswd2 command as the qpidd user:
$ sudo -u qpidd /usr/sbin/saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID grid
This command creates credentials for the grid user in the SASL database. These credentials will be used by MRG Grid nodes to authenticate to the broker. Any valid username may be used, and multiple users may be created for use by different MRG Grid nodes. The username and password will be needed later during configuration of MRG Grid for use with the MRG Management Console.
The SASL database must be owned by the qpidd user: /var/lib/qpidd/qpidd.sasldb. If the ownership is wrong, /var/log/messages will display a permission denied error.
If the broker has been configured to disallow anonymous authentication, credentials must also be created for all nodes running Sesame. For example:
# /usr/sbin/saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID sesame
If the broker uses an ACL, make sure the cumin user and any MRG Grid users are added. Note that if MRG Grid is using anonymous authentication, the anonymous@qpid user must be added to the ACL. Information on setting up ACLs can be found in the MRG Messaging User Guide.
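As an illustration only (the authoritative syntax is in the MRG Messaging User Guide), ACL entries granting access to the users discussed above might look like the following; the rules and their order are assumptions, not a tested policy:

```
# Hypothetical qpidd ACL entries; adjust to your deployment.
acl allow cumin@QPID all all
acl allow grid@QPID all all
acl allow anonymous@QPID all all
acl deny all all
```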
Open the /etc/qpidd.conf file in your preferred text editor and add the mgmt-pub-interval configuration option on the broker:
mgmt-pub-interval=30
$ rpm -q sesame
If the package is not installed, use yum to install it before continuing:
# yum install sesame
Open the /etc/sesame/sesame.conf file in your preferred text editor and locate the host parameter. This parameter must be set to the hostname of the machine running the MRG Messaging broker:
host=example.com
The port parameter can also be adjusted, although the default settings should be adequate for most configurations.
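For example, the broker port could be made explicit; 5672 is the standard AMQP port and is assumed to be the default here:

```
host=example.com
port=5672
```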
If credentials were created for Sesame, set the uid and pwd fields in the /etc/sesame/sesame.conf file according to those credentials.
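A sketch of what those fields might look like, assuming the sesame user created earlier (the password value is a placeholder):

```
uid=sesame
pwd=example-password
```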
To start Sesame automatically at boot, use the chkconfig command:
# chkconfig sesame on
Install the MRG Management group using the yum command:
# yum groupinstall "MRG Management"
To see the files installed, use the rpm -ql command with the cumin package name. For example:
# rpm -ql cumin
/etc/cumin/cumin.conf
/etc/cumin/cumin.crt
/etc/cumin/cumin.key
/etc/rc.d/init.d/cumin
/usr/bin/cumin
/usr/bin/cumin-admin
...[output truncated]...
If yum is not installing all the dependencies you require, make sure that you have registered your system with Red Hat Network.
If the /etc/cumin/cumin.conf file was modified after the original installation, it will not be replaced by an update. It is important that the owner and permission settings on this file are correct.
$ ls -l /etc/cumin/
The files should be owned by the cumin user, with the following permissions:
-rw------- 1 cumin cumin  454 Oct  1 15:51 cumin.conf
-rw------- 1 cumin cumin 2372 Feb 26  2008 cumin.crt
-rw------- 1 cumin cumin 2372 Feb 26  2008 cumin.key
If the ownership or permissions are incorrect, correct them with the chown and chmod commands:
# chown cumin /etc/cumin/*
# chmod 600 /etc/cumin/*
The web console binds to the localhost network interface by default. This setting allows only local connections to be made. To make the MRG Management Console accessible to other machines on the network, the IP address of another network interface on the host needs to be specified in the configuration file.
Do this by opening the /etc/cumin/cumin.conf file and locating the [web] section.
The [web] section in the configuration file will have the following lines commented out. Remove the # symbol and edit each line to bind the web console to a different network interface:
[web]
host: 192.168.0.20
port: 1234
Using 0.0.0.0 as the IP address for this configuration parameter will make the web console bind to all local network interfaces that have been defined.
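For instance, a [web] section binding to all interfaces would look like this (the port value is illustrative):

```
[web]
host: 0.0.0.0
port: 45672
```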
The broker credentials are stored in plain text in the /etc/cumin/cumin.conf file. However, as long as the owner and permissions on the file are set as described in Ownership and Permissions of Configuration Files, this information will be secure, provided users do not have root access. It is important to make sure the owner and permissions are correctly set.
Open the /etc/cumin/cumin.conf file in your preferred text editor and locate the broker address. It takes the following form:
<username>/<password>@<target-host>[:<tcp-port>]
where username, password, and target-host are required. The tcp-port parameter is optional and defaults to 5672 if not specified.
The username is cumin, the user that was added to the SASL configuration for the MRG Messaging broker in Configuring the MRG Messaging Broker for Authentication of the MRG Management Console and MRG Grid.
The password is the one set with the saslpasswd2 command. For example, if you set the password for cumin to oregano and you want to connect to a broker on the local host, you would set the brokers field as follows:
[common]
brokers: cumin/oregano@localhost:5672
The sasl-mech-list option specifies which mechanisms the MRG Management Console may use to authenticate to a broker. The default setting allows any mechanism that is available in the local configuration. For a default configuration, these are anonymous and plain, as defined in the Cyrus SASL documentation. It is recommended, but not required, that the anonymous mechanism be disallowed to ensure that the MRG Management Console always authenticates with user and password information. Doing so will guarantee that all features of the console are available. To disallow anonymous authentication, set sasl-mech-list to a space-separated list containing any other supported mechanisms. In the default configuration, sasl-mech-list will be set to disallow anonymous as follows:
[common]
sasl-mech-list: PLAIN
The brokers setting can be changed to connect to a broker at a different address. For example, using the user cumin and the password oregano as above to connect to a broker at alpha.example.com:
[common]
brokers: cumin/oregano@alpha.example.com
Multiple brokers can be specified as a comma-separated list:

[common]
brokers: cumin/oregano@alpha.example.com, cumin/thyme@beta.example.com:5671
Open the /etc/cumin/cumin.conf file and change the persona value in the [web] section from default to either messaging or grid. For example:
[web]
persona: grid
$ cumin-database install
Answer yes when prompted to continue with the installation.
$ cumin-admin add-user user

The command will prompt you for a password; this ensures the password is not retained in the shell history. This is the user that is used to log in to the web interface.
Use the /sbin/service command to start the MRG Messaging broker, Sesame, and the MRG Management Console:
# /sbin/service qpidd start Starting Qpid AMQP daemon: [ OK ]
# /sbin/service sesame start Starting Sesame daemon: [ OK ]
# /sbin/service cumin start Starting Cumin daemon: [ OK ]
The /sbin/service command can be used to stop, start, and restart these applications, as well as check on their status. After a configuration option has been changed, use the /sbin/service command to restart the running application:
# /sbin/service cumin status
cumin (pid PID) is running...
# /sbin/service cumin restart
Stopping Cumin daemon:  [ OK ]
Starting Cumin daemon:  [ OK ]
# /sbin/service cumin stop
Stopping Cumin daemon:  [ OK ]
Log files are written to the /var/log/cumin directory. This directory will contain log files for the master script and each cumin-web or cumin-data process that is started as part of the cumin service.
Each process produces three kinds of files: .log, .stderr and .stdout. The .log file contains log entries from the running application. The .stderr and .stdout files contain redirected terminal output. Normally the .stderr and .stdout files would be empty, but they may contain error information. The master script makes an entry in the master.log file each time it starts or restarts another cumin process. If /sbin/service reports [FAILED] when cumin is started, or if cumin does not seem to be running as expected, check these files for information.
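The check described above can be sketched as follows; a temporary directory stands in for /var/log/cumin so the commands are safe to run anywhere:

```shell
# Any non-empty .stderr file is worth reading when cumin reports [FAILED].
logdir=$(mktemp -d)                 # stand-in for /var/log/cumin
printf 'Traceback (most recent call last):\n' > "$logdir/web.stderr"
: > "$logdir/data.stderr"           # empty, as in normal operation
grep -l . "$logdir"/*.stderr        # lists only the non-empty files
rm -r "$logdir"
```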
Log rotation is controlled in the /etc/cumin/cumin.conf file with the log-max-mb and log-max-archives parameters.
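A sketch of what such settings might look like; the parameter names come from the text above, but the values and the section they belong in are assumptions:

```
[web]
log-max-mb: 10
log-max-archives: 5
```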
The condor-qmf package can be installed using the yum command:
# yum install condor-qmf
Create a file called 40QMF.config in the /etc/condor/config.d/ directory:
# cd /etc/condor/config.d/
# touch 40QMF.config
Open the 40QMF.config file and specify the hostname of the machine running the broker:

QMF_BROKER_HOST = '<hostname>'
MRG Grid nodes use the anonymous authentication mechanism unless specific parameters are set. Authentication credentials were optionally created for use by MRG Grid nodes in Chapter 2. To use password authentication (the plain mechanism), set the parameters in the 40QMF.config file on all nodes according to the grid credentials created in Chapter 2.
QMF_BROKER_AUTH_MECH = PLAIN
QMF_BROKER_USERNAME = grid
QMF_BROKER_PASSWORD_FILE = '<path>'
The password file contains the password for the grid user in plain text. This is the password supplied for the grid user when credentials were created. The security of the password file is the responsibility of system administrators.
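One way to handle this responsibility is to restrict the file to its owner; a minimal sketch using a temporary file as a stand-in for the real QMF_BROKER_PASSWORD_FILE path:

```shell
# The file holds the grid user's password in plain text, so make it
# readable only by its owner.
pwfile=$(mktemp)                          # stand-in for the real password file
printf '%s' 'grid-password' > "$pwfile"   # placeholder password
chmod 600 "$pwfile"
stat -c '%a' "$pwfile"                    # prints 600
rm -f "$pwfile"
```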
Edit the 40QMF.config file on all nodes running the condor_negotiator to add the following line:
ENABLE_RUNTIME_CONFIG = TRUE
The update frequency is controlled by the COLLECTOR_UPDATE_INTERVAL parameter.
Edit the 40QMF.config file on the node running the condor_collector to add the following line, with the desired value in seconds:
COLLECTOR_UPDATE_INTERVAL = 60
Restart the condor service to pick up the changes (this command will also start the condor service if it is not already running):
# /sbin/service condor restart
The Job Server can be configured in one of two ways:
As a Schedd plug-in that provides job data for jobs in the Schedd job queue log.
Set the following in the Schedd plug-in configuration file:
QMF_PUBLISH_SUBMISSIONS = True
Alternatively, set the following in the Schedd plug-in configuration file:
QMF_PUBLISH_SUBMISSIONS = False
HISTORY = $(SPOOL)/history
JOB_SERVER = $(SBIN)/condor_job_server
JOB_SERVER_ARGS = -f
JOB_SERVER.JOB_SERVER_LOG = $(LOG)/JobServerLog
JOB_SERVER.JOB_SERVER_ADDRESS_FILE = $(LOG)/.job_server_address
JOB_SERVER.SCHEDD_NAME = name assigned to the scheduler
DAEMON_LIST = $(DAEMON_LIST) JOB_SERVER
DC_DAEMON_LIST = + JOB_SERVER
HISTORY_INTERVAL = 60
JOB_SERVER.JOB_SERVER_DEBUG = D_FULLDEBUG
The default value for HISTORY_INTERVAL is 120 seconds, and the JOB_SERVER.JOB_SERVER_DEBUG setting shown will enable detailed logging.
A feature named JobServer is predefined in the configuration store for use with remote configuration tools. This feature implements option 2 above.
Additional web server instances are configured in /etc/cumin/cumin.conf.
Add a new section to /etc/cumin/cumin.conf for each additional server. These sections have the same structure and default values as the standard [web] section, with the exception of the log-file parameter. By default, each new server will log to a file in /var/log/cumin/section_name.log.
Each section must set a unique value for port, as each server binds to its own port. Adding the following lines to /etc/cumin/cumin.conf will add 3 new web servers to the configuration, web1, web2 and web3, using default values for each server except port. The default port for the web section is 45672.
[web1]
port: 45674

[web2]
port: 45675

[web3]
port: 45676
The port values used above are chosen arbitrarily.
Each new section name must also be added to webs in the [master] section in order for the new web servers to run.
[master]
webs: web, web1, web2, web3
After restarting cumin, the /var/log/cumin/master.log file should contain entries for the new web servers.
# /sbin/service cumin restart
Stopping cumin: [ OK ]
Starting cumin: [ OK ]
# tail /var/log/cumin/master.log
...
20861 2011-04-01 12:09:45,560 INFO Starting: cumin-web --section=web --daemon
20861 2011-04-01 12:09:45,588 INFO Starting: cumin-web --section=web1 --daemon
20861 2011-04-01 12:09:45,602 INFO Starting: cumin-web --section=web2 --daemon
20861 2011-04-01 12:09:45,609 INFO Starting: cumin-web --section=web3 --daemon
...
Each web server can be visited using its port value. For example, on the machine where the MRG Management Console is installed, open an internet browser and navigate to http://localhost:45675/. This visits the [web2] server as configured above.
If a new web server does not run, check that the section names listed in the webs parameter of the [master] section are spelled correctly. Section naming errors can be identified by searching for NoSectionError in /var/log/cumin/*.stderr.
Also check that the port values in /etc/cumin/cumin.conf for each section are correct and that the ports are not used by any other application on the system.
After changes are made to /etc/cumin/cumin.conf, the service must be restarted for the changes to take effect.
If the persona value for all cumin-web instances at a site has been specialized for MRG Messaging or MRG Grid, the types of objects processed by cumin may be limited (refer to Setting the MRG Management Console Persona in Section 2.2, “Installing and Configuring the MRG Management Console” for specialization of web servers). This will reduce the load on the MRG Messaging broker and on the host running the Cumin service.
The /etc/cumin/cumin.conf file already contains several alternative settings for the datas parameter in the [master] section, with explanatory comments. Select one of these settings based on the persona value being used.
Ensure the condor-wallaby-client package is installed and up to date. Use the yum command as the root user:
# yum install condor-wallaby-client
The condor-wallaby-client package needs to be installed on all nodes running MRG Grid that are to be managed.
condor_configure_store is part of the condor-wallaby-tools package and should be run from a remote configuration administration machine:
condor_configure_store -a -p \
CONFIGD.QMF_BROKER_HOST,CONFIGD.QMF_BROKER_PORT,CONFIGD.QMF_BROKER_AUTH_MECHANISM
name: CONFIGD.QMF_BROKER_HOST
type: string
default: ''
description: 'The hostname where a QMF broker is running that communicates with the configuration store'
conflicts: []
depends: []
level: 0
must_change: true
restart: false
name: CONFIGD.QMF_BROKER_PORT
type: string
default: ''
description: 'The port on CONFIGD.QMF_BROKER_HOST that the QMF broker is listening on'
conflicts: []
depends: []
level: 0
must_change: false
restart: false
name: CONFIGD.QMF_BROKER_AUTH_MECHANISM
type: string
default: ''
description: 'The authentication mechanisms to use when communicating with the QMF broker CONFIGD.QMF_BROKER_HOST'
conflicts: []
depends: []
level: 0
must_change: false
restart: false
Add these parameters to the Master feature by editing the Master feature with the condor_configure_store command:
condor_configure_store -e -f Master
The condor_configure_store command will invoke the default text editor so that the configuration file can be edited. For more information about editing metadata, see the Remote Configuration chapter in the MRG Grid User Guide.
Set the following parameters in the Master feature:
CONFIGD.QMF_BROKER_HOST: '<broker ip/host for use with remote configuration>'
CONFIGD.QMF_BROKER_PORT: '<port>'
CONFIGD.QMF_BROKER_AUTH_MECHANISM: '<methods>'
Use the default value for parameter "COLLECTOR_NAME" in feature "Master"? [Y/n] Y
Use the default value for parameter "CONDOR_HOST" in feature "Master"? [Y/n] Y
Answer Y to both questions to use the default values.
Activate the changes with the condor_configure_pool command. Note that condor_configure_pool is part of the condor-wallaby-tools package and should be run from a remote configuration administration machine:
condor_configure_pool --activate
Edit the /etc/condor/config.d/40QMF.config file created in Chapter 4, Using the MRG Management Console, to add the following recommended setting for a medium scale deployment:
STARTD.QMF_UPDATE_INTERVAL = 30
The NEGOTIATOR.QMF_UPDATE_INTERVAL should be less than or equal to the NEGOTIATOR_INTERVAL (which defaults to 60 seconds). If either of these intervals is modified, check that this relationship still holds.
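A configuration fragment keeping the recommended relationship; both values here are illustrative, chosen so the QMF interval stays at or below the negotiation cycle:

```
NEGOTIATOR_INTERVAL = 60
NEGOTIATOR.QMF_UPDATE_INTERVAL = 30
```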
$ cumin-admin export-users my_users
$ cumin-database drop
$ cumin-database create
$ cumin-admin import-users my_users
cumin-admin. Sample data from the last 24 hours will be lost, affecting some statistics and charts displayed by Cumin.
Revision History

Revision 1-3    Wed Sep 07 2011
Revision 1-1    Wed Sep 07 2011
Revision 1-0    Thu Jun 23 2011
Revision 0.1-5  Tue May 31 2011
Revision 0.1-4  Mon May 30 2011
Revision 0.1-3  Thu Apr 07 2011
Revision 0.1-2  Thu Apr 07 2011
Revision 0.1-1  Tue Apr 05 2011
Revision 0.1-0  Tue Feb 22 2011